2018-03-23 12:04:04

by Petr Mladek

Subject: [PATCH 0/8] livepatch: Atomic replace feature

The atomic replace feature allows creating cumulative patches. They
are useful when you maintain many livepatches and want to remove
one that is lower on the stack. In addition, it is very useful when
several patches touch the same function and there are dependencies
between them.

This version is heavily refactored and cleaned up based on feedback from Josh.
There are actually only three functional changes.

It still passes the first draft of the selftest from Joe that can
be found at https://lkml.kernel.org/r/[email protected]


Changes against v10:

+ Bug fixes and functional changes:
+ Handle Nops in klp_ftrace_handler() to avoid an infinite loop [Mirek]
+ Really add dynamically allocated klp_object into the list [Petr]
+ Clear patch->replace when transition finishes [Josh]

+ Refactoring and clean up [Josh]:
+ Replace enum types with bools
+ Avoid using ERR_PTR
+ Remove too paranoid warnings
+ Distinguish registered patches by a flag instead of a list
+ Squash some functions
+ Update comments, documentation, and commit messages
+ Squashed and split patches to do more controversial changes later

Changes against v9:

+ Fixed the check of valid NOPs for already loaded objects,
a regression introduced in v9 [Joe, Mirek]
+ Allow to replace even disabled patches [Evgenii]

Changes against v8:

+ Fixed handling of statically defined struct klp_object
with an empty array of functions [Joe, Mirek]
+ Removed redundant func->new_func assignment for NOPs [Mirek]
+ Improved some wording [Mirek]

Changes against v7:

+ Fixed handling of NOPs for not-yet-loaded modules
+ Made klp_replaced_patches list static [Mirek]
+ Made klp_free_object() public later [Mirek]
+ Fixed several reported typos [Mirek, Joe]
+ Updated documentation according to the feedback [Joe]
+ Added some Acks [Mirek]

Changes against v6:

+ used list_move when disabling replaced patches [Jason]
+ renamed KLP_FUNC_ORIGINAL -> KLP_FUNC_STATIC [Mirek]
+ used klp_is_func_type() in klp_unpatch_object() [Mirek]
+ moved static definition of klp_get_or_add_object() [Mirek]
+ updated comment about synchronization in forced mode [Mirek]
+ added user documentation
+ fixed several typos


Jason Baron (3):
livepatch: Use lists to manage patches, objects and functions
livepatch: Add atomic replace
livepatch: Remove replaced patches from the stack

Petr Mladek (5):
livepatch: Free only structures with initialized kobject
livepatch: Add an extra flag to distinguish registered patches
livepatch: Remove Nop structures when unused
livepatch: Allow to replace even disabled patches
livepatch: Atomic replace and cumulative patches documentation

Documentation/livepatch/cumulative-patches.txt | 105 ++++++++
include/linux/livepatch.h | 33 ++-
kernel/livepatch/core.c | 355 ++++++++++++++++++++++---
kernel/livepatch/core.h | 4 +
kernel/livepatch/patch.c | 39 ++-
kernel/livepatch/patch.h | 1 +
kernel/livepatch/transition.c | 27 ++
7 files changed, 519 insertions(+), 45 deletions(-)
create mode 100644 Documentation/livepatch/cumulative-patches.txt

--
2.13.6



2018-03-23 12:02:35

by Petr Mladek

Subject: [PATCH 8/8] livepatch: Atomic replace and cumulative patches documentation

User documentation for the atomic replace feature. It makes it easier
to maintain livepatches using so-called cumulative patches.

Signed-off-by: Petr Mladek <[email protected]>
---
Documentation/livepatch/cumulative-patches.txt | 105 +++++++++++++++++++++++++
1 file changed, 105 insertions(+)
create mode 100644 Documentation/livepatch/cumulative-patches.txt

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
new file mode 100644
index 000000000000..206b7f98d270
--- /dev/null
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -0,0 +1,105 @@
+===================================
+Atomic Replace & Cumulative Patches
+===================================
+
+There might be dependencies between livepatches. If multiple patches need
+to make different changes to the same function(s), then we need to define
+an order in which the patches will be installed. And function implementations
+from any newer livepatch must be done on top of the older ones.
+
+This might become a maintenance nightmare, especially when someone wants
+to remove a patch that is in the middle of the stack.
+
+An elegant solution comes with the feature called "Atomic Replace". It allows
+creating so-called "Cumulative Patches". They include all wanted changes
+from all older livepatches and completely replace them in one transition.
+
+Usage
+-----
+
+The atomic replace can be enabled by setting the "replace" flag in struct klp_patch,
+for example:
+
+ static struct klp_patch patch = {
+ .mod = THIS_MODULE,
+ .objs = objs,
+ .replace = true,
+ };
+
+Such a patch is added on top of the livepatch stack when registered. It can
+be enabled even when some earlier patches have not been enabled yet.
+
+All processes are then migrated to use the code only from the new patch.
+Once the transition is finished, all older patches are removed from the stack
+of patches. Even the older, not-yet-enabled patches mentioned above. They can
+even be unregistered and the related modules unloaded.
+
+Ftrace handlers are transparently removed from functions that are no
+longer modified by the new cumulative patch.
+
+As a result, the livepatch authors might maintain sources only for one
+cumulative patch. It helps to keep the patch consistent while adding or
+removing various fixes or features.
+
+Users could keep only the last patch installed on the system after
+the transition has finished. It helps to clearly see what code is
+actually in use. Also, the livepatch might then be seen as a "normal"
+module that modifies the kernel behavior. The only difference is that
+it can be updated at runtime without breaking its functionality.
+
+
+Features
+--------
+
+The atomic replace allows:
+
+ + Atomically revert some functions in a previous patch while
+ upgrading other functions.
+
+ + Remove any potential performance impact caused by code redirection
+ for functions that are no longer patched.
+
+ + Decrease user confusion about stacking order and what patches are
+ currently in effect.
+
+
+Limitations:
+------------
+
+ + Replaced patches can no longer be enabled. But if the transition
+ to the cumulative patch was not forced, the kernel modules with
+ the older livepatches can be removed and later added again.
+
+ A good practice is to set the .replace flag in any released livepatch.
+ Then re-adding an older livepatch is equivalent to downgrading
+ to that patch. This is safe as long as the livepatches do _not_ make
+ extra modifications in the (un)patching callbacks or in the module_init()
+ or module_exit() functions, see below.
+
+
+ + Only the (un)patching callbacks from the _new_ cumulative livepatch are
+ executed. Any callbacks from the replaced patches are ignored.
+
+ In other words, the cumulative patch is responsible for doing any actions
+ that are necessary to properly replace any older patch.
+
+ As a result, it might be dangerous to replace newer cumulative patches with
+ older ones. The old livepatches might not provide the necessary callbacks.
+
+ This might be seen as a limitation in some scenarios. But it makes life
+ easier in many others. Only the new cumulative livepatch knows what
+ fixes/features are added/removed and what special actions are necessary
+ for a smooth transition.
+
+ In any case, it would be a nightmare to think about the order of
+ the various callbacks and their interactions if the callbacks from all
+ enabled patches were called.
+
+
+ + There is no special handling of shadow variables. Livepatch authors
+ must define their own rules for passing them from one cumulative
+ patch to another. In particular, they should not blindly remove them
+ in module_exit() functions.
+
+ A good practice might be to remove shadow variables in the post-unpatch
+ callback. It is called only when the livepatch is properly disabled.
--
2.13.6
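
To make the usage above and the callback limitation concrete, a minimal
cumulative livepatch module might look as follows. This is only a sketch
against the API proposed in this series, modeled on
samples/livepatch/livepatch-sample.c; the cleanup suggested in the
post-unpatch callback is an illustrative assumption, not part of the series:

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/livepatch.h>
    #include <linux/seq_file.h>

    /* New implementation of the patched function. */
    static int livepatch_meminfo_proc_show(struct seq_file *m, void *v)
    {
            seq_printf(m, "%s: patched by a cumulative livepatch\n",
                       THIS_MODULE->name);
            return 0;
    }

    /*
     * Called only when the livepatch is properly disabled, not when it
     * is replaced by a newer cumulative patch. Shadow variables could
     * be freed here, for example with klp_shadow_free_all().
     */
    static void patch_post_unpatch(struct klp_object *obj)
    {
            pr_info("cleaning up after unpatching\n");
    }

    static struct klp_func funcs[] = {
            {
                    .old_name = "meminfo_proc_show",
                    .new_func = livepatch_meminfo_proc_show,
            }, { }
    };

    static struct klp_object objs[] = {
            {
                    /* NULL name means vmlinux. */
                    .funcs = funcs,
                    .callbacks = {
                            .post_unpatch = patch_post_unpatch,
                    },
            }, { }
    };

    static struct klp_patch patch = {
            .mod = THIS_MODULE,
            .objs = objs,
            /* Replace all older livepatches in one transition. */
            .replace = true,
    };

    static int livepatch_init(void)
    {
            int ret;

            ret = klp_register_patch(&patch);
            if (ret)
                    return ret;

            ret = klp_enable_patch(&patch);
            if (ret) {
                    WARN_ON(klp_unregister_patch(&patch));
                    return ret;
            }

            return 0;
    }

    static void livepatch_exit(void)
    {
            WARN_ON(klp_unregister_patch(&patch));
    }

    module_init(livepatch_init);
    module_exit(livepatch_exit);
    MODULE_LICENSE("GPL");
    MODULE_INFO(livepatch, "Y");

Upgrading then means loading a newer version of such a module. Once the
transition finishes, the replaced module can be unregistered and unloaded.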


2018-03-23 12:02:52

by Petr Mladek

Subject: [PATCH 7/8] livepatch: Allow to replace even disabled patches

Patches without the replace flag might depend on each other. It makes
sense to enforce the order in which they are enabled and disabled.

The situation is different when a patch replaces all existing ones.
These patches should make life easier for both patch producers and
users. Such a patch should be ready to replace basically any older
patch. It should work well even in situations where the previous
patches were not installed or were disabled for some reason.

The code is almost ready for this. Two changes are needed:

+ Disable only enabled patches in klp_discard_replaced_patches().

+ Stop enforcing the stack order for the patches with the replace flag.
Instead, we need to make sure that they are still usable (not
replaced). This check is already there.

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/livepatch/core.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 0b3be6e14b80..c64371ffc063 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -355,11 +355,13 @@ void klp_discard_replaced_patches(struct klp_patch *new_patch, bool keep_module)
if (old_patch == new_patch)
return;

- klp_unpatch_objects(old_patch);
- old_patch->enabled = false;
+ if (old_patch->enabled) {
+ klp_unpatch_objects(old_patch);
+ old_patch->enabled = false;

- if (!keep_module)
- module_put(old_patch->mod);
+ if (!keep_module)
+ module_put(old_patch->mod);
+ }

/*
* Replaced patches could not get re-enabled to keep
@@ -453,8 +455,13 @@ static int __klp_enable_patch(struct klp_patch *patch)
if (!klp_is_patch_on_stack(patch))
return -EINVAL;

- /* Only the first disabled patch can be enabled. */
- if (patch->list.prev != &klp_patches &&
+ /*
+ * Only the first disabled patch can be enabled. This is not required
+ * for patches with the replace flag. They override even disabled
+ * patches that were registered earlier.
+ */
+ if (!patch->replace &&
+ patch->list.prev != &klp_patches &&
!list_prev_entry(patch, list)->enabled)
return -EBUSY;

--
2.13.6


2018-03-23 12:03:01

by Petr Mladek

Subject: [PATCH 4/8] livepatch: Add an extra flag to distinguish registered patches

The initial implementation of the atomic replace feature keeps the replaced
patches on the stack. But people would like to remove the replaced patches
for various reasons that will be described in the following patch.

This patch is just a small preparation step. We will need to keep
the replaced patches registered even when they are no longer on the stack.
This is because they are typically unregistered by the module's exit function.

Therefore we need another way to detect registered patches. We cannot
use kobj.state_initialized because it is racy. The kobject is destroyed
by an asynchronous call and cannot be synchronized using klp_mutex.

This patch solves the problem by adding a flag into struct klp_patch.
It is manipulated under klp_mutex and therefore it is safe. It is easy
to understand and sufficient in most situations.

The function klp_is_patch_registered() is no longer needed as such.
Instead, it was renamed to klp_is_patch_on_stack() and is used in
__klp_enable_patch() as a new sanity check.

This patch does not change the existing behavior.

Signed-off-by: Petr Mladek <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
Cc: Jason Baron <[email protected]>
---
include/linux/livepatch.h | 2 ++
kernel/livepatch/core.c | 24 ++++++++++++++++++------
2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index f28af280f9e0..d6e6d8176995 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -150,6 +150,7 @@ struct klp_object {
* @list: list node for global list of registered patches
* @kobj: kobject for sysfs resources
* @obj_list: dynamic list of the object entries
+ * @registered: reliable way to check registration status
* @enabled: the patch is enabled (but operation may be incomplete)
* @finish: for waiting till it is safe to remove the patch module
*/
@@ -163,6 +164,7 @@ struct klp_patch {
struct list_head list;
struct kobject kobj;
struct list_head obj_list;
+ bool registered;
bool enabled;
struct completion finish;
};
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 18c400bd9a33..70c67a834e9a 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,6 +45,11 @@
*/
DEFINE_MUTEX(klp_mutex);

+/*
+ * Stack of patches. It defines the order in which the patches can be enabled.
+ * Only patches on this stack might be enabled. New patches are added when
+ * registered. They are removed when they are unregistered.
+ */
static LIST_HEAD(klp_patches);

static struct kobject *klp_root_kobj;
@@ -97,7 +102,7 @@ static void klp_find_object_module(struct klp_object *obj)
mutex_unlock(&module_mutex);
}

-static bool klp_is_patch_registered(struct klp_patch *patch)
+static bool klp_is_patch_on_stack(struct klp_patch *patch)
{
struct klp_patch *mypatch;

@@ -378,7 +383,7 @@ int klp_disable_patch(struct klp_patch *patch)

mutex_lock(&klp_mutex);

- if (!klp_is_patch_registered(patch)) {
+ if (!patch->registered) {
ret = -EINVAL;
goto err;
}
@@ -407,7 +412,11 @@ static int __klp_enable_patch(struct klp_patch *patch)
if (WARN_ON(patch->enabled))
return -EINVAL;

- /* enforce stacking: only the first disabled patch can be enabled */
+ /* Enforce stacking. */
+ if (!klp_is_patch_on_stack(patch))
+ return -EINVAL;
+
+ /* Only the first disabled patch can be enabled. */
if (patch->list.prev != &klp_patches &&
!list_prev_entry(patch, list)->enabled)
return -EBUSY;
@@ -478,7 +487,7 @@ int klp_enable_patch(struct klp_patch *patch)

mutex_lock(&klp_mutex);

- if (!klp_is_patch_registered(patch)) {
+ if (!patch->registered) {
ret = -EINVAL;
goto err;
}
@@ -519,7 +528,7 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,

mutex_lock(&klp_mutex);

- if (!klp_is_patch_registered(patch)) {
+ if (!patch->registered) {
/*
* Module with the patch could either disappear meanwhile or is
* not properly initialized yet.
@@ -528,6 +537,7 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
goto err;
}

+
if (patch->enabled == enabled) {
/* already in requested state */
ret = -EINVAL;
@@ -1004,6 +1014,7 @@ static int klp_init_patch(struct klp_patch *patch)
}

list_add_tail(&patch->list, &klp_patches);
+ patch->registered = true;

mutex_unlock(&klp_mutex);

@@ -1034,7 +1045,7 @@ int klp_unregister_patch(struct klp_patch *patch)

mutex_lock(&klp_mutex);

- if (!klp_is_patch_registered(patch)) {
+ if (!patch->registered) {
ret = -EINVAL;
goto err;
}
@@ -1045,6 +1056,7 @@ int klp_unregister_patch(struct klp_patch *patch)
}

klp_free_patch(patch);
+ patch->registered = false;

mutex_unlock(&klp_mutex);

--
2.13.6


2018-03-23 12:03:45

by Petr Mladek

Subject: [PATCH 5/8] livepatch: Remove replaced patches from the stack

From: Jason Baron <[email protected]>

The stack of patches defines an order in which the patches must be enabled.
It has some drawbacks, for example:

+ People cannot disable earlier patches without disabling the later
ones, even when they are independent.

+ The order is not clearly visible in /sys/kernel/livepatch. Users might
get lost.

+ lsmod does not give a clue which patch is the last one and in use.

+ Lots of unused data and code might be locked in memory.

Some of the above problems can already be solved by using cumulative
patches (replace flag) and reasonable patch names.

This is expected. The atomic replace and cumulative patches are supposed
to help manage and maintain many patches. And they can resolve even
more of the problems mentioned above when the replaced patches get removed
from the stack. Then the unused patches might be unregistered and the
modules unloaded.

Removing the replaced patches will actually help even more. The new patch
will become the first one on the stack:

+ Nop structs will no longer be necessary and might be removed.
This would save memory, restore performance (no ftrace handler),
and allow a clear view of what is really patched.

+ Disabling the patch will cause the original code to be used everywhere.
Therefore the livepatch callbacks need to handle only one scenario.
Note that the situation is already complex enough when the patch
gets enabled. It is currently solved by calling callbacks only from
the new cumulative patch.

Some people actually expected this behavior from the beginning. After all,
a cumulative patch is supposed to "completely" replace an existing one.
It is like when a new version of an application replaces an older one.

This patch does the first step. It removes the replaced patches from
the stack. It is safe. The consistency model ensures that they are
no longer used. In other words, each process works only with
the structures from klp_transition_patch.

The removal is done by a special function. It combines actions done by
__klp_disable_patch() and klp_complete_transition(). But it is a fast
track without all the transition-related stuff.

Signed-off-by: Jason Baron <[email protected]>
[[email protected]: Split and refactored]
Signed-off-by: Petr Mladek <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
---
kernel/livepatch/core.c | 37 +++++++++++++++++++++++++++++++++++++
kernel/livepatch/core.h | 3 +++
kernel/livepatch/transition.c | 3 +++
3 files changed, 43 insertions(+)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 70c67a834e9a..35f4bbff395f 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -332,6 +332,43 @@ static int klp_write_object_relocations(struct module *pmod,
return ret;
}

+/*
+ * This function removes replaced patches from both func_stack
+ * and klp_patches stack.
+ *
+ * We could be pretty aggressive here. It is called in the situation where
+ * these structures are no longer accessible. All functions are redirected
+ * by the klp_transition_patch. They either use the new code or they are in
+ * the original code because of the special nop function patches.
+ *
+ * The only exception is when the transition was forced. In this case,
+ * klp_ftrace_handler() might still see the replaced patch on the stack.
+ * Fortunately, it is carefully designed to work with removed functions
+ * thanks to RCU. We only have to keep the patches on the system. This
+ * is handled by @keep_module parameter.
+ */
+void klp_discard_replaced_patches(struct klp_patch *new_patch, bool keep_module)
+{
+ struct klp_patch *old_patch, *tmp_patch;
+
+ list_for_each_entry_safe(old_patch, tmp_patch, &klp_patches, list) {
+ if (old_patch == new_patch)
+ return;
+
+ klp_unpatch_objects(old_patch);
+ old_patch->enabled = false;
+
+ if (!keep_module)
+ module_put(old_patch->mod);
+
+ /*
+ * Replaced patches could not get re-enabled to keep
+ * the code sane.
+ */
+ list_del_init(&old_patch->list);
+ }
+}
+
static int __klp_disable_patch(struct klp_patch *patch)
{
struct klp_object *obj;
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 48a83d4364cf..2d2b724d56a4 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -6,6 +6,9 @@

extern struct mutex klp_mutex;

+void klp_discard_replaced_patches(struct klp_patch *new_patch,
+ bool keep_module);
+
static inline bool klp_is_object_loaded(struct klp_object *obj)
{
return !obj->name || obj->mod;
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 7c6631e693bc..24daed700ee7 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -87,6 +87,9 @@ static void klp_complete_transition(void)
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");

+ if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+ klp_discard_replaced_patches(klp_transition_patch, klp_forced);
+
if (klp_target_state == KLP_UNPATCHED) {
/*
* All tasks have transitioned to KLP_UNPATCHED so we can now
--
2.13.6


2018-03-23 12:03:57

by Petr Mladek

Subject: [PATCH 1/8] livepatch: Use lists to manage patches, objects and functions

From: Jason Baron <[email protected]>

Currently, klp_patch contains a pointer to a statically allocated array of
struct klp_object and struct klp_object contains a pointer to a statically
allocated array of klp_func. In order to allow dynamic allocation of
objects and functions, link klp_patch, klp_object, and klp_func together
via linked lists. This allows us to more easily allocate new objects and
functions, while having the iterator be a simple linked list walk.

The static structures are added to the lists early. This allows adding
dynamically allocated objects before the klp_init_object() and
klp_init_func() calls. Therefore it reduces further changes
to the code.

This patch does not change the existing behavior.

Signed-off-by: Jason Baron <[email protected]>
[[email protected]: Initialize lists before init calls]
Signed-off-by: Petr Mladek <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
---
include/linux/livepatch.h | 19 +++++++++++++++++--
kernel/livepatch/core.c | 16 ++++++++++++++++
2 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 4754f01c1abb..f0a5a28b1386 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -24,6 +24,7 @@
#include <linux/module.h>
#include <linux/ftrace.h>
#include <linux/completion.h>
+#include <linux/list.h>

#if IS_ENABLED(CONFIG_LIVEPATCH)

@@ -42,6 +43,7 @@
* can be found (optional)
* @old_addr: the address of the function being patched
* @kobj: kobject for sysfs resources
+ * @node: list node for klp_object func_list
* @stack_node: list node for klp_ops func_stack list
* @old_size: size of the old function
* @new_size: size of the new function
@@ -79,6 +81,7 @@ struct klp_func {
/* internal */
unsigned long old_addr;
struct kobject kobj;
+ struct list_head node;
struct list_head stack_node;
unsigned long old_size, new_size;
bool patched;
@@ -117,6 +120,8 @@ struct klp_callbacks {
* @kobj: kobject for sysfs resources
* @mod: kernel module associated with the patched object
* (NULL for vmlinux)
+ * @func_list: dynamic list of the function entries
+ * @node: list node for klp_patch obj_list
* @patched: the object's funcs have been added to the klp_ops list
*/
struct klp_object {
@@ -127,6 +132,8 @@ struct klp_object {

/* internal */
struct kobject kobj;
+ struct list_head func_list;
+ struct list_head node;
struct module *mod;
bool patched;
};
@@ -137,6 +144,7 @@ struct klp_object {
* @objs: object entries for kernel objects to be patched
* @list: list node for global list of registered patches
* @kobj: kobject for sysfs resources
+ * @obj_list: dynamic list of the object entries
* @enabled: the patch is enabled (but operation may be incomplete)
* @finish: for waiting till it is safe to remove the patch module
*/
@@ -148,18 +156,25 @@ struct klp_patch {
/* internal */
struct list_head list;
struct kobject kobj;
+ struct list_head obj_list;
bool enabled;
struct completion finish;
};

-#define klp_for_each_object(patch, obj) \
+#define klp_for_each_object_static(patch, obj) \
for (obj = patch->objs; obj->funcs || obj->name; obj++)

-#define klp_for_each_func(obj, func) \
+#define klp_for_each_object(patch, obj) \
+ list_for_each_entry(obj, &patch->obj_list, node)
+
+#define klp_for_each_func_static(obj, func) \
for (func = obj->funcs; \
func->old_name || func->new_func || func->old_sympos; \
func++)

+#define klp_for_each_func(obj, func) \
+ list_for_each_entry(func, &obj->func_list, node)
+
int klp_register_patch(struct klp_patch *);
int klp_unregister_patch(struct klp_patch *);
int klp_enable_patch(struct klp_patch *);
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 3a4656fb7047..56cb0574c59c 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -49,6 +49,21 @@ static LIST_HEAD(klp_patches);

static struct kobject *klp_root_kobj;

+static void klp_init_lists(struct klp_patch *patch)
+{
+ struct klp_object *obj;
+ struct klp_func *func;
+
+ INIT_LIST_HEAD(&patch->obj_list);
+ klp_for_each_object_static(patch, obj) {
+ list_add(&obj->node, &patch->obj_list);
+
+ INIT_LIST_HEAD(&obj->func_list);
+ klp_for_each_func_static(obj, func)
+ list_add(&func->node, &obj->func_list);
+ }
+}
+
static bool klp_is_module(struct klp_object *obj)
{
return obj->name;
@@ -794,6 +809,7 @@ static int klp_init_patch(struct klp_patch *patch)

patch->enabled = false;
init_completion(&patch->finish);
+ klp_init_lists(patch);

ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
klp_root_kobj, "%s", patch->mod->name);
--
2.13.6
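
For a sense of how the new iterators read in practice, a hypothetical
debugging helper (not part of this series) could walk a whole patch
without caring whether the entries come from the static arrays or, later
in the series, from dynamically allocated structures. It assumes the
lists were already initialized by klp_init_lists() during registration:

    /* Dump every function targeted by a patch. */
    static void klp_dump_patch(struct klp_patch *patch)
    {
            struct klp_object *obj;
            struct klp_func *func;

            klp_for_each_object(patch, obj)
                    klp_for_each_func(obj, func)
                            pr_info("%s: %s,%lu\n",
                                    obj->name ? obj->name : "vmlinux",
                                    func->old_name, func->old_sympos);
    }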


2018-03-23 12:04:08

by Petr Mladek

Subject: [PATCH 6/8] livepatch: Remove Nop structures when unused

Replaced patches are removed from the stack when the transition is
finished. It means that Nop structures will never be needed again
and can be removed. Why should we care?

+ Nop structures give the false impression that the function is patched
even though the ftrace handler has no effect.

+ Ftrace handlers are not completely free. They cause a slowdown that
might be visible in some workloads. The ftrace-related slowdown might
actually be the reason why the function is no longer patched in
the new cumulative patch. One would expect that the cumulative patch
would allow solving these problems as well.

+ Cumulative patches are supposed to replace any earlier version of
the patch. The number of NOPs depends on which version was replaced.
This multiplies the number of scenarios that might happen.

One might say that NOPs are innocent. But there are even optimized
NOP instructions for different processors, see for example
arch/x86/kernel/alternative.c. And klp_ftrace_handler() is much
more complicated.

+ It sounds natural to clean up a mess that is no longer needed.
It could only get worse if we do not do it.

This patch allows unpatching and freeing the dynamic structures independently
when the transition finishes.

The free part is a bit tricky because kobject free callbacks are called
asynchronously. We cannot easily wait for them. Fortunately, we do
not have to. Any further access can be avoided by removing the structures
from the dynamic lists.

Finally, the patch becomes the first one on the stack when enabled. The
replace functionality will no longer be needed. Let's clear patch->replace
to avoid the special handling when it is eventually disabled/enabled again.

Signed-off-by: Petr Mladek <[email protected]>
---
include/linux/livepatch.h | 6 ++++++
kernel/livepatch/core.c | 42 +++++++++++++++++++++++++++++++++++-------
kernel/livepatch/core.h | 1 +
kernel/livepatch/patch.c | 31 ++++++++++++++++++++++++++-----
kernel/livepatch/patch.h | 1 +
kernel/livepatch/transition.c | 26 +++++++++++++++++++++++++-
6 files changed, 94 insertions(+), 13 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index d6e6d8176995..1635b30bb1ec 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -172,6 +172,9 @@ struct klp_patch {
#define klp_for_each_object_static(patch, obj) \
for (obj = patch->objs; obj->funcs || obj->name; obj++)

+#define klp_for_each_object_safe(patch, obj, tmp_obj) \
+ list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
+
#define klp_for_each_object(patch, obj) \
list_for_each_entry(obj, &patch->obj_list, node)

@@ -180,6 +183,9 @@ struct klp_patch {
func->old_name || func->new_func || func->old_sympos; \
func++)

+#define klp_for_each_func_safe(obj, func, tmp_func) \
+ list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
+
#define klp_for_each_func(obj, func) \
list_for_each_entry(func, &obj->func_list, node)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 35f4bbff395f..0b3be6e14b80 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -852,11 +852,20 @@ static struct kobj_type klp_ktype_func = {
.sysfs_ops = &kobj_sysfs_ops,
};

-static void klp_free_funcs(struct klp_object *obj)
+static void __klp_free_funcs(struct klp_object *obj, bool free_all)
{
- struct klp_func *func;
+ struct klp_func *func, *tmp_func;
+
+ klp_for_each_func_safe(obj, func, tmp_func) {
+ if (!free_all && !func->nop)
+ continue;
+
+ /*
+ * Avoid double free. It would be tricky to wait for kobject
+ * callbacks when only NOPs are handled.
+ */
+ list_del(&func->node);

- klp_for_each_func(obj, func) {
/* Might be called from klp_init_patch() error path. */
if (func->kobj.state_initialized)
kobject_put(&func->kobj);
@@ -880,12 +889,21 @@ static void klp_free_object_loaded(struct klp_object *obj)
}
}

-static void klp_free_objects(struct klp_patch *patch)
+static void __klp_free_objects(struct klp_patch *patch, bool free_all)
{
- struct klp_object *obj;
+ struct klp_object *obj, *tmp_obj;

- klp_for_each_object(patch, obj) {
- klp_free_funcs(obj);
+ klp_for_each_object_safe(patch, obj, tmp_obj) {
+ __klp_free_funcs(obj, free_all);
+
+ if (!free_all && !obj->dynamic)
+ continue;
+
+ /*
+ * Avoid double free. It would be tricky to wait for kobject
+ * callbacks when only dynamic objects are handled.
+ */
+ list_del(&obj->node);

/* Might be called from klp_init_patch() error path. */
if (obj->kobj.state_initialized)
@@ -895,6 +913,16 @@ static void klp_free_objects(struct klp_patch *patch)
}
}

+static void klp_free_objects(struct klp_patch *patch)
+{
+ __klp_free_objects(patch, true);
+}
+
+void klp_free_objects_dynamic(struct klp_patch *patch)
+{
+ __klp_free_objects(patch, false);
+}
+
static void klp_free_patch(struct klp_patch *patch)
{
klp_free_objects(patch);
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 2d2b724d56a4..0837360a7170 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -8,6 +8,7 @@ extern struct mutex klp_mutex;

void klp_discard_replaced_patches(struct klp_patch *new_patch,
bool keep_module);
+void klp_free_objects_dynamic(struct klp_patch *patch);

static inline bool klp_is_object_loaded(struct klp_object *obj)
{
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index fbf1a3a47fc3..64b9ec3facf7 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -244,15 +244,26 @@ static int klp_patch_func(struct klp_func *func)
return ret;
}

-void klp_unpatch_object(struct klp_object *obj)
+static void __klp_unpatch_object(struct klp_object *obj, bool unpatch_all)
{
struct klp_func *func;

- klp_for_each_func(obj, func)
+ klp_for_each_func(obj, func) {
+ if (!unpatch_all && !func->nop)
+ continue;
+
if (func->patched)
klp_unpatch_func(func);
+ }

- obj->patched = false;
+ if (unpatch_all || obj->dynamic)
+ obj->patched = false;
+}
+
+
+void klp_unpatch_object(struct klp_object *obj)
+{
+ __klp_unpatch_object(obj, true);
}

int klp_patch_object(struct klp_object *obj)
@@ -275,11 +286,21 @@ int klp_patch_object(struct klp_object *obj)
return 0;
}

-void klp_unpatch_objects(struct klp_patch *patch)
+static void __klp_unpatch_objects(struct klp_patch *patch, bool unpatch_all)
{
struct klp_object *obj;

klp_for_each_object(patch, obj)
if (obj->patched)
- klp_unpatch_object(obj);
+ __klp_unpatch_object(obj, unpatch_all);
+}
+
+void klp_unpatch_objects(struct klp_patch *patch)
+{
+ __klp_unpatch_objects(patch, true);
+}
+
+void klp_unpatch_objects_dynamic(struct klp_patch *patch)
+{
+ __klp_unpatch_objects(patch, false);
}
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index e72d8250d04b..cd8e1f03b22b 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -30,5 +30,6 @@ struct klp_ops *klp_find_ops(unsigned long old_addr);
int klp_patch_object(struct klp_object *obj);
void klp_unpatch_object(struct klp_object *obj);
void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_objects_dynamic(struct klp_patch *patch);

#endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index 24daed700ee7..05ea2a8e03bd 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -87,9 +87,18 @@ static void klp_complete_transition(void)
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");

- if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
+ if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
klp_discard_replaced_patches(klp_transition_patch, klp_forced);

+ /*
+ * There is no need to synchronize the transition after removing
+ * nops. They must be the last on the func_stack. Ftrace
+ * guarantees that nobody will stay in the trampoline after
+ * the ftrace handler is unregistered.
+ */
+ klp_unpatch_objects_dynamic(klp_transition_patch);
+ }
+
if (klp_target_state == KLP_UNPATCHED) {
/*
* All tasks have transitioned to KLP_UNPATCHED so we can now
@@ -146,6 +155,21 @@ static void klp_complete_transition(void)
if (!klp_forced && klp_target_state == KLP_UNPATCHED)
module_put(klp_transition_patch->mod);

+ if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
+ /*
+ * We do not need to wait until the objects are really freed.
+ * We will never need them again because the patch must be on
+ * the bottom of the stack now.
+ */
+ klp_free_objects_dynamic(klp_transition_patch);
+ /*
+ * Replace behavior will no longer be needed. Avoid the related
+ * code when disabling and enabling again.
+ */
+ klp_transition_patch->replace = false;
+ }
+
+
klp_target_state = KLP_UNDEFINED;
klp_transition_patch = NULL;
}
--
2.13.6


2018-03-23 12:05:32

by Petr Mladek

Subject: [PATCH 2/8] livepatch: Free only structures with initialized kobject

We are going to add a feature called atomic replace. It will allow
creating a patch that replaces all already registered patches.
For this, we will need to dynamically create funcs and objects
for functions that are no longer patched.

We will want to reuse the existing init() and free() functions. Up to now,
the free() functions checked a limit and were called only for structures
with an initialized kobject. But we will want to call them also for
structures that were allocated but whose kobject was not initialized yet.

This patch removes the limit. It calls the klp_free*() functions for all
structures. But only the ones with an initialized kobject are freed.
The handling of uninitialized structures will be added later with
the support for dynamic structures.

This patch does not change the existing behavior.

Signed-off-by: Petr Mladek <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Jason Baron <[email protected]>
Cc: Miroslav Benes <[email protected]>
---
kernel/livepatch/core.c | 44 ++++++++++++++++++--------------------------
1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 56cb0574c59c..dcce028ecde8 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -642,17 +642,15 @@ static struct kobj_type klp_ktype_func = {
.sysfs_ops = &kobj_sysfs_ops,
};

-/*
- * Free all functions' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_funcs_limited(struct klp_object *obj,
- struct klp_func *limit)
+static void klp_free_funcs(struct klp_object *obj)
{
struct klp_func *func;

- for (func = obj->funcs; func->old_name && func != limit; func++)
- kobject_put(&func->kobj);
+ klp_for_each_func(obj, func) {
+ /* Might be called from klp_init_patch() error path. */
+ if (func->kobj.state_initialized)
+ kobject_put(&func->kobj);
+ }
}

/* Clean up when a patched object is unloaded */
@@ -666,24 +664,23 @@ static void klp_free_object_loaded(struct klp_object *obj)
func->old_addr = 0;
}

-/*
- * Free all objects' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
- */
-static void klp_free_objects_limited(struct klp_patch *patch,
- struct klp_object *limit)
+static void klp_free_objects(struct klp_patch *patch)
{
struct klp_object *obj;

- for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
- klp_free_funcs_limited(obj, NULL);
- kobject_put(&obj->kobj);
+ klp_for_each_object(patch, obj) {
+ klp_free_funcs(obj);
+
+ /* Might be called from klp_init_patch() error path. */
+ if (obj->kobj.state_initialized)
+ kobject_put(&obj->kobj);
}
}

static void klp_free_patch(struct klp_patch *patch)
{
- klp_free_objects_limited(patch, NULL);
+ klp_free_objects(patch);
+
if (!list_empty(&patch->list))
list_del(&patch->list);
}
@@ -780,21 +777,16 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
klp_for_each_func(obj, func) {
ret = klp_init_func(obj, func);
if (ret)
- goto free;
+ return ret;
}

if (klp_is_object_loaded(obj)) {
ret = klp_init_object_loaded(patch, obj);
if (ret)
- goto free;
+ return ret;
}

return 0;
-
-free:
- klp_free_funcs_limited(obj, func);
- kobject_put(&obj->kobj);
- return ret;
}

static int klp_init_patch(struct klp_patch *patch)
@@ -831,7 +823,7 @@ static int klp_init_patch(struct klp_patch *patch)
return 0;

free:
- klp_free_objects_limited(patch, obj);
+ klp_free_objects(patch);

mutex_unlock(&klp_mutex);

--
2.13.6


2018-03-23 12:08:58

by Petr Mladek

Subject: [PATCH 3/8] livepatch: Add atomic replace

From: Jason Baron <[email protected]>

Sometimes we would like to revert a particular fix. Currently, this
is not easy because we want to keep all other fixes active and we
can revert only the last applied patch.

One solution would be to apply a new patch that implements all
the reverted functions as in the original code. It would work
as expected but there would be unnecessary redirections. In addition,
it would also require knowing which functions need to be reverted at
build time.

Another problem arises when there are many patches that touch the same
functions. There might be dependencies between patches that are
not enforced on the kernel side. Also it might be pretty hard to
actually prepare the patch and ensure compatibility with the other
patches.

A better solution would be to create a cumulative patch and say that
it replaces all older ones.

This patch adds a new "replace" flag to struct klp_patch. When it is
enabled, a set of 'nop' klp_func entries will be dynamically created for
all functions that are already being patched but will no longer be
modified by the new patch. They are used as a new target during the patch
transition.

The idea is to handle the Nop structures like the static ones. When
the dynamic structures are allocated, we initialize all values that
are normally statically defined. The only exception is "new_func"
in struct klp_func. It has to point to the original function. But
the address is known only when the object (module) is loaded.

Nevertheless we still need to distinguish the dynamically allocated
structures in some operations. For this, we add a "nop" flag into
struct klp_func and a "dynamic" flag into struct klp_object. They
need special handling in the following situations:

+ The structures are added into the lists of objects and functions
immediately. In fact, the lists were created for this purpose.

+ The address of the original function is known only when the patched
object (module) is loaded. Therefore it is copied later in
klp_init_object_loaded().

+ The ftrace handler must not set the PC to func->new_func. It would cause
an infinite loop because the address points back to the beginning of
the original function.

+ The various free() functions must free the structure itself.

Note that other ways to detect the dynamic structures are not considered
safe. For example, even a statically defined struct klp_object might
include an empty funcs array. It might be there just to run some callbacks.

Special callbacks handling:

The callbacks from the replaced patches are intentionally _not_ called.
It would be pretty hard to define reasonable semantics and implement them.

It might even be counter-productive. The new patch is cumulative. It is
supposed to include most of the changes from older patches. In most cases,
it will not want to call the pre_unpatch() and post_unpatch() callbacks from
the replaced patches. It would disable/break things for no good reason.
Also it should be easier to handle various scenarios in a single script
in the new patch than to think about interactions caused by running many
scripts from older patches. Not to mention that the old scripts might not
even expect to be called in this situation.

Signed-off-by: Jason Baron <[email protected]>
[[email protected]: Split, reuse existing code, simplified]
Signed-off-by: Petr Mladek <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: Miroslav Benes <[email protected]>
---
include/linux/livepatch.h | 6 ++
kernel/livepatch/core.c | 193 +++++++++++++++++++++++++++++++++++++++++++++-
kernel/livepatch/patch.c | 8 ++
3 files changed, 204 insertions(+), 3 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index f0a5a28b1386..f28af280f9e0 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -47,6 +47,7 @@
* @stack_node: list node for klp_ops func_stack list
* @old_size: size of the old function
* @new_size: size of the new function
+ * @nop: temporary patch to use the original code again; dyn. allocated
* @patched: the func has been added to the klp_ops list
* @transition: the func is currently being applied or reverted
*
@@ -84,6 +85,7 @@ struct klp_func {
struct list_head node;
struct list_head stack_node;
unsigned long old_size, new_size;
+ bool nop;
bool patched;
bool transition;
};
@@ -122,6 +124,7 @@ struct klp_callbacks {
* (NULL for vmlinux)
* @func_list: dynamic list of the function entries
* @node: list node for klp_patch obj_list
+ * @dynamic: temporary object for nop functions; dynamically allocated
* @patched: the object's funcs have been added to the klp_ops list
*/
struct klp_object {
@@ -135,6 +138,7 @@ struct klp_object {
struct list_head func_list;
struct list_head node;
struct module *mod;
+ bool dynamic;
bool patched;
};

@@ -142,6 +146,7 @@ struct klp_object {
* struct klp_patch - patch structure for live patching
* @mod: reference to the live patch module
* @objs: object entries for kernel objects to be patched
+ * @replace: replace all already registered patches
* @list: list node for global list of registered patches
* @kobj: kobject for sysfs resources
* @obj_list: dynamic list of the object entries
@@ -152,6 +157,7 @@ struct klp_patch {
/* external */
struct module *mod;
struct klp_object *objs;
+ bool replace;

/* internal */
struct list_head list;
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index dcce028ecde8..18c400bd9a33 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -113,6 +113,40 @@ static bool klp_initialized(void)
return !!klp_root_kobj;
}

+static struct klp_func *klp_find_func(struct klp_object *obj,
+ struct klp_func *old_func)
+{
+ struct klp_func *func;
+
+ klp_for_each_func(obj, func) {
+ if ((strcmp(old_func->old_name, func->old_name) == 0) &&
+ (old_func->old_sympos == func->old_sympos)) {
+ return func;
+ }
+ }
+
+ return NULL;
+}
+
+static struct klp_object *klp_find_object(struct klp_patch *patch,
+ struct klp_object *old_obj)
+{
+ struct klp_object *obj;
+
+ klp_for_each_object(patch, obj) {
+ if (klp_is_module(old_obj)) {
+ if (klp_is_module(obj) &&
+ strcmp(old_obj->name, obj->name) == 0) {
+ return obj;
+ }
+ } else if (!klp_is_module(obj)) {
+ return obj;
+ }
+ }
+
+ return NULL;
+}
+
struct klp_find_arg {
const char *objname;
const char *name;
@@ -610,6 +644,123 @@ static struct attribute *klp_patch_attrs[] = {
NULL
};

+/*
+ * Dynamically allocated objects and functions.
+ */
+static void klp_free_object_dynamic(struct klp_object *obj)
+{
+ kfree(obj->name);
+ kfree(obj);
+}
+
+static struct klp_object *klp_alloc_object_dynamic(const char *name)
+{
+ struct klp_object *obj;
+
+ obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+ if (!obj)
+ return NULL;
+
+ if (name) {
+ obj->name = kstrdup(name, GFP_KERNEL);
+ if (!obj->name) {
+ kfree(obj);
+ return NULL;
+ }
+ }
+
+ INIT_LIST_HEAD(&obj->func_list);
+ obj->dynamic = true;
+
+ return obj;
+}
+
+static void klp_free_func_nop(struct klp_func *func)
+{
+ kfree(func->old_name);
+ kfree(func);
+}
+
+static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
+ struct klp_object *obj)
+{
+ struct klp_func *func;
+
+ func = kzalloc(sizeof(*func), GFP_KERNEL);
+ if (!func)
+ return NULL;
+
+ if (old_func->old_name) {
+ func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
+ if (!func->old_name) {
+ kfree(func);
+ return NULL;
+ }
+ }
+
+ /*
+ * func->new_func is the same as func->old_addr. These addresses are
+ * set when the object is loaded, see klp_init_object_loaded().
+ */
+ func->old_sympos = old_func->old_sympos;
+ func->nop = true;
+
+ return func;
+}
+
+static int klp_add_object_nops(struct klp_patch *patch,
+ struct klp_object *old_obj)
+{
+ struct klp_object *obj;
+ struct klp_func *func, *old_func;
+
+ obj = klp_find_object(patch, old_obj);
+
+ if (!obj) {
+ obj = klp_alloc_object_dynamic(old_obj->name);
+ if (!obj)
+ return -ENOMEM;
+
+ list_add(&obj->node, &patch->obj_list);
+ }
+
+ klp_for_each_func(old_obj, old_func) {
+ func = klp_find_func(obj, old_func);
+ if (func)
+ continue;
+
+ func = klp_alloc_func_nop(old_func, obj);
+ if (!func)
+ return -ENOMEM;
+
+ list_add(&func->node, &obj->func_list);
+ }
+
+ return 0;
+}
+
+/*
+ * Add 'nop' functions which simply return to the caller to run
+ * the original function. The 'nop' functions are added to a
+ * patch to facilitate a 'replace' mode.
+ */
+static int klp_add_nops(struct klp_patch *patch)
+{
+ struct klp_patch *old_patch;
+ struct klp_object *old_obj;
+ int err = 0;
+
+ list_for_each_entry(old_patch, &klp_patches, list) {
+ klp_for_each_object(old_patch, old_obj) {
+ err = klp_add_object_nops(patch, old_obj);
+ if (err)
+ return err;
+ }
+ }
+
+ return 0;
+}
+
static void klp_kobj_release_patch(struct kobject *kobj)
{
struct klp_patch *patch;
@@ -626,6 +777,12 @@ static struct kobj_type klp_ktype_patch = {

static void klp_kobj_release_object(struct kobject *kobj)
{
+ struct klp_object *obj;
+
+ obj = container_of(kobj, struct klp_object, kobj);
+
+ if (obj->dynamic)
+ klp_free_object_dynamic(obj);
}

static struct kobj_type klp_ktype_object = {
@@ -635,6 +792,12 @@ static struct kobj_type klp_ktype_object = {

static void klp_kobj_release_func(struct kobject *kobj)
{
+ struct klp_func *func;
+
+ func = container_of(kobj, struct klp_func, kobj);
+
+ if (func->nop)
+ klp_free_func_nop(func);
}

static struct kobj_type klp_ktype_func = {
@@ -650,6 +813,8 @@ static void klp_free_funcs(struct klp_object *obj)
/* Might be called from klp_init_patch() error path. */
if (func->kobj.state_initialized)
kobject_put(&func->kobj);
+ else if (func->nop)
+ klp_free_func_nop(func);
}
}

@@ -660,8 +825,12 @@ static void klp_free_object_loaded(struct klp_object *obj)

obj->mod = NULL;

- klp_for_each_func(obj, func)
+ klp_for_each_func(obj, func) {
func->old_addr = 0;
+
+ if (func->nop)
+ func->new_func = NULL;
+ }
}

static void klp_free_objects(struct klp_patch *patch)
@@ -674,6 +843,8 @@ static void klp_free_objects(struct klp_patch *patch)
/* Might be called from klp_init_patch() error path. */
if (obj->kobj.state_initialized)
kobject_put(&obj->kobj);
+ else if (obj->dynamic)
+ klp_free_object_dynamic(obj);
}
}

@@ -687,7 +858,14 @@ static void klp_free_patch(struct klp_patch *patch)

static int klp_init_func(struct klp_object *obj, struct klp_func *func)
{
- if (!func->old_name || !func->new_func)
+ if (!func->old_name)
+ return -EINVAL;
+
+ /*
+ * NOPs get the address later. The patched module must be loaded first,
+ * see klp_init_object_loaded().
+ */
+ if (!func->new_func && !func->nop)
return -EINVAL;

INIT_LIST_HEAD(&func->stack_node);
@@ -742,6 +920,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
return -ENOENT;
}

+ if (func->nop)
+ func->new_func = (void *)func->old_addr;
+
ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
&func->new_size, NULL);
if (!ret) {
@@ -760,7 +941,7 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
int ret;
const char *name;

- if (!obj->funcs)
+ if (!obj->funcs && !obj->dynamic)
return -EINVAL;

obj->patched = false;
@@ -810,6 +991,12 @@ static int klp_init_patch(struct klp_patch *patch)
return ret;
}

+ if (patch->replace) {
+ ret = klp_add_nops(patch);
+ if (ret)
+ goto free;
+ }
+
klp_for_each_object(patch, obj) {
ret = klp_init_object(patch, obj);
if (ret)
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 82d584225dc6..fbf1a3a47fc3 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,7 +118,15 @@ static void notrace klp_ftrace_handler(unsigned long ip,
}
}

+ /*
+ * NOPs are used to replace existing patches with original code.
+ * Do nothing! Setting pc would cause an infinite loop.
+ */
+ if (func->nop)
+ goto unlock;
+
klp_arch_set_pc(regs, (unsigned long)func->new_func);
+
unlock:
preempt_enable_notrace();
}
--
2.13.6
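
To illustrate the effect, consider a hypothetical v2 cumulative patch that
keeps only one of two fixes shipped in v1. The function and variable names
below are made up; the module boilerplate is the same as in any livepatch
sample:

    /* Hypothetical new implementation of bar(); foo() was patched in v1
     * but is intentionally no longer patched in v2.
     */
    static int livepatch_bar_v2(int x)
    {
            return x + 1;
    }

    static struct klp_func v2_funcs[] = {
            {
                    .old_name = "bar",
                    .new_func = livepatch_bar_v2,
            }, { }
    };

    static struct klp_object v2_objs[] = {
            {
                    /* NULL name means vmlinux. */
                    .funcs = v2_funcs,
            }, { }
    };

    static struct klp_patch v2_patch = {
            .mod = THIS_MODULE,
            .objs = v2_objs,
            .replace = true,
    };

When v2_patch is enabled, klp_add_nops() creates a dynamic nop entry for
foo(). Tasks migrating to v2 therefore start running the original foo()
again (the ftrace handler skips klp_arch_set_pc() for nops), so the v1
fix for foo() is reverted atomically together with the upgrade of bar().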


2018-03-23 14:56:53

by Petr Mladek

Subject: Re: [PATCH 0/8] livepatch: Atomic replace feature

On Fri 2018-03-23 13:00:20, Petr Mladek wrote:
> The atomic replace feature allows creating cumulative patches. They
> are useful when you maintain many livepatches and want to remove
> one that is lower on the stack. In addition, it is very useful when
> several patches touch the same function and there are dependencies
> between them.
>
> This version is heavily refactored and cleaned up based on feedback from Josh.
> There are actually only three functional changes.
>
> It still passes the first draft of the selftest from Joe that can
> be found at https://lkml.kernel.org/r/[email protected]
>
>
> Changes against v10:
>
> + Bug fixes and functional changes:
> + Handle Nops in klp_ftrace_handler() to avoid an infinite loop [Mirek]
> + Really add dynamically allocated klp_object into the list [Petr]
> + Clear patch->replace when transition finishes [Josh]
>
> + Refactoring and clean up [Josh]:
> + Replace enum types with bools
> + Avoid using ERR_PTR
> + Remove too paranoid warnings
> + Distinguish registered patches by a flag instead of a list
> + Squash some functions
> + Update comments, documentation, and commit messages
> + Squashed and split patches to do more controversial changes later

Please see the git diff against v10 below. Otherwise, the changes might
be hard to spot because of the refactoring.

diff --git a/Documentation/livepatch/cumulative-patches.txt b/Documentation/livepatch/cumulative-patches.txt
index c041fc1bd259..206b7f98d270 100644
--- a/Documentation/livepatch/cumulative-patches.txt
+++ b/Documentation/livepatch/cumulative-patches.txt
@@ -31,15 +31,37 @@ be enabled even when some earlier patches have not been enabled yet.

All processes are then migrated to use the code only from the new patch.
Once the transition is finished, all older patches are removed from the stack
-of patches. Even the older not-enabled patches mentioned above.
+of patches. Even the older, not-yet-enabled patches mentioned above. They can
+even be unregistered and the related modules unloaded.

Ftrace handlers are transparently removed from functions that are no
longer modified by the new cumulative patch.

-As a result, the livepatch author might maintain sources only for one
+As a result, the livepatch authors might maintain sources only for one
cumulative patch. It helps to keep the patch consistent while adding or
removing various fixes or features.

+Users could keep only the last patch installed on the system after
+the transition has finished. It helps to clearly see what code is
+actually in use. Also, the livepatch might then be seen as a "normal"
+module that modifies the kernel behavior. The only difference is that
+it can be updated at runtime without breaking its functionality.
+
+
+Features
+--------
+
+The atomic replace allows:
+
+ + Atomically revert some functions in a previous patch while
+ upgrading other functions.
+
+ + Remove any potential performance impact caused by code redirection
+ for functions that are no longer patched.
+
+ + Decrease user confusion about stacking order and what patches are
+ currently in effect.
+

Limitations:
------------
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index 7222b801d63a..1635b30bb1ec 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -35,34 +35,19 @@
#define KLP_UNPATCHED 0
#define KLP_PATCHED 1

-/*
- * Function type is used to distinguish dynamically allocated structures
- * and limit some operations.
- */
-enum klp_func_type {
- KLP_FUNC_ANY = -1, /* Substitute any type */
- KLP_FUNC_STATIC = 0, /* Original statically defined structure */
- KLP_FUNC_NOP, /* Dynamically allocated NOP function patch */
-};
-
-enum klp_object_type {
- KLP_OBJECT_STATIC = 0, /* Original statically defined structure */
- KLP_OBJECT_DYNAMIC, /* Dynamically allocated structure. */
-};
-
/**
* struct klp_func - function structure for live patching
* @old_name: name of the function to be patched
* @new_func: pointer to the patched function code
* @old_sympos: a hint indicating which symbol position the old function
* can be found (optional)
- * @ftype: distinguish static and dynamic structures
* @old_addr: the address of the function being patched
* @kobj: kobject for sysfs resources
+ * @node: list node for klp_object func_list
* @stack_node: list node for klp_ops func_stack list
- * @func_entry: links struct klp_func to struct klp_object
* @old_size: size of the old function
* @new_size: size of the new function
+ * @nop: temporary patch to use the original code again; dyn. allocated
* @patched: the func has been added to the klp_ops list
* @transition: the func is currently being applied or reverted
*
@@ -95,12 +80,12 @@ struct klp_func {
unsigned long old_sympos;

/* internal */
- enum klp_func_type ftype;
unsigned long old_addr;
struct kobject kobj;
+ struct list_head node;
struct list_head stack_node;
- struct list_head func_entry;
unsigned long old_size, new_size;
+ bool nop;
bool patched;
bool transition;
};
@@ -137,8 +122,9 @@ struct klp_callbacks {
* @kobj: kobject for sysfs resources
* @mod: kernel module associated with the patched object
* (NULL for vmlinux)
- * @func_list: head of list for struct klp_func
- * @obj_entry: links struct klp_object to struct klp_patch
+ * @func_list: dynamic list of the function entries
+ * @node: list node for klp_patch obj_list
+ * @dynamic: temporary object for nop functions; dynamically allocated
* @patched: the object's funcs have been added to the klp_ops list
*/
struct klp_object {
@@ -148,11 +134,11 @@ struct klp_object {
struct klp_callbacks callbacks;

/* internal */
- enum klp_object_type otype;
struct kobject kobj;
struct list_head func_list;
- struct list_head obj_entry;
+ struct list_head node;
struct module *mod;
+ bool dynamic;
bool patched;
};

@@ -163,7 +149,8 @@ struct klp_object {
* @replace: replace all already registered patches
* @list: list node for global list of registered patches
* @kobj: kobject for sysfs resources
- * @obj_list: head of list for struct klp_object
+ * @obj_list: dynamic list of the object entries
+ * @registered: reliable way to check registration status
* @enabled: the patch is enabled (but operation may be incomplete)
* @finish: for waiting till it is safe to remove the patch module
*/
@@ -177,6 +164,7 @@ struct klp_patch {
struct list_head list;
struct kobject kobj;
struct list_head obj_list;
+ bool registered;
bool enabled;
struct completion finish;
};
@@ -185,39 +173,21 @@ struct klp_patch {
for (obj = patch->objs; obj->funcs || obj->name; obj++)

#define klp_for_each_object_safe(patch, obj, tmp_obj) \
- list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, obj_entry)
+ list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)

#define klp_for_each_object(patch, obj) \
- list_for_each_entry(obj, &patch->obj_list, obj_entry)
+ list_for_each_entry(obj, &patch->obj_list, node)

-/* Support also dynamically allocated struct klp_object */
#define klp_for_each_func_static(obj, func) \
for (func = obj->funcs; \
- func && (func->old_name || func->new_func || func->old_sympos); \
+ func->old_name || func->new_func || func->old_sympos; \
func++)

#define klp_for_each_func_safe(obj, func, tmp_func) \
- list_for_each_entry_safe(func, tmp_func, &obj->func_list, func_entry)
+ list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)

#define klp_for_each_func(obj, func) \
- list_for_each_entry(func, &obj->func_list, func_entry)
-
-static inline bool klp_is_object_dynamic(struct klp_object *obj)
-{
- return obj->otype == KLP_OBJECT_DYNAMIC;
-}
-
-static inline bool klp_is_func_dynamic(struct klp_func *func)
-{
- WARN_ON_ONCE(func->ftype == KLP_FUNC_ANY);
- return func->ftype != KLP_FUNC_STATIC;
-}
-
-static inline bool klp_is_func_type(struct klp_func *func,
- enum klp_func_type ftype)
-{
- return ftype == KLP_FUNC_ANY || ftype == func->ftype;
-}
+ list_for_each_entry(func, &obj->func_list, node)

int klp_register_patch(struct klp_patch *);
int klp_unregister_patch(struct klp_patch *);
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index b098dc10d4d5..c64371ffc063 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -45,41 +45,28 @@
*/
DEFINE_MUTEX(klp_mutex);

-static LIST_HEAD(klp_patches);
-
/*
- * List of 'replaced' patches that have been replaced by a patch that has the
- * 'replace' bit set. When they are added to this list, they are disabled and
- * can not be re-enabled, but they can be unregistered().
+ * Stack of patches. It defines the order in which the patches can be enabled.
+ * Only patches on this stack might be enabled. New patches are added when
+ * registered. They are removed when they are unregistered.
*/
-static LIST_HEAD(klp_replaced_patches);
+static LIST_HEAD(klp_patches);

static struct kobject *klp_root_kobj;

-static void klp_init_func_list(struct klp_object *obj, struct klp_func *func)
-{
- list_add(&func->func_entry, &obj->func_list);
-}
-
-static void klp_init_object_list(struct klp_patch *patch,
- struct klp_object *obj)
-{
- struct klp_func *func;
-
- list_add(&obj->obj_entry, &patch->obj_list);
-
- INIT_LIST_HEAD(&obj->func_list);
- klp_for_each_func_static(obj, func)
- klp_init_func_list(obj, func);
-}
-
-static void klp_init_patch_list(struct klp_patch *patch)
+static void klp_init_lists(struct klp_patch *patch)
{
struct klp_object *obj;
+ struct klp_func *func;

INIT_LIST_HEAD(&patch->obj_list);
- klp_for_each_object_static(patch, obj)
- klp_init_object_list(patch, obj);
+ klp_for_each_object_static(patch, obj) {
+ list_add(&obj->node, &patch->obj_list);
+
+ INIT_LIST_HEAD(&obj->func_list);
+ klp_for_each_func_static(obj, func)
+ list_add(&func->node, &obj->func_list);
+ }
}

static bool klp_is_module(struct klp_object *obj)
@@ -115,28 +102,17 @@ static void klp_find_object_module(struct klp_object *obj)
mutex_unlock(&module_mutex);
}

-static bool klp_is_patch_in_list(struct klp_patch *patch,
- struct list_head *head)
+static bool klp_is_patch_on_stack(struct klp_patch *patch)
{
struct klp_patch *mypatch;

- list_for_each_entry(mypatch, head, list)
+ list_for_each_entry(mypatch, &klp_patches, list)
if (mypatch == patch)
return true;

return false;
}

-static bool klp_is_patch_usable(struct klp_patch *patch)
-{
- return klp_is_patch_in_list(patch, &klp_patches);
-}
-
-static bool klp_is_patch_replaced(struct klp_patch *patch)
-{
- return klp_is_patch_in_list(patch, &klp_replaced_patches);
-}
-
static bool klp_initialized(void)
{
return !!klp_root_kobj;
@@ -161,10 +137,9 @@ static struct klp_object *klp_find_object(struct klp_patch *patch,
struct klp_object *old_obj)
{
struct klp_object *obj;
- bool mod = klp_is_module(old_obj);

klp_for_each_object(patch, obj) {
- if (mod) {
+ if (klp_is_module(old_obj)) {
if (klp_is_module(obj) &&
strcmp(old_obj->name, obj->name) == 0) {
return obj;
@@ -361,14 +336,18 @@ static int klp_write_object_relocations(struct module *pmod,
* This function removes replaced patches from both func_stack
* and klp_patches stack.
*
- * We could be pretty aggressive here. It is called in situation
- * when these structures are no longer accessible. All functions
- * are redirected using the klp_transition_patch. They use either
- * a new code or they are in the original code because of the special
- * nop function patches.
+ * We could be pretty aggressive here. It is called in the situation where
+ * these structures are no longer accessible. All functions are redirected
+ * by the klp_transition_patch. They use either a new code or they are in
+ * the original code because of the special nop function patches.
+ *
+ * The only exception is when the transition was forced. In this case,
+ * klp_ftrace_handler() might still see the replaced patch on the stack.
+ * Fortunately, it is carefully designed to work with removed functions
+ * thanks to RCU. We only have to keep the patches on the system. This
+ * is handled by @keep_module parameter.
*/
-void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
- bool keep_module)
+void klp_discard_replaced_patches(struct klp_patch *new_patch, bool keep_module)
{
struct klp_patch *old_patch, *tmp_patch;

@@ -377,7 +356,7 @@ void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
return;

if (old_patch->enabled) {
- klp_unpatch_objects(old_patch, KLP_FUNC_ANY);
+ klp_unpatch_objects(old_patch);
old_patch->enabled = false;

if (!keep_module)
@@ -388,7 +367,7 @@ void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
* Replaced patches could not get re-enabled to keep
* the code sane.
*/
- list_move(&old_patch->list, &klp_replaced_patches);
+ list_del_init(&old_patch->list);
}
}

@@ -443,7 +422,7 @@ int klp_disable_patch(struct klp_patch *patch)

mutex_lock(&klp_mutex);

- if (!klp_is_patch_usable(patch)) {
+ if (!patch->registered) {
ret = -EINVAL;
goto err;
}
@@ -472,13 +451,14 @@ static int __klp_enable_patch(struct klp_patch *patch)
if (WARN_ON(patch->enabled))
return -EINVAL;

- if (!klp_is_patch_usable(patch))
+ /* Enforce stacking. */
+ if (!klp_is_patch_on_stack(patch))
return -EINVAL;

/*
- * Enforce stacking: only the first disabled patch can be enabled.
- * This is not required for patches with the replace flags. They
- * override even disabled patches that were registered earlier.
+ * Only the first disabled patch can be enabled. This is not required
+ * for patches with the replace flags. They override even disabled
+ * patches that were registered earlier.
*/
if (!patch->replace &&
patch->list.prev != &klp_patches &&
@@ -551,7 +531,7 @@ int klp_enable_patch(struct klp_patch *patch)

mutex_lock(&klp_mutex);

- if (!klp_is_patch_usable(patch)) {
+ if (!patch->registered) {
ret = -EINVAL;
goto err;
}
@@ -592,15 +572,16 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,

mutex_lock(&klp_mutex);

- if (!klp_is_patch_usable(patch)) {
+ if (!patch->registered) {
/*
* Module with the patch could either disappear meanwhile or is
- * not properly initialized yet or the patch was just replaced.
+ * not properly initialized yet.
*/
ret = -EINVAL;
goto err;
}

+
if (patch->enabled == enabled) {
/* already in requested state */
ret = -EINVAL;
@@ -720,18 +701,6 @@ static struct attribute *klp_patch_attrs[] = {
/*
* Dynamically allocated objects and functions.
*/
-static void klp_free_func_nop(struct klp_func *func)
-{
- kfree(func->old_name);
- kfree(func);
-}
-
-static void klp_free_func_dynamic(struct klp_func *func)
-{
- if (func->ftype == KLP_FUNC_NOP)
- klp_free_func_nop(func);
-}
-
static void klp_free_object_dynamic(struct klp_object *obj)
{
kfree(obj->name);
@@ -744,35 +713,26 @@ static struct klp_object *klp_alloc_object_dynamic(const char *name)

obj = kzalloc(sizeof(*obj), GFP_KERNEL);
if (!obj)
- return ERR_PTR(-ENOMEM);
+ return NULL;

if (name) {
obj->name = kstrdup(name, GFP_KERNEL);
if (!obj->name) {
kfree(obj);
- return ERR_PTR(-ENOMEM);
+ return NULL;
}
}
- obj->otype = KLP_OBJECT_DYNAMIC;
+
+ INIT_LIST_HEAD(&obj->func_list);
+ obj->dynamic = true;

return obj;
}

-static struct klp_object *klp_get_or_add_object(struct klp_patch *patch,
- struct klp_object *old_obj)
+static void klp_free_func_nop(struct klp_func *func)
{
- struct klp_object *obj;
-
- obj = klp_find_object(patch, old_obj);
- if (obj)
- return obj;
-
- obj = klp_alloc_object_dynamic(old_obj->name);
- if (IS_ERR(obj))
- return obj;
-
- klp_init_object_list(patch, obj);
- return obj;
+ kfree(func->old_name);
+ kfree(func);
}

static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,
@@ -782,59 +742,52 @@ static struct klp_func *klp_alloc_func_nop(struct klp_func *old_func,

func = kzalloc(sizeof(*func), GFP_KERNEL);
if (!func)
- return ERR_PTR(-ENOMEM);
+ return NULL;

if (old_func->old_name) {
func->old_name = kstrdup(old_func->old_name, GFP_KERNEL);
if (!func->old_name) {
kfree(func);
- return ERR_PTR(-ENOMEM);
+ return NULL;
}
}
- func->old_sympos = old_func->old_sympos;
+
/*
* func->new_func is same as func->old_addr. These addresses are
* set when the object is loaded, see klp_init_object_loaded().
*/
- func->ftype = KLP_FUNC_NOP;
+ func->old_sympos = old_func->old_sympos;
+ func->nop = true;

return func;
}

-static int klp_add_func_nop(struct klp_object *obj,
- struct klp_func *old_func)
-{
- struct klp_func *func;
-
- func = klp_find_func(obj, old_func);
-
- if (func)
- return 0;
-
- func = klp_alloc_func_nop(old_func, obj);
- if (IS_ERR(func))
- return PTR_ERR(func);
-
- klp_init_func_list(obj, func);
-
- return 0;
-}
-
static int klp_add_object_nops(struct klp_patch *patch,
struct klp_object *old_obj)
{
struct klp_object *obj;
- struct klp_func *old_func;
- int err = 0;
+ struct klp_func *func, *old_func;
+
+ obj = klp_find_object(patch, old_obj);
+
+ if (!obj) {
+ obj = klp_alloc_object_dynamic(old_obj->name);
+ if (!obj)
+ return -ENOMEM;

- obj = klp_get_or_add_object(patch, old_obj);
- if (IS_ERR(obj))
- return PTR_ERR(obj);
+ list_add(&obj->node, &patch->obj_list);
+ }

klp_for_each_func(old_obj, old_func) {
- err = klp_add_func_nop(obj, old_func);
- if (err)
- return err;
+ func = klp_find_func(obj, old_func);
+ if (func)
+ continue;
+
+ func = klp_alloc_func_nop(old_func, obj);
+ if (!func)
+ return -ENOMEM;
+
+ list_add(&func->node, &obj->func_list);
}

return 0;
@@ -843,15 +796,7 @@ static int klp_add_object_nops(struct klp_patch *patch,
/*
* Add 'nop' functions which simply return to the caller to run
* the original function. The 'nop' functions are added to a
- * patch to facilitate a 'replace' mode
- *
- * The nops are generated for all patches on the stack when
- * the new patch is initialized. It is safe even though some
- * older patches might get disabled and removed before the
- * new one is enabled. In the worst case, there might be nops
- * which will not be really needed. But it does not harm and
- * simplifies the implementation a lot. Especially we could
- * use the init functions as is.
+ * patch to facilitate a 'replace' mode.
*/
static int klp_add_nops(struct klp_patch *patch)
{
@@ -859,9 +804,6 @@ static int klp_add_nops(struct klp_patch *patch)
struct klp_object *old_obj;
int err = 0;

- if (WARN_ON(!patch->replace))
- return -EINVAL;
-
list_for_each_entry(old_patch, &klp_patches, list) {
klp_for_each_object(old_patch, old_obj) {
err = klp_add_object_nops(patch, old_obj);
@@ -873,17 +815,6 @@ static int klp_add_nops(struct klp_patch *patch)
return 0;
}

-/*
- * Patch release framework must support the following scenarios:
- *
- * + Asynchonous release is used when kobjects are initialized.
- *
- * + Direct release is used in error paths for structures that
- * have not had kobj initialized yet.
- *
- * + Allow to release dynamic structures of the given type when
- * they are not longer needed.
- */
static void klp_kobj_release_patch(struct kobject *kobj)
{
struct klp_patch *patch;
@@ -904,7 +835,7 @@ static void klp_kobj_release_object(struct kobject *kobj)

obj = container_of(kobj, struct klp_object, kobj);

- if (klp_is_object_dynamic(obj))
+ if (obj->dynamic)
klp_free_object_dynamic(obj);
}

@@ -919,8 +850,8 @@ static void klp_kobj_release_func(struct kobject *kobj)

func = container_of(kobj, struct klp_func, kobj);

- if (klp_is_func_dynamic(func))
- klp_free_func_dynamic(func);
+ if (func->nop)
+ klp_free_func_nop(func);
}

static struct kobj_type klp_ktype_func = {
@@ -928,26 +859,25 @@ static struct kobj_type klp_ktype_func = {
.sysfs_ops = &kobj_sysfs_ops,
};

-/*
- * Free all funcs of the given ftype. Use the kobject when it has already
- * been initialized. Otherwise, do it directly.
- */
-static void klp_free_funcs(struct klp_object *obj,
- enum klp_func_type ftype)
+static void __klp_free_funcs(struct klp_object *obj, bool free_all)
{
struct klp_func *func, *tmp_func;

klp_for_each_func_safe(obj, func, tmp_func) {
- if (!klp_is_func_type(func, ftype))
+ if (!free_all && !func->nop)
continue;

- /* Avoid double free and allow to detect empty objects. */
- list_del(&func->func_entry);
+ /*
+ * Avoid double free. It would be tricky to wait for kobject
+ * callbacks when only NOPs are handled.
+ */
+ list_del(&func->node);

+ /* Might be called from klp_init_patch() error path. */
if (func->kobj.state_initialized)
kobject_put(&func->kobj);
- else if (klp_is_func_dynamic(func))
- klp_free_func_dynamic(func);
+ else if (func->nop)
+ klp_free_func_nop(func);
}
}

@@ -961,48 +891,48 @@ static void klp_free_object_loaded(struct klp_object *obj)
klp_for_each_func(obj, func) {
func->old_addr = 0;

- if (klp_is_func_type(func, KLP_FUNC_NOP))
+ if (func->nop)
func->new_func = NULL;
}
}

-/*
- * Free all linked funcs of the given ftype. Then free empty objects.
- * Use the kobject when it has already been initialized. Otherwise,
- * do it directly.
- */
-void klp_free_objects(struct klp_patch *patch, enum klp_func_type ftype)
+static void __klp_free_objects(struct klp_patch *patch, bool free_all)
{
struct klp_object *obj, *tmp_obj;

klp_for_each_object_safe(patch, obj, tmp_obj) {
- klp_free_funcs(obj, ftype);
+ __klp_free_funcs(obj, free_all);

- if (!list_empty(&obj->func_list))
+ if (!free_all && !obj->dynamic)
continue;

/*
- * Keep objects from the original patch initialized until
- * the entire patch is being freed.
+ * Avoid double free. It would be tricky to wait for kobject
+ * callbacks when only dynamic objects are handled.
*/
- if (!klp_is_object_dynamic(obj) &&
- ftype != KLP_FUNC_STATIC &&
- ftype != KLP_FUNC_ANY)
- continue;
-
- /* Avoid freeing the object twice. */
- list_del(&obj->obj_entry);
+ list_del(&obj->node);

+ /* Might be called from klp_init_patch() error path. */
if (obj->kobj.state_initialized)
kobject_put(&obj->kobj);
- else if (klp_is_object_dynamic(obj))
+ else if (obj->dynamic)
klp_free_object_dynamic(obj);
}
}

+static void klp_free_objects(struct klp_patch *patch)
+{
+ __klp_free_objects(patch, true);
+}
+
+void klp_free_objects_dynamic(struct klp_patch *patch)
+{
+ __klp_free_objects(patch, false);
+}
+
static void klp_free_patch(struct klp_patch *patch)
{
- klp_free_objects(patch, KLP_FUNC_ANY);
+ klp_free_objects(patch);

if (!list_empty(&patch->list))
list_del(&patch->list);
@@ -1017,7 +947,7 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
* NOPs get the address later. The the patched module must be loaded,
* see klp_init_object_loaded().
*/
- if (!func->new_func && !klp_is_func_type(func, KLP_FUNC_NOP))
+ if (!func->new_func && !func->nop)
return -EINVAL;

INIT_LIST_HEAD(&func->stack_node);
@@ -1072,7 +1002,7 @@ static int klp_init_object_loaded(struct klp_patch *patch,
return -ENOENT;
}

- if (klp_is_func_type(func, KLP_FUNC_NOP))
+ if (func->nop)
func->new_func = (void *)func->old_addr;

ret = kallsyms_lookup_size_offset((unsigned long)func->new_func,
@@ -1093,6 +1023,9 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
int ret;
const char *name;

+ if (!obj->funcs && !obj->dynamic)
+ return -EINVAL;
+
obj->patched = false;
obj->mod = NULL;

@@ -1131,7 +1064,7 @@ static int klp_init_patch(struct klp_patch *patch)

patch->enabled = false;
init_completion(&patch->finish);
- klp_init_patch_list(patch);
+ klp_init_lists(patch);

ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
klp_root_kobj, "%s", patch->mod->name);
@@ -1153,13 +1086,14 @@ static int klp_init_patch(struct klp_patch *patch)
}

list_add_tail(&patch->list, &klp_patches);
+ patch->registered = true;

mutex_unlock(&klp_mutex);

return 0;

free:
- klp_free_objects(patch, KLP_FUNC_ANY);
+ klp_free_objects(patch);

mutex_unlock(&klp_mutex);

@@ -1183,7 +1117,7 @@ int klp_unregister_patch(struct klp_patch *patch)

mutex_lock(&klp_mutex);

- if (!klp_is_patch_usable(patch) && !klp_is_patch_replaced(patch)) {
+ if (!patch->registered) {
ret = -EINVAL;
goto err;
}
@@ -1194,6 +1128,7 @@ int klp_unregister_patch(struct klp_patch *patch)
}

klp_free_patch(patch);
+ patch->registered = false;

mutex_unlock(&klp_mutex);

@@ -1272,7 +1207,7 @@ static void klp_cleanup_module_patches_limited(struct module *mod,

pr_notice("reverting patch '%s' on unloading module '%s'\n",
patch->mod->name, obj->mod->name);
- klp_unpatch_object(obj, KLP_FUNC_ANY);
+ klp_unpatch_object(obj);

klp_post_unpatch_callback(obj);
}
diff --git a/kernel/livepatch/core.h b/kernel/livepatch/core.h
index 43184a5318d8..0837360a7170 100644
--- a/kernel/livepatch/core.h
+++ b/kernel/livepatch/core.h
@@ -6,9 +6,9 @@

extern struct mutex klp_mutex;

-void klp_throw_away_replaced_patches(struct klp_patch *new_patch,
- bool keep_module);
-void klp_free_objects(struct klp_patch *patch, enum klp_func_type ftype);
+void klp_discard_replaced_patches(struct klp_patch *new_patch,
+ bool keep_module);
+void klp_free_objects_dynamic(struct klp_patch *patch);

static inline bool klp_is_object_loaded(struct klp_object *obj)
{
diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
index 460a09aa7715..64b9ec3facf7 100644
--- a/kernel/livepatch/patch.c
+++ b/kernel/livepatch/patch.c
@@ -118,8 +118,11 @@ static void notrace klp_ftrace_handler(unsigned long ip,
}
}

- /* Survive ugly mistakes, for example, when handling NOPs. */
- if (WARN_ON_ONCE(!func->new_func))
+ /*
+ * NOPs are used to replace existing patches with original code.
+ * Do nothing! Setting pc would cause an infinite loop.
+ */
+ if (func->nop)
goto unlock;

klp_arch_set_pc(regs, (unsigned long)func->new_func);
@@ -241,27 +244,28 @@ static int klp_patch_func(struct klp_func *func)
return ret;
}

-void klp_unpatch_object(struct klp_object *obj, enum klp_func_type ftype)
+static void __klp_unpatch_object(struct klp_object *obj, bool unpatch_all)
{
struct klp_func *func;

klp_for_each_func(obj, func) {
- if (!func->patched)
+ if (!unpatch_all && !func->nop)
continue;

- if (klp_is_func_type(func, ftype))
+ if (func->patched)
klp_unpatch_func(func);
}

- /*
- * The state of the object is defined by the state of statically
- * defined func structures. Note that we will need to run callbacks
- * even when obj->funcs array is empty.
- */
- if (ftype == KLP_FUNC_ANY || ftype == KLP_FUNC_STATIC)
+ if (unpatch_all || obj->dynamic)
obj->patched = false;
}

+
+void klp_unpatch_object(struct klp_object *obj)
+{
+ __klp_unpatch_object(obj, true);
+}
+
int klp_patch_object(struct klp_object *obj)
{
struct klp_func *func;
@@ -273,7 +277,7 @@ int klp_patch_object(struct klp_object *obj)
klp_for_each_func(obj, func) {
ret = klp_patch_func(func);
if (ret) {
- klp_unpatch_object(obj, KLP_FUNC_ANY);
+ klp_unpatch_object(obj);
return ret;
}
}
@@ -282,11 +286,21 @@ int klp_patch_object(struct klp_object *obj)
return 0;
}

-void klp_unpatch_objects(struct klp_patch *patch, enum klp_func_type ftype)
+static void __klp_unpatch_objects(struct klp_patch *patch, bool unpatch_all)
{
struct klp_object *obj;

klp_for_each_object(patch, obj)
if (obj->patched)
- klp_unpatch_object(obj, ftype);
+ __klp_unpatch_object(obj, unpatch_all);
+}
+
+void klp_unpatch_objects(struct klp_patch *patch)
+{
+ __klp_unpatch_objects(patch, true);
+}
+
+void klp_unpatch_objects_dynamic(struct klp_patch *patch)
+{
+ __klp_unpatch_objects(patch, false);
}
diff --git a/kernel/livepatch/patch.h b/kernel/livepatch/patch.h
index 885f644add4c..cd8e1f03b22b 100644
--- a/kernel/livepatch/patch.h
+++ b/kernel/livepatch/patch.h
@@ -28,7 +28,8 @@ struct klp_ops {
struct klp_ops *klp_find_ops(unsigned long old_addr);

int klp_patch_object(struct klp_object *obj);
-void klp_unpatch_object(struct klp_object *obj, enum klp_func_type ftype);
-void klp_unpatch_objects(struct klp_patch *patch, enum klp_func_type ftype);
+void klp_unpatch_object(struct klp_object *obj);
+void klp_unpatch_objects(struct klp_patch *patch);
+void klp_unpatch_objects_dynamic(struct klp_patch *patch);

#endif /* _LIVEPATCH_PATCH_H */
diff --git a/kernel/livepatch/transition.c b/kernel/livepatch/transition.c
index d6af190865d2..05ea2a8e03bd 100644
--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -87,34 +87,16 @@ static void klp_complete_transition(void)
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");

- /*
- * For replace patches, we disable all previous patches, and replace
- * the dynamic no-op functions by removing the ftrace hook.
- */
if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
- /*
- * Make sure that no ftrace handler accesses any older patch
- * on the stack. This might happen when the user forced the
- * transaction while some running tasks were still falling
- * back to the old code. There might even still be ftrace
- * handlers that have not seen the last patch on the stack yet.
- *
- * It probably is not necessary because of the rcu-safe access.
- * But better be safe than sorry.
- */
- if (klp_forced)
- klp_synchronize_transition();
-
- klp_throw_away_replaced_patches(klp_transition_patch,
- klp_forced);
+ klp_discard_replaced_patches(klp_transition_patch, klp_forced);

/*
* There is no need to synchronize the transition after removing
* nops. They must be the last on the func_stack. Ftrace
- * gurantees that nobody will stay in the trampoline after
+ * guarantees that nobody will stay in the trampoline after
* the ftrace handler is unregistered.
*/
- klp_unpatch_objects(klp_transition_patch, KLP_FUNC_NOP);
+ klp_unpatch_objects_dynamic(klp_transition_patch);
}

if (klp_target_state == KLP_UNPATCHED) {
@@ -122,7 +104,7 @@ static void klp_complete_transition(void)
* All tasks have transitioned to KLP_UNPATCHED so we can now
* remove the new functions from the func_stack.
*/
- klp_unpatch_objects(klp_transition_patch, KLP_FUNC_ANY);
+ klp_unpatch_objects(klp_transition_patch);

/*
* Make sure klp_ftrace_handler() can no longer see functions
@@ -173,14 +155,20 @@ static void klp_complete_transition(void)
if (!klp_forced && klp_target_state == KLP_UNPATCHED)
module_put(klp_transition_patch->mod);

- /*
- * We do not need to wait until the objects are really freed.
- * The patch must be on the bottom of the stack. Therefore it
- * will never replace anything else. The only important thing
- * is that we wait when the patch is being unregistered.
- */
- if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED)
- klp_free_objects(klp_transition_patch, KLP_FUNC_NOP);
+ if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
+ /*
+ * We do not need to wait until the objects are really freed.
+ * We will never need them again because the patch must be on
+ * the bottom of the stack now.
+ */
+ klp_free_objects_dynamic(klp_transition_patch);
+ /*
+ * Replace behavior will not longer be needed. Avoid the related
+ * code when disabling and enabling again.
+ */
+ klp_transition_patch->replace = false;
+ }
+

klp_target_state = KLP_UNDEFINED;
klp_transition_patch = NULL;
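
Taken together, the enum-to-bool conversion in the hunks above collapses
the type checks into direct flag tests. A condensed before/after sketch,
using only names that appear in this series:

	/* Before (v10): type enums plus helper predicates. */
	if (klp_is_func_type(func, KLP_FUNC_NOP))
		func->new_func = NULL;

	if (klp_is_object_dynamic(obj))
		klp_free_object_dynamic(obj);

	/* After (this series): plain bool flags, tested directly. */
	if (func->nop)
		func->new_func = NULL;

	if (obj->dynamic)
		klp_free_object_dynamic(obj);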

2018-04-06 22:10:27

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH 3/8] livepatch: Add atomic replace

On Fri, Mar 23, 2018 at 01:00:23PM +0100, Petr Mladek wrote:
> @@ -687,7 +858,14 @@ static void klp_free_patch(struct klp_patch *patch)
>
> static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> {
> - if (!func->old_name || !func->new_func)
> + if (!func->old_name)
> + return -EINVAL;
> +
> + /*
> + * NOPs get the address later. The the patched module must be loaded,

"The the" -> "the"

> + * see klp_init_object_loaded().
> + */
> + if (!func->new_func && !func->nop)
> return -EINVAL;

>
> INIT_LIST_HEAD(&func->stack_node);
> @@ -742,6 +920,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
> return -ENOENT;
> }
>
> + if (func->nop)
> + func->new_func = (void *)func->old_addr;
> +

These changes make it more obvious that 'new_func' isn't quite the right
name. It should really be 'new_addr' IMO.

--
Josh
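
For reference, the nop handling being discussed reduces to roughly the
following once the patched object is loaded. This is a sketch condensed
from the posted hunks, not the final code:

	ret = klp_find_object_symbol(obj->name, func->old_name,
				     func->old_sympos, &func->old_addr);
	if (ret)
		return ret;

	/*
	 * A nop redirects back to the original code, so the "new
	 * function" is just the old address. That is exactly why
	 * 'new_func' reads awkwardly here: it holds an address.
	 */
	if (func->nop)
		func->new_func = (void *)func->old_addr;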

2018-04-06 22:11:16

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH 6/8] livepatch: Remove Nop structures when unused

On Fri, Mar 23, 2018 at 01:00:26PM +0100, Petr Mladek wrote:
> Replaced patches are removed from the stack when the transition is
> finished. It means that Nop structures will never be needed again
> and can be removed. Why should we care?

Warning, grammar pedantry ahead.

"Nop" isn't a proper noun, so it shouldn't be capitalized (here and in
the patch subject).

Also, "NOP", used elsewhere, might be confused with x86 NOP
instructions. I would vote for "nop" everywhere.

> @@ -146,6 +155,21 @@ static void klp_complete_transition(void)
> if (!klp_forced && klp_target_state == KLP_UNPATCHED)
> module_put(klp_transition_patch->mod);
>
> + if (klp_transition_patch->replace && klp_target_state == KLP_PATCHED) {
> + /*
> + * We do not need to wait until the objects are really freed.
> + * We will never need them again because the patch must be on
> + * the bottom of the stack now.
> + */
> + klp_free_objects_dynamic(klp_transition_patch);
> + /*
> + * Replace behavior will not longer be needed. Avoid the related
> + * code when disabling and enabling again.

"not longer" -> "no longer"

--
Josh

2018-04-06 22:13:13

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH 4/8] livepatch: Add an extra flag to distinguish registered patches

On Fri, Mar 23, 2018 at 01:00:24PM +0100, Petr Mladek wrote:
> @@ -528,6 +537,7 @@ static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
> goto err;
> }
>
> +
> if (patch->enabled == enabled) {
> /* already in requested state */
> ret = -EINVAL;

Extra newline.

--
Josh

2018-04-06 22:15:06

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH 0/8] livepatch: Atomic replace feature

On Fri, Mar 23, 2018 at 01:00:20PM +0100, Petr Mladek wrote:
> The atomic replace allows to create cumulative patches. They
> are useful when you maintain many livepatches and want to remove
> one that is lower on the stack. In addition it is very useful when
> more patches touch the same function and there are dependencies
> between them.
>
> This version is heavily refactored and cleaned based on feedback from Josh.
> There are actually only three functional changes.
>
> It still passes the first draft of the selfttest from Joe that can
> be found at https://lkml.kernel.org/r/[email protected]
>
>
> Changes against v10:
>
> + Bug fixes and functional changes:
> + Handle Nops in klp_ftrace_handled() to avoid infinite loop [Mirek]
> + Really add dynamically allocated klp_object into the list [Petr]
> + Clear patch->replace when transition finishes [Josh]
>
> + Refactoring and clean up [Josh]:
> + Replace enum types with bools
> + Avoid using ERR_PTR
> + Remove too paranoid warnings
> + Distinguish registered patches by a flag instead of a list
> + Squash some functions
> + Update comments, documentation, and commit messages
> + Squashed and split patches to do more controversial changes later

Thanks again for all the changes. I think I like the general direction
of the patches now, even some of the later ones ;-)

Along with the minor comments from my other emails, I still have the
question about "does it make sense to enforce a stack anymore".

And of course I would really like to see the selftests in place first.

--
Josh
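
The stacking rule in question is the check in __klp_enable_patch() from
the hunks above; in isolation it reads roughly as follows (a sketch, with
the tail of the condition reconstructed from the upstream code):

	/*
	 * Only the patch directly above the last enabled one may be
	 * enabled, unless it carries the replace flag.
	 */
	if (!patch->replace &&
	    patch->list.prev != &klp_patches &&
	    !list_prev_entry(patch, list)->enabled)
		return -EBUSY;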

2018-04-09 13:57:54

by Miroslav Benes

[permalink] [raw]
Subject: Re: [PATCH 3/8] livepatch: Add atomic replace

> > + * see klp_init_object_loaded().
> > + */
> > + if (!func->new_func && !func->nop)
> > return -EINVAL;
>
> >
> > INIT_LIST_HEAD(&func->stack_node);
> > @@ -742,6 +920,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
> > return -ENOENT;
> > }
> >
> > + if (func->nop)
> > + func->new_func = (void *)func->old_addr;
> > +
>
> These changes make it more obvious that 'new_func' isn't quite the right
> name. It should really be 'new_addr' IMO.

I think we wanted to point out the difference from old_addr, which is
resolved via the symbol name, while new_func is initialized with the
new function itself (a function pointer). I agree, though, that it looks
awkward in this context and I'm not against changing it to new_addr.

Petr, could you also add a note to the changelog about why we need to
set up new_func for nop functions, please? It's not obvious, given the
hack in klp_ftrace_handler()
(klp_cancel_transition()->...->klp_check_stack_func() needs it).

Miroslav
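
The asymmetry is easiest to see in how a patch module fills in a static
klp_func entry. A minimal sketch in the style of the in-tree livepatch
sample; the function names here are illustrative only:

	static struct klp_func funcs[] = {
		{
			/* The old side is a name, resolved to old_addr
			 * via kallsyms once the object is loaded. */
			.old_name = "cmdline_proc_show",
			/* The new side is a real function pointer. */
			.new_func = livepatch_cmdline_proc_show,
		}, { }
	};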

2018-04-09 14:05:41

by Miroslav Benes

[permalink] [raw]
Subject: Re: [PATCH 4/8] livepatch: Add an extra flag to distinguish registered patches

On Fri, 23 Mar 2018, Petr Mladek wrote:

> The initial implementation of the atomic replace feature keeps the replaced
> patches on the stack. But people would like to remove the replaced patches
> from different reasons that will be described in the following patch.
>
> This patch is just a small preparation step. We will need to keep
> the replaced patches registered even when they are not longer on the stack.

s/not longer/no longer/

> It is because they are typically unregistered by the module exit script.
>
> Therefore we need to detect the registered patches another way.

"in another way", "differently"?

> We could
> not use kobj.state_initialized because it is racy. The kobject is destroyed
> by an asynchronous call and could not be synchronized using klp_mutex.
>
> This patch solves the problem by adding a flag into struct klp_patch.
> It is manipulated under klp_mutex and therefore it is safe. It is easy
> to understand and it is enough in most situations.
>
> The function klp_is_patch_registered() is not longer needed. Though

s/not longer/no longer/

s/Though/So/ ?

> it was renamed to klp_is_patch_on_stack() and used in __klp_enable_patch()
> as a new sanity check.
>
> This patch does not change the existing behavior.

In my opinion it would be easier for review to squash this patch into
the next one.

> Signed-off-by: Petr Mladek <[email protected]>
> Cc: Josh Poimboeuf <[email protected]>
> Cc: Jessica Yu <[email protected]>
> Cc: Jiri Kosina <[email protected]>
> Cc: Miroslav Benes <[email protected]>
> Cc: Jason Baron <[email protected]>
> ---
> include/linux/livepatch.h | 2 ++
> kernel/livepatch/core.c | 24 ++++++++++++++++++------
> 2 files changed, 20 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index f28af280f9e0..d6e6d8176995 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -150,6 +150,7 @@ struct klp_object {
> * @list: list node for global list of registered patches
> * @kobj: kobject for sysfs resources
> * @obj_list: dynamic list of the object entries
> + * @registered: reliable way to check registration status
> * @enabled: the patch is enabled (but operation may be incomplete)
> * @finish: for waiting till it is safe to remove the patch module
> */
> @@ -163,6 +164,7 @@ struct klp_patch {
> struct list_head list;
> struct kobject kobj;
> struct list_head obj_list;
> + bool registered;
> bool enabled;
> struct completion finish;
> };
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 18c400bd9a33..70c67a834e9a 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -45,6 +45,11 @@
> */
> DEFINE_MUTEX(klp_mutex);
>
> +/*
> + * Stack of patches. It defines the order in which the patches can be enabled.
> + * Only patches on this stack might be enabled. New patches are added when
> + * registered. They are removed when they are unregistered.
> + */
> static LIST_HEAD(klp_patches);
>
> static struct kobject *klp_root_kobj;
> @@ -97,7 +102,7 @@ static void klp_find_object_module(struct klp_object *obj)
> mutex_unlock(&module_mutex);
> }
>
> -static bool klp_is_patch_registered(struct klp_patch *patch)
> +static bool klp_is_patch_on_stack(struct klp_patch *patch)
> {
> struct klp_patch *mypatch;
>
> @@ -378,7 +383,7 @@ int klp_disable_patch(struct klp_patch *patch)
>
> mutex_lock(&klp_mutex);
>
> - if (!klp_is_patch_registered(patch)) {
> + if (!patch->registered) {

I don't see any actual problem, but I'd feel safer if we preserved
klp_is_patch_on_stack() somewhere in the disable path as well.

Miroslav

2018-04-10 09:17:35

by Miroslav Benes

[permalink] [raw]
Subject: Re: [PATCH 6/8] livepatch: Remove Nop structures when unused

On Fri, 23 Mar 2018, Petr Mladek wrote:

> Replaced patches are removed from the stack when the transition is
> finished. It means that Nop structures will never be needed again
> and can be removed. Why should we care?
>
> + Nop structures make false feeling that the function is patched
> even though the ftrace handler has no effect.
>
> + Ftrace handlers are not completely for free. They cause slowdown that
> might be visible in some workloads. The ftrace-related slowdown might
> actually be the reason why the function is not longer patched in
> the new cumulative patch. One would expect that cumulative patch
> would allow to solve these problems as well.
>
> + Cumulative patches are supposed to replace any earlier version of
> the patch. The amount of NOPs depends on which version was replaced.
> This multiplies the amount of scenarios that might happen.
>
> One might say that NOPs are innocent. But there are even optimized
> NOP instructions for different processor, for example, see
> arch/x86/kernel/alternative.c. And klp_ftrace_handler() is much
> more complicated.
>
> + It sounds natural to clean up a mess that is not longer needed.
> It could only be worse if we do not do it.
>
> This patch allows to unpatch and free the dynamic structures independently
> when the transition finishes.
>
> The free part is a bit tricky because kobject free callbacks are called
> asynchronously. We could not wait for them easily. Fortunately, we do
> not have to. Any further access can be avoided by removing them from
> the dynamic lists.
>
> Finally, the patch become the first on the stack when enabled. The replace
> functionality will not longer be needed. Let's clear patch->replace to
> avoid the special handling when it is eventually disabled/enabled again.
>
> Signed-off-by: Petr Mladek <[email protected]>
> ---
> include/linux/livepatch.h | 6 ++++++
> kernel/livepatch/core.c | 42 +++++++++++++++++++++++++++++++++++-------
> kernel/livepatch/core.h | 1 +
> kernel/livepatch/patch.c | 31 ++++++++++++++++++++++++++-----
> kernel/livepatch/patch.h | 1 +
> kernel/livepatch/transition.c | 26 +++++++++++++++++++++++++-
> 6 files changed, 94 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index d6e6d8176995..1635b30bb1ec 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -172,6 +172,9 @@ struct klp_patch {
> #define klp_for_each_object_static(patch, obj) \
> for (obj = patch->objs; obj->funcs || obj->name; obj++)
>
> +#define klp_for_each_object_safe(patch, obj, tmp_obj) \
> + list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
> +
> #define klp_for_each_object(patch, obj) \
> list_for_each_entry(obj, &patch->obj_list, node)
>
> @@ -180,6 +183,9 @@ struct klp_patch {
> func->old_name || func->new_func || func->old_sympos; \
> func++)
>
> +#define klp_for_each_func_safe(obj, func, tmp_func) \
> + list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
> +
> #define klp_for_each_func(obj, func) \
> list_for_each_entry(func, &obj->func_list, node)

Is there a benefit of the newly added iterators?

Miroslav

2018-04-10 09:35:13

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH 3/8] livepatch: Add atomic replace

On Mon 2018-04-09 15:53:03, Miroslav Benes wrote:
> > > + * see klp_init_object_loaded().
> > > + */
> > > + if (!func->new_func && !func->nop)
> > > return -EINVAL;
> >
> > >
> > > INIT_LIST_HEAD(&func->stack_node);
> > > @@ -742,6 +920,9 @@ static int klp_init_object_loaded(struct klp_patch *patch,
> > > return -ENOENT;
> > > }
> > >
> > > + if (func->nop)
> > > + func->new_func = (void *)func->old_addr;
> > > +
> >
> > These changes make it more obvious that 'new_func' isn't quite the right
> > name. It should really be 'new_addr' IMO.
>
> I think we wanted to point out the difference from old_addr, which is
> resolved via the symbol name, while new_func is initialized with the
> new function itself (a function pointer). I agree, though, that it looks
> awkward in this context and I'm not against changing it to new_addr.

I am fine with the rename. I was confused by "new_func" several times
in the past. "new_addr" makes it clear that we are setting an address,
in contrast to the name in "old_name".


> Petr, could you also add a note to the changelog about why we need to
> set up new_func for nop functions, please? It's not obvious, given the
> hack in klp_ftrace_handler()
> (klp_cancel_transition()->...->klp_check_stack_func() needs it).

Yup.

Best Regards,
Petr

2018-04-10 11:00:29

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH 4/8] livepatch: Add an extra flag to distinguish registered patches

On Mon 2018-04-09 16:02:15, Miroslav Benes wrote:
> On Fri, 23 Mar 2018, Petr Mladek wrote:
>
> > The initial implementation of the atomic replace feature keeps the replaced
> > patches on the stack. But people would like to remove the replaced patches
> > from different reasons that will be described in the following patch.
> >
> > This patch is just a small preparation step. We will need to keep
> > the replaced patches registered even when they are not longer on the stack.
>
> In my opinion it would be easier for review to squash this patch into
> the next one.

OK, I'll squash them.


> > diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> > index f28af280f9e0..d6e6d8176995 100644
> > --- a/include/linux/livepatch.h
> > +++ b/include/linux/livepatch.h
> > @@ -378,7 +383,7 @@ int klp_disable_patch(struct klp_patch *patch)
> >
> > mutex_lock(&klp_mutex);
> >
> > - if (!klp_is_patch_registered(patch)) {
> > + if (!patch->registered) {
>
> I don't see any actual problem, but I'd feel safer if we preserved
> klp_is_patch_on_stack() somewhere in the disable path as well.

It is strictly needed in klp_enable_patch(), which will be more visible
once I squash this patch into the next one.

The use in klp_disable_patch() is optional. I have no problem adding it
there, though. If Josh does not see it as too paranoid, I'll add it.

Best Regards,
Petr
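
For illustration, the optional check would amount to something like the
following in klp_disable_patch(). This is hypothetical, per the discussion
above, and not part of the posted series:

	/* Hypothetical extra sanity check; the series as posted only
	 * tests patch->registered at this point. */
	if (!patch->registered || !klp_is_patch_on_stack(patch)) {
		ret = -EINVAL;
		goto err;
	}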

2018-04-10 11:13:03

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH 6/8] livepatch: Remove Nop structures when unused

On Tue 2018-04-10 11:14:21, Miroslav Benes wrote:
> On Fri, 23 Mar 2018, Petr Mladek wrote:
> > This patch allows to unpatch and free the dynamic structures independently
> > when the transition finishes.
> >
> > The free part is a bit tricky because kobject free callbacks are called
> > asynchronously. We could not wait for them easily. Fortunately, we do
> > not have to. Any further access can be avoided by removing them from
> > the dynamic lists.
> >
> > Finally, the patch become the first on the stack when enabled. The replace
> > functionality will not longer be needed. Let's clear patch->replace to
> > avoid the special handling when it is eventually disabled/enabled again.
> >
> > diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> > index d6e6d8176995..1635b30bb1ec 100644
> > --- a/include/linux/livepatch.h
> > +++ b/include/linux/livepatch.h
> > @@ -172,6 +172,9 @@ struct klp_patch {
> > #define klp_for_each_object_static(patch, obj) \
> > for (obj = patch->objs; obj->funcs || obj->name; obj++)
> >
> > +#define klp_for_each_object_safe(patch, obj, tmp_obj) \
> > + list_for_each_entry_safe(obj, tmp_obj, &patch->obj_list, node)
> > +
> > #define klp_for_each_object(patch, obj) \
> > list_for_each_entry(obj, &patch->obj_list, node)
> >
> > @@ -180,6 +183,9 @@ struct klp_patch {
> > func->old_name || func->new_func || func->old_sympos; \
> > func++)
> >
> > +#define klp_for_each_func_safe(obj, func, tmp_func) \
> > + list_for_each_entry_safe(func, tmp_func, &obj->func_list, node)
> > +
> > #define klp_for_each_func(obj, func) \
> > list_for_each_entry(func, &obj->func_list, node)
>
> Is there a benefit of the newly added iterators?

You are right that there is nothing special about it and that it is
used in a single place only.

Still, it increases the chance that you will catch the loop when
grepping for the iterators. It also makes the difference from the
non-safe iterators easier to see. I'll keep it unless you are
strongly against it.

Best Regards,
Petr
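
The _safe variants matter because the freeing loop unlinks entries while
walking the list, so the next pointer has to be cached up front. Condensed
from __klp_free_funcs() in this series:

	struct klp_func *func, *tmp_func;

	klp_for_each_func_safe(obj, func, tmp_func) {
		if (!free_all && !func->nop)
			continue;

		/* tmp_func already holds the next entry, so unlinking
		 * the current one cannot derail the walk. */
		list_del(&func->node);

		if (func->kobj.state_initialized)
			kobject_put(&func->kobj);
		else if (func->nop)
			klp_free_func_nop(func);
	}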

2018-04-10 17:58:06

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCH 4/8] livepatch: Add an extra flag to distinguish registered patches

On Tue, Apr 10, 2018 at 12:56:25PM +0200, Petr Mladek wrote:
> > > diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> > > index f28af280f9e0..d6e6d8176995 100644
> > > --- a/include/linux/livepatch.h
> > > +++ b/include/linux/livepatch.h
> > > @@ -378,7 +383,7 @@ int klp_disable_patch(struct klp_patch *patch)
> > >
> > > mutex_lock(&klp_mutex);
> > >
> > > - if (!klp_is_patch_registered(patch)) {
> > > + if (!patch->registered) {
> >
> > I don't see any actual problem, but I'd feel safer if we preserved
> > klp_is_patch_on_stack() somewhere in the disable path as well.
>
> It is strictly needed in klp_enable_patch(), which will be more visible
> once I squash this patch into the next one.
>
> The use in klp_disable_patch() is optional. I have no problem adding it
> there, though. If Josh does not see it as too paranoid, I'll add it.

That does seem pointless to me; how did you know? :-)

Of course if we make the other changes we've been discussing, maybe it's
no longer an issue.

--
Josh