2014-12-09 18:05:52

by Petr Mladek

Subject: [PATCH 0/6] livepatch v5: more clean up for the v5 of the patchset

This patch set is against v5 of the livepatch patch set, see
https://lkml.org/lkml/2014/12/4/488

It addresses my comments on the 2nd patch of that series. The only
exception is the description of the relocation handling, which Seth
and Josh understand better than I do.

If you agree with the changes, please merge them into the original
patchset.

Petr Mladek (6):
livepatch v5: avoid race when checking for state of the patch
livepatch v5: do not quietly ignore wrong usage
livepatch v5: find and verify the old address in klp_*_init()
livepatch v5: clean up ordering of functions
livepatch v5: split init and free code that is done only for loaded
modules
livepatch v5: clean up usage of the old and new addresses

include/linux/livepatch.h | 24 +--
kernel/livepatch/core.c | 403 +++++++++++++++++++++++++++-------------------
2 files changed, 253 insertions(+), 174 deletions(-)

--
1.8.5.2


2014-12-09 18:05:59

by Petr Mladek

Subject: [PATCH 1/6] livepatch v5: avoid race when checking for state of the patch

klp_enable_patch() and klp_disable_patch() should check the current state
of the patch under klp_mutex. Otherwise, they might find the operation
valid, only for the situation to change before the lock is taken.
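
This is the classic check-then-act (TOCTOU) race. A minimal userspace
sketch of the same fix, with a pthread mutex standing in for klp_mutex
(the names and the simplified state flag are illustrative, not the
kernel code):

```c
/*
 * Userspace sketch of the pattern the patch applies: a pthread mutex
 * stands in for klp_mutex and a plain flag stands in for the patch
 * state.  Names are illustrative, not the kernel code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int enabled = 1;         /* stands in for the patch state */

/* Racy: the state may change between the check and taking the lock. */
static int disable_racy(void)
{
    if (!enabled)
        return -1;              /* checked outside the lock */

    pthread_mutex_lock(&lock);
    /* ... enabled may already be 0 by the time we get here ... */
    enabled = 0;
    pthread_mutex_unlock(&lock);
    return 0;
}

/* Fixed: check and act inside the same critical section. */
static int disable_safe(void)
{
    int ret = 0;

    pthread_mutex_lock(&lock);
    if (!enabled) {
        ret = -1;
        goto out;
    }
    enabled = 0;
out:
    pthread_mutex_unlock(&lock);
    return ret;
}
```

The racy variant can act on a stale answer between the check and the
lock; the safe variant makes the decision while holding the lock, which
is exactly what this patch does for the two klp entry points.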

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/livepatch/core.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index d6d0f50e81f8..b848069e44cc 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -425,11 +425,13 @@ int klp_disable_patch(struct klp_patch *patch)
{
int ret;

- if (!klp_is_enabled())
- return -ENODEV;
-
mutex_lock(&klp_mutex);

+ if (!klp_is_enabled()) {
+ ret = -ENODEV;
+ goto err;
+ }
+
if (!klp_patch_is_registered(patch)) {
ret = -EINVAL;
goto err;
@@ -489,11 +491,13 @@ int klp_enable_patch(struct klp_patch *patch)
{
int ret;

- if (!klp_is_enabled())
- return -ENODEV;
-
mutex_lock(&klp_mutex);

+ if (!klp_is_enabled()) {
+ ret = -ENODEV;
+ goto err;
+ }
+
if (!klp_patch_is_registered(patch)) {
ret = -EINVAL;
goto err;
--
1.8.5.2

2014-12-09 18:06:04

by Petr Mladek

Subject: [PATCH 2/6] livepatch v5: do not quietly ignore wrong usage

I suggested adding more checks between v4 and v5 of this patchset. Some
of them were lazy: they allowed the functions to be called even when there
was nothing to do. Mirek persuaded me that this was a bad design because
it could hide potential problems. We fixed this for klp_enable_object()
but not for the other lazy checks.

This patch removes the remaining lazy checks and makes sure that
the functions are called only when it is allowed.

Note that the check in klp_disable_func() must be an error: the function
is called only when the patch is enabled, and we could not disable it
without a valid address.

Note that this adds some more nested checks, but I plan to clean this up
later by introducing __klp_init_object(), which will help to remove the
duplicated code from module_coming().
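
The design point here — warn and return an error on wrong usage rather
than quietly returning success — can be sketched in userspace C.
WARN_ON() is stubbed with fprintf(), and struct obj and both functions
are hypothetical stand-ins, not the real klp types:

```c
/*
 * Userspace sketch of "fail loudly on wrong usage".  WARN_ON() is
 * stubbed; struct obj and the two variants are illustrative stand-ins
 * for the kernel code, not the real klp structures.
 */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

static int warn_on(int cond, const char *what)
{
    if (cond)
        fprintf(stderr, "warning: %s\n", what);
    return cond;
}
#define WARN_ON(cond) warn_on(!!(cond), #cond)

struct obj {
    void *relocs;
    int loaded;
};

/* Lazy variant: quietly "succeeds" when there is nothing to do,
 * which can hide a bug in the caller. */
static int write_relocs_lazy(struct obj *o)
{
    if (!o->loaded || !o->relocs)
        return 0;
    return 0;           /* ... do the real work ... */
}

/* Strict variant, as in the patch: the caller is responsible for
 * invoking this only when it makes sense. */
static int write_relocs_strict(struct obj *o)
{
    if (WARN_ON(!o->loaded))
        return -EINVAL;
    if (WARN_ON(!o->relocs))
        return -EINVAL;
    return 0;           /* ... do the real work ... */
}
```

With the strict variant, a caller that forgets its own precondition
checks gets a warning and -EINVAL instead of a silent no-op.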

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/livepatch/core.c | 30 ++++++++++++++++--------------
1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index b848069e44cc..034f79a926af 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -231,12 +231,11 @@ static int klp_write_object_relocations(struct module *pmod,
int ret;
struct klp_reloc *reloc;

- if (!klp_is_object_loaded(obj))
- return 0;
+ if (WARN_ON(!klp_is_object_loaded(obj)))
+ return -EINVAL;

- /* be happy when there is nothing to do */
- if (!obj->relocs)
- return 0;
+ if (WARN_ON(!obj->relocs))
+ return -EINVAL;

for (reloc = obj->relocs; reloc->name; reloc++) {
if (!klp_is_module(obj)) {
@@ -315,9 +314,8 @@ static int klp_disable_func(struct klp_func *func)
if (WARN_ON(func->state != KLP_ENABLED))
return -EINVAL;

- if (!func->old_addr)
- /* parent object is not loaded */
- return 0;
+ if (WARN_ON(!func->old_addr))
+ return -EINVAL;

ret = unregister_ftrace_function(func->fops);
if (ret) {
@@ -520,9 +518,11 @@ static void klp_module_notify_coming(struct module *pmod,
pr_notice("applying patch '%s' to loading module '%s'\n",
pmod->name, mod->name);

- ret = klp_write_object_relocations(pmod, obj);
- if (ret)
- goto err;
+ if (obj->relocs) {
+ ret = klp_write_object_relocations(pmod, obj);
+ if (ret)
+ goto err;
+ }

ret = klp_enable_object(obj);
if (!ret)
@@ -751,9 +751,11 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)

klp_find_object_module(obj);

- ret = klp_write_object_relocations(patch->mod, obj);
- if (ret)
- return ret;
+ if (obj->relocs && klp_is_object_loaded(obj)) {
+ ret = klp_write_object_relocations(patch->mod, obj);
+ if (ret)
+ return ret;
+ }

name = klp_is_module(obj) ? obj->name : "vmlinux";
obj->kobj = kobject_create_and_add(name, &patch->kobj);
--
1.8.5.2

2014-12-09 18:06:08

by Petr Mladek

Subject: [PATCH 4/6] livepatch v5: clean up ordering of functions

This patch switches the order of the following functions:

+ klp_enable_func() <-> klp_disable_func()
+ klp_register_patch() <-> klp_unregister_patch()

This makes them consistent with the rest of the functions, which are
defined in the order:

+ klp_disable_*()
+ klp_enable_*()

+ klp_free_*()
+ klp_init_*()

+ klp_unregister_*()
+ klp_register_*()

The patch also moves the module notification handling after the klp_init*()
functions. This will make it possible to split the common code into
__klp_init*() functions and reduce the copy&pasted code.

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/livepatch/core.c | 282 ++++++++++++++++++++++++------------------------
1 file changed, 141 insertions(+), 141 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 54bb39d3abb5..97a8d4a3d6d8 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -280,59 +280,59 @@ static void notrace klp_ftrace_handler(unsigned long ip,
regs->ip = (unsigned long)func->new_func;
}

-static int klp_enable_func(struct klp_func *func)
+static int klp_disable_func(struct klp_func *func)
{
int ret;

- if (WARN_ON(!func->old_addr))
+ if (WARN_ON(func->state != KLP_ENABLED))
return -EINVAL;

- if (WARN_ON(func->state != KLP_DISABLED))
+ if (WARN_ON(!func->old_addr))
return -EINVAL;

- ret = ftrace_set_filter_ip(func->fops, func->old_addr, 0, 0);
+ ret = unregister_ftrace_function(func->fops);
if (ret) {
- pr_err("failed to set ftrace filter for function '%s' (%d)\n",
+ pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
func->old_name, ret);
return ret;
}

- ret = register_ftrace_function(func->fops);
- if (ret) {
- pr_err("failed to register ftrace handler for function '%s' (%d)\n",
- func->old_name, ret);
- ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
- } else {
- func->state = KLP_ENABLED;
- }
+ ret = ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
+ if (ret)
+ pr_warn("function unregister succeeded but failed to clear the filter\n");

- return ret;
+ func->state = KLP_DISABLED;
+
+ return 0;
}

-static int klp_disable_func(struct klp_func *func)
+static int klp_enable_func(struct klp_func *func)
{
int ret;

- if (WARN_ON(func->state != KLP_ENABLED))
+ if (WARN_ON(!func->old_addr))
return -EINVAL;

- if (WARN_ON(!func->old_addr))
+ if (WARN_ON(func->state != KLP_DISABLED))
return -EINVAL;

- ret = unregister_ftrace_function(func->fops);
+ ret = ftrace_set_filter_ip(func->fops, func->old_addr, 0, 0);
if (ret) {
- pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
+ pr_err("failed to set ftrace filter for function '%s' (%d)\n",
func->old_name, ret);
return ret;
}

- ret = ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
- if (ret)
- pr_warn("function unregister succeeded but failed to clear the filter\n");
-
- func->state = KLP_DISABLED;
+ ret = register_ftrace_function(func->fops);
+ if (ret) {
+ pr_err("failed to register ftrace handler for function '%s' (%d)\n",
+ func->old_name, ret);
+ ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
+ } else {
+ func->state = KLP_ENABLED;
+ }

- return 0;
+ return ret;
}

static int klp_disable_object(struct klp_object *obj)
@@ -504,98 +504,6 @@ err:
}
EXPORT_SYMBOL_GPL(klp_enable_patch);

-static void klp_module_notify_coming(struct module *pmod,
- struct klp_object *obj)
-{
- struct klp_func *func;
- struct module *mod = obj->mod;
- int ret;
-
- pr_notice("applying patch '%s' to loading module '%s'\n",
- pmod->name, mod->name);
-
- if (obj->relocs) {
- ret = klp_write_object_relocations(pmod, obj);
- if (ret)
- goto err;
- }
-
- for (func = obj->funcs; func->old_name; func++) {
- ret = klp_find_verify_func_addr(obj, func);
- if (ret)
- goto err;
- }
-
- ret = klp_enable_object(obj);
- if (!ret)
- return;
-
-err:
- pr_warn("failed to apply patch '%s' to module '%s' (%d)\n",
- pmod->name, mod->name, ret);
-}
-
-static void klp_module_notify_going(struct module *pmod,
- struct klp_object *obj)
-{
- struct klp_func *func;
- struct module *mod = obj->mod;
- int ret;
-
- pr_notice("reverting patch '%s' on unloading module '%s'\n",
- pmod->name, mod->name);
-
- ret = klp_disable_object(obj);
- if (ret)
- pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
- pmod->name, mod->name, ret);
-
- for (func = obj->funcs; func->old_name; func++)
- func->old_addr = 0;
-
- obj->mod = NULL;
-}
-
-static int klp_module_notify(struct notifier_block *nb, unsigned long action,
- void *data)
-{
- struct module *mod = data;
- struct klp_patch *patch;
- struct klp_object *obj;
-
- if (action != MODULE_STATE_COMING && action != MODULE_STATE_GOING)
- return 0;
-
- mutex_lock(&klp_mutex);
-
- list_for_each_entry(patch, &klp_patches, list) {
- if (patch->state == KLP_DISABLED)
- continue;
-
- for (obj = patch->objs; obj->funcs; obj++) {
- if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
- continue;
-
- if (action == MODULE_STATE_COMING) {
- obj->mod = mod;
- klp_module_notify_coming(patch->mod, obj);
- } else /* MODULE_STATE_GOING */
- klp_module_notify_going(patch->mod, obj);
-
- break;
- }
- }
-
- mutex_unlock(&klp_mutex);
-
- return 0;
-}
-
-static struct notifier_block klp_module_nb = {
- .notifier_call = klp_module_notify,
- .priority = INT_MIN+1, /* called late but before ftrace notifier */
-};
-
/*
* Sysfs Interface
*
@@ -823,6 +731,41 @@ unlock:
}

/**
+ * klp_unregister_patch() - unregisters a patch
+ * @patch: Disabled patch to be unregistered
+ *
+ * Frees the data structures and removes the sysfs interface.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_unregister_patch(struct klp_patch *patch)
+{
+ int ret = 0;
+
+ if (!klp_is_enabled())
+ return -ENODEV;
+
+ mutex_lock(&klp_mutex);
+
+ if (!klp_patch_is_registered(patch)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (patch->state == KLP_ENABLED) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ klp_free_patch(patch);
+
+out:
+ mutex_unlock(&klp_mutex);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(klp_unregister_patch);
+
+/**
* klp_register_patch() - registers a patch
* @patch: Patch to be registered
*
@@ -859,40 +802,97 @@ int klp_register_patch(struct klp_patch *patch)
}
EXPORT_SYMBOL_GPL(klp_register_patch);

-/**
- * klp_unregister_patch() - unregisters a patch
- * @patch: Disabled patch to be unregistered
- *
- * Frees the data structures and removes the sysfs interface.
- *
- * Return: 0 on success, otherwise error
- */
-int klp_unregister_patch(struct klp_patch *patch)
+static void klp_module_notify_coming(struct module *pmod,
+ struct klp_object *obj)
{
- int ret = 0;
-
- if (!klp_is_enabled())
- return -ENODEV;
+ struct klp_func *func;
+ struct module *mod = obj->mod;
+ int ret;

- mutex_lock(&klp_mutex);
+ pr_notice("applying patch '%s' to loading module '%s'\n",
+ pmod->name, mod->name);

- if (!klp_patch_is_registered(patch)) {
- ret = -EINVAL;
- goto out;
+ if (obj->relocs) {
+ ret = klp_write_object_relocations(pmod, obj);
+ if (ret)
+ goto err;
}

- if (patch->state == KLP_ENABLED) {
- ret = -EBUSY;
- goto out;
+ for (func = obj->funcs; func->old_name; func++) {
+ ret = klp_find_verify_func_addr(obj, func);
+ if (ret)
+ goto err;
}

- klp_free_patch(patch);
+ ret = klp_enable_object(obj);
+ if (!ret)
+ return;
+
+err:
+ pr_warn("failed to apply patch '%s' to module '%s' (%d)\n",
+ pmod->name, mod->name, ret);
+}
+
+static void klp_module_notify_going(struct module *pmod,
+ struct klp_object *obj)
+{
+ struct klp_func *func;
+ struct module *mod = obj->mod;
+ int ret;
+
+ pr_notice("reverting patch '%s' on unloading module '%s'\n",
+ pmod->name, mod->name);
+
+ ret = klp_disable_object(obj);
+ if (ret)
+ pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
+ pmod->name, mod->name, ret);
+
+ for (func = obj->funcs; func->old_name; func++)
+ func->old_addr = 0;
+
+ obj->mod = NULL;
+}
+
+static int klp_module_notify(struct notifier_block *nb, unsigned long action,
+ void *data)
+{
+ struct module *mod = data;
+ struct klp_patch *patch;
+ struct klp_object *obj;
+
+ if (action != MODULE_STATE_COMING && action != MODULE_STATE_GOING)
+ return 0;
+
+ mutex_lock(&klp_mutex);
+
+ list_for_each_entry(patch, &klp_patches, list) {
+ if (patch->state == KLP_DISABLED)
+ continue;
+
+ for (obj = patch->objs; obj->funcs; obj++) {
+ if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
+ continue;
+
+ if (action == MODULE_STATE_COMING) {
+ obj->mod = mod;
+ klp_module_notify_coming(patch->mod, obj);
+ } else /* MODULE_STATE_GOING */
+ klp_module_notify_going(patch->mod, obj);
+
+ break;
+ }
+ }

-out:
mutex_unlock(&klp_mutex);
- return ret;
+
+ return 0;
}
-EXPORT_SYMBOL_GPL(klp_unregister_patch);
+
+static struct notifier_block klp_module_nb = {
+ .notifier_call = klp_module_notify,
+ .priority = INT_MIN+1, /* called late but before ftrace notifier */
+};

static int klp_init(void)
{
--
1.8.5.2

2014-12-09 18:06:31

by Petr Mladek

Subject: [PATCH 6/6] livepatch v5: clean up usage of the old and new addresses

The situation with the variables storing the addresses of the old and
new code is not ideal. One is called "func" and the other "addr";
one is a pointer and the other an unsigned long. This makes sense
given how the values are defined, but it can make the code harder
to understand where the values are used.

This patch introduces two new variables, "old_ip" and "new_ip".
They have the same type and symmetric names. They are supposed
to carry the address of the mcount call-site that the ftrace framework
operates on. The value is the same as the function address
on x86, but it might differ on other architectures.

The change is inspired by kGraft, which even adds ftrace_function_to_fentry(),
see https://lkml.org/lkml/2014/4/30/688. That function finds the exact
mcount call-site location. This patch uses ftrace_location(), which only
verifies that the address is correct. It means that we detect a potential
problem already when the patch is being initialized.
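
The init-time check can be sketched in userspace C with a stubbed
ftrace_location(). The addresses, struct, and helper below are
hypothetical; in the kernel, ftrace_location() returns the mcount
call-site for an address inside a traced function, or 0:

```c
/*
 * Userspace sketch of the init-time verification.  ftrace_location()
 * is stubbed out; the addresses and struct are illustrative, not the
 * real klp_func.
 */
#include <errno.h>
#include <stdio.h>

#define TRACEABLE_FUNC 0x1000UL /* pretend address of a traceable function */

static unsigned long ftrace_location(unsigned long addr)
{
    /* in the kernel: mcount call-site for addr, or 0 if not traceable */
    return addr == TRACEABLE_FUNC ? addr : 0;
}

struct func {
    unsigned long old_addr;
    unsigned long old_ip;
};

/* Resolve and verify the mcount call-site when the patch is
 * initialized, so a bad address is reported early. */
static int verify_old_ip(struct func *f)
{
    f->old_ip = ftrace_location(f->old_addr);
    if (!f->old_ip) {
        fprintf(stderr, "address %lx can not be patched\n",
                f->old_addr);
        return -EINVAL;
    }
    return 0;
}
```

A patch referencing an address that ftrace cannot trace is rejected at
init time instead of failing later, when the patch is enabled.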

Signed-off-by: Petr Mladek <[email protected]>
---
include/linux/livepatch.h | 24 ++++++++++++-----------
kernel/livepatch/core.c | 49 ++++++++++++++++++++++++++++++++++++++---------
2 files changed, 53 insertions(+), 20 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index b61fe7402f1f..32f82d798c54 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -36,8 +36,16 @@ enum klp_state {
* struct klp_func - function structure for live patching
* @old_name: name of the function to be patched
* @new_func: pointer to the patched function code
- * @old_addr: a hint conveying at what address the old function
- * can be found (optional, vmlinux patches only)
+ * @old_addr: address of the function to be patched; if predefined then
+ * the value is used as a hint conveying at what address
+ * the old function can be found; otherwise, the value is
+ * located by name with kallsyms; the predefined value makes
+ * sense only for vmlinux functions and when randomization is
+ * disabled
+ * @old_ip: address of the mcount call-site for the old function;
+ * it is the address that ftrace works with
+ * @new_ip: address of the mcount call-site for the new function;
+ * it is the address that ftrace works with
* @kobj: kobject for sysfs resources
* @fops: ftrace operations structure
* @state: tracks function-level patch application state
@@ -46,15 +54,9 @@ struct klp_func {
/* external */
const char *old_name;
void *new_func;
- /*
- * The old_addr field is optional and can be used to resolve
- * duplicate symbol names in the vmlinux object. If this
- * information is not present, the symbol is located by name
- * with kallsyms. If the name is not unique and old_addr is
- * not provided, the patch application fails as there is no
- * way to resolve the ambiguity.
- */
unsigned long old_addr;
+ unsigned long old_ip;
+ unsigned long new_ip;

/* internal */
struct kobject kobj;
@@ -88,7 +90,7 @@ struct klp_reloc {
* @funcs: function entries for functions to be patched in the object
* @kobj: kobject for sysfs resources
* @mod: kernel module associated with the patched object
- * (NULL for vmlinux)
+ * (NULL for vmlinux)
* @state: tracks object-level patch application state
*/
struct klp_object {
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index fe312b9ada78..c54230bf17fe 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -179,8 +179,8 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
return -EINVAL;
}

-static int klp_find_verify_func_addr(struct klp_object *obj,
- struct klp_func *func)
+static int klp_find_verify_func_old_addr(struct klp_object *obj,
+ struct klp_func *func)
{
int ret;

@@ -205,6 +205,36 @@ static int klp_find_verify_func_addr(struct klp_object *obj,
return ret;
}

+static int klp_find_verify_func_addresses(struct klp_object *obj,
+ struct klp_func *func)
+{
+ int ret;
+
+ ret = klp_find_verify_func_old_addr(obj, func);
+ if (ret)
+ return ret;
+
+ func->old_ip = ftrace_location(func->old_addr);
+ if (!func->old_ip) {
+ pr_err("function '%s' at the address '%lx' can not be patched\n",
+ func->old_name, func->old_addr);
+ return -EINVAL;
+ }
+
+ /*
+ * In fact, the new function need not be traceable. This check is
+ * used to make sure that the address is a correct mcount call-site.
+ */
+ func->new_ip = ftrace_location((unsigned long)func->new_func);
+ if (!func->new_ip) {
+ pr_err("function at the address '%p' can not be used for patching\n",
+ func->new_func);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/*
* external symbols are located outside the parent object (where the parent
* object is either vmlinux or the kmod being patched).
@@ -277,7 +307,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
{
struct klp_func *func = ops->private;

- regs->ip = (unsigned long)func->new_func;
+ regs->ip = func->new_ip;
}

static int klp_disable_func(struct klp_func *func)
@@ -287,7 +317,7 @@ static int klp_disable_func(struct klp_func *func)
if (WARN_ON(func->state != KLP_ENABLED))
return -EINVAL;

- if (WARN_ON(!func->old_addr))
+ if (WARN_ON(!func->old_ip))
return -EINVAL;

ret = unregister_ftrace_function(func->fops);
@@ -297,7 +327,7 @@ static int klp_disable_func(struct klp_func *func)
return ret;
}

- ret = ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
+ ret = ftrace_set_filter_ip(func->fops, func->old_ip, 1, 0);
if (ret)
pr_warn("function unregister succeeded but failed to clear the filter\n");

@@ -310,13 +340,13 @@ static int klp_enable_func(struct klp_func *func)
{
int ret;

- if (WARN_ON(!func->old_addr))
+ if (WARN_ON(!func->old_ip))
return -EINVAL;

if (WARN_ON(func->state != KLP_DISABLED))
return -EINVAL;

- ret = ftrace_set_filter_ip(func->fops, func->old_addr, 0, 0);
+ ret = ftrace_set_filter_ip(func->fops, func->old_ip, 0, 0);
if (ret) {
pr_err("failed to set ftrace filter for function '%s' (%d)\n",
func->old_name, ret);
@@ -327,7 +357,7 @@ static int klp_enable_func(struct klp_func *func)
if (ret) {
pr_err("failed to register ftrace handler for function '%s' (%d)\n",
func->old_name, ret);
- ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
+ ftrace_set_filter_ip(func->fops, func->old_ip, 1, 0);
} else {
func->state = KLP_ENABLED;
}
@@ -594,6 +624,7 @@ static struct kobj_type klp_ktype_func = {
static void klp_free_func_loaded(struct klp_func *func)
{
func->old_addr = 0;
+ func->old_ip = 0;
}

/*
@@ -646,7 +677,7 @@ static void klp_free_patch(struct klp_patch *patch)
/* parts of the initialization that is done only when the object is loaded */
static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
{
- return klp_find_verify_func_addr(obj, func);
+ return klp_find_verify_func_addresses(obj, func);
}

static int klp_init_func(struct klp_object *obj, struct klp_func *func)
--
1.8.5.2

2014-12-09 18:06:12

by Petr Mladek

Subject: [PATCH 5/6] livepatch v5: split init and free code that is done only for loaded modules

This patch makes it clear which initialization and freeing steps need to be
done when an object (module) is loaded or removed. It will help to maintain
the module coming and going handlers. It also removes the code duplicated
in these handlers.

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/livepatch/core.c | 92 ++++++++++++++++++++++++++++++++-----------------
1 file changed, 61 insertions(+), 31 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 97a8d4a3d6d8..fe312b9ada78 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -590,6 +590,12 @@ static struct kobj_type klp_ktype_func = {
.sysfs_ops = &kobj_sysfs_ops,
};

+/* Clean up when a patched object is unloaded */
+static void klp_free_func_loaded(struct klp_func *func)
+{
+ func->old_addr = 0;
+}
+
/*
* Free all functions' kobjects in the array up to some limit. When limit is
* NULL, all kobjects are freed.
@@ -603,6 +609,17 @@ static void klp_free_funcs_limited(struct klp_object *obj,
kobject_put(&func->kobj);
}

+/* Clean up when a patched object is unloaded */
+static void klp_free_object_loaded(struct klp_object *obj)
+{
+ struct klp_func *func;
+
+ obj->mod = NULL;
+
+ for (func = obj->funcs; func->old_name; func++)
+ klp_free_func_loaded(func);
+}
+
/*
* Free all objects' kobjects in the array up to some limit. When limit is
* NULL, all kobjects are freed.
@@ -626,6 +643,12 @@ static void klp_free_patch(struct klp_patch *patch)
kobject_put(&patch->kobj);
}

+/* parts of the initialization that is done only when the object is loaded */
+static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
+{
+ return klp_find_verify_func_addr(obj, func);
+}
+
static int klp_init_func(struct klp_object *obj, struct klp_func *func)
{
struct ftrace_ops *ops;
@@ -633,10 +656,6 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)

func->state = KLP_DISABLED;

- ret = klp_find_verify_func_addr(obj, func);
- if (ret)
- return ret;
-
ops = kzalloc(sizeof(*ops), GFP_KERNEL);
if (!ops)
ret = -ENOMEM;
@@ -656,6 +675,28 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
return 0;
}

+/* parts of the initialization that is done only when the object is loaded */
+static int klp_init_object_loaded(struct klp_patch *patch,
+ struct klp_object *obj)
+{
+ struct klp_func *func;
+ int ret;
+
+ if (obj->relocs) {
+ ret = klp_write_object_relocations(patch->mod, obj);
+ if (ret)
+ return ret;
+ }
+
+ for (func = obj->funcs; func->old_name; func++) {
+ ret = klp_init_func_loaded(obj, func);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
{
struct klp_func *func;
@@ -669,12 +710,6 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)

klp_find_object_module(obj);

- if (obj->relocs && klp_is_object_loaded(obj)) {
- ret = klp_write_object_relocations(patch->mod, obj);
- if (ret)
- return ret;
- }
-
name = klp_is_module(obj) ? obj->name : "vmlinux";
obj->kobj = kobject_create_and_add(name, &patch->kobj);
if (!obj->kobj)
@@ -686,6 +721,12 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
goto free;
}

+ if (klp_is_object_loaded(obj)) {
+ ret = klp_init_object_loaded(patch, obj);
+ if (ret)
+ goto free;
+ }
+
return 0;

free:
@@ -802,27 +843,19 @@ int klp_register_patch(struct klp_patch *patch)
}
EXPORT_SYMBOL_GPL(klp_register_patch);

-static void klp_module_notify_coming(struct module *pmod,
+static void klp_module_notify_coming(struct klp_patch *patch,
struct klp_object *obj)
{
- struct klp_func *func;
+ struct module *pmod = patch->mod;
struct module *mod = obj->mod;
int ret;

pr_notice("applying patch '%s' to loading module '%s'\n",
pmod->name, mod->name);

- if (obj->relocs) {
- ret = klp_write_object_relocations(pmod, obj);
- if (ret)
- goto err;
- }
-
- for (func = obj->funcs; func->old_name; func++) {
- ret = klp_find_verify_func_addr(obj, func);
- if (ret)
- goto err;
- }
+ ret = klp_init_object_loaded(patch, obj);
+ if (ret)
+ goto err;

ret = klp_enable_object(obj);
if (!ret)
@@ -833,10 +866,10 @@ err:
pmod->name, mod->name, ret);
}

-static void klp_module_notify_going(struct module *pmod,
+static void klp_module_notify_going(struct klp_patch *patch,
struct klp_object *obj)
{
- struct klp_func *func;
+ struct module *pmod = patch->mod;
struct module *mod = obj->mod;
int ret;

@@ -848,10 +881,7 @@ static void klp_module_notify_going(struct module *pmod,
pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
pmod->name, mod->name, ret);

- for (func = obj->funcs; func->old_name; func++)
- func->old_addr = 0;
-
- obj->mod = NULL;
+ klp_free_object_loaded(obj);
}

static int klp_module_notify(struct notifier_block *nb, unsigned long action,
@@ -876,9 +906,9 @@ static int klp_module_notify(struct notifier_block *nb, unsigned long action,

if (action == MODULE_STATE_COMING) {
obj->mod = mod;
- klp_module_notify_coming(patch->mod, obj);
+ klp_module_notify_coming(patch, obj);
} else /* MODULE_STATE_GOING */
- klp_module_notify_going(patch->mod, obj);
+ klp_module_notify_going(patch, obj);

break;
}
--
1.8.5.2

2014-12-09 18:06:55

by Petr Mladek

Subject: [PATCH 3/6] livepatch v5: find and verify the old address in klp_*_init()

The patched address is found and checked when the patch is enabled.
If there is a failure, the patch must be reverted. This might happen,
for example, when there are duplicated symbols or when the patch
and the kernel are incompatible.

It would be better to detect these problems earlier, when the structures
are initialized. It will also help to avoid the costly symbol lookup
when the patch is disabled and enabled again.

This patch makes the necessary changes. It adds some complexity
to the module_coming() and module_going() functions, but I think that
the design is cleaner. For example, it removes the strange situation
where klp_find_verify_func_addr() sets func->old_addr but complains
when a valid value has already been set before.

Note that I plan to remove the duplicate code by introducing
__klp_init_func().

Signed-off-by: Petr Mladek <[email protected]>
---
kernel/livepatch/core.c | 44 +++++++++++++++++++++++++++-----------------
1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 034f79a926af..54bb39d3abb5 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -179,7 +179,8 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
return -EINVAL;
}

-static int klp_find_verify_func_addr(struct klp_func *func, const char *objname)
+static int klp_find_verify_func_addr(struct klp_object *obj,
+ struct klp_func *func)
{
int ret;

@@ -188,17 +189,18 @@ static int klp_find_verify_func_addr(struct klp_func *func, const char *objname)
func->old_addr = 0;
#endif

- if (func->old_addr && objname) {
- pr_err("old address specified for module symbol\n");
- return -EINVAL;
- }
-
- if (func->old_addr)
+ /*
+ * If the old address is predefined, it might be used to check
+ * the consistency between the patch and the patched kernel.
+ * This does not work for kernel modules, where the address
+ * must always be detected.
+ */
+ if (!func->old_addr || klp_is_module(obj))
+ ret = klp_find_object_symbol(obj->name, func->old_name,
+ &func->old_addr);
+ else
ret = klp_verify_vmlinux_symbol(func->old_name,
func->old_addr);
- else
- ret = klp_find_object_symbol(objname, func->old_name,
- &func->old_addr);

return ret;
}
@@ -348,9 +350,6 @@ static int klp_disable_object(struct klp_object *obj)
ret = klp_disable_func(func);
if (ret)
return ret;
-
- if (klp_is_module(obj))
- func->old_addr = 0;
}

obj->state = KLP_DISABLED;
@@ -370,10 +369,6 @@ static int klp_enable_object(struct klp_object *obj)
return -EINVAL;

for (func = obj->funcs; func->old_name; func++) {
- ret = klp_find_verify_func_addr(func, obj->name);
- if (ret)
- goto unregister;
-
ret = klp_enable_func(func);
if (ret)
goto unregister;
@@ -512,6 +507,7 @@ EXPORT_SYMBOL_GPL(klp_enable_patch);
static void klp_module_notify_coming(struct module *pmod,
struct klp_object *obj)
{
+ struct klp_func *func;
struct module *mod = obj->mod;
int ret;

@@ -524,6 +520,12 @@ static void klp_module_notify_coming(struct module *pmod,
goto err;
}

+ for (func = obj->funcs; func->old_name; func++) {
+ ret = klp_find_verify_func_addr(obj, func);
+ if (ret)
+ goto err;
+ }
+
ret = klp_enable_object(obj);
if (!ret)
return;
@@ -536,6 +538,7 @@ err:
static void klp_module_notify_going(struct module *pmod,
struct klp_object *obj)
{
+ struct klp_func *func;
struct module *mod = obj->mod;
int ret;

@@ -547,6 +550,9 @@ static void klp_module_notify_going(struct module *pmod,
pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
pmod->name, mod->name, ret);

+ for (func = obj->funcs; func->old_name; func++)
+ func->old_addr = 0;
+
obj->mod = NULL;
}

@@ -719,6 +725,10 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)

func->state = KLP_DISABLED;

+ ret = klp_find_verify_func_addr(obj, func);
+ if (ret)
+ return ret;
+
ops = kzalloc(sizeof(*ops), GFP_KERNEL);
if (!ops)
ret = -ENOMEM;
--
1.8.5.2

2014-12-09 18:33:28

by Josh Poimboeuf

Subject: Re: [PATCH 1/6] livepatch v5: avoid race when checking for state of the patch

On Tue, Dec 09, 2014 at 07:05:02PM +0100, Petr Mladek wrote:
> klp_enable_patch() and klp_disable_patch() should check the current state
> of the patch under klp_mutex. Otherwise, they might find the operation
> valid, only for the situation to change before the lock is taken.

Hi Petr,

Thanks for the patches.

I don't think this patch is necessary. klp_is_enabled() doesn't check
the state of the patch. It checks the initialization state of the core
module (klp_root_kobj), which can only be set in klp_init(). It's not
protected by the lock, so I don't see the point of this patch.

>
> Signed-off-by: Petr Mladek <[email protected]>
> ---
> kernel/livepatch/core.c | 16 ++++++++++------
> 1 file changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index d6d0f50e81f8..b848069e44cc 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -425,11 +425,13 @@ int klp_disable_patch(struct klp_patch *patch)
> {
> int ret;
>
> - if (!klp_is_enabled())
> - return -ENODEV;
> -
> mutex_lock(&klp_mutex);
>
> + if (!klp_is_enabled()) {
> + ret = -ENODEV;
> + goto err;
> + }
> +
> if (!klp_patch_is_registered(patch)) {
> ret = -EINVAL;
> goto err;
> @@ -489,11 +491,13 @@ int klp_enable_patch(struct klp_patch *patch)
> {
> int ret;
>
> - if (!klp_is_enabled())
> - return -ENODEV;
> -
> mutex_lock(&klp_mutex);
>
> + if (!klp_is_enabled()) {
> + ret = -ENODEV;
> + goto err;
> + }
> +
> if (!klp_patch_is_registered(patch)) {
> ret = -EINVAL;
> goto err;
> --
> 1.8.5.2
>

--
Josh

2014-12-09 19:20:48

by Josh Poimboeuf

Subject: Re: [PATCH 6/6] livepatch v5: clean up usage of the old and new addresses

On Tue, Dec 09, 2014 at 07:05:07PM +0100, Petr Mladek wrote:
> The situation with the variables storing the addresses of the old and new
> code is not ideal. One is called "func" and the other "addr". One is a
> pointer and the other an unsigned long. This makes sense given how the
> values are defined, but it can make the code harder to understand where
> the values are used.
>
> This patch introduces two new variables "old_ip" and "new_ip".
> They have the same type and the name is symmetric.

I agree that the naming of addr vs func is a little weird, but I find
that this patch makes the code more confusing. Now we have "func",
"addr" and "ip", all different words meaning "function address".

Adding to the confusion is the existence of four variables instead of
two.

> They are supposed to carry the address of the mcount call-site that
> the ftrace framework operates with. The value is the same as the
> function address on x86 but it might differ on other architectures.

I didn't know the mcount call site address can differ from the function
address for some architectures. Can you give more details about this?

> The change is inspired by kGraft, which even adds ftrace_function_to_fentry(),
> see https://lkml.org/lkml/2014/4/30/688. That function finds the right mcount
> call-site location. This patch uses ftrace_location(), which just verifies
> that the address is correct. It means that we detect a potential problem
> already when the patch is being initialized.
>
> Signed-off-by: Petr Mladek <[email protected]>
> ---
> include/linux/livepatch.h | 24 ++++++++++++-----------
> kernel/livepatch/core.c | 49 ++++++++++++++++++++++++++++++++++++++---------
> 2 files changed, 53 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> index b61fe7402f1f..32f82d798c54 100644
> --- a/include/linux/livepatch.h
> +++ b/include/linux/livepatch.h
> @@ -36,8 +36,16 @@ enum klp_state {
> * struct klp_func - function structure for live patching
> * @old_name: name of the function to be patched
> * @new_func: pointer to the patched function code
> - * @old_addr: a hint conveying at what address the old function
> - * can be found (optional, vmlinux patches only)
> + * @old_addr: address of the function to be patched; if predefined then
> + * the value is used as a hint conveying at what address
> + * the old function can be found; otherwise, the value is
> + * located by name with kallsyms; the predefined value makes
> + * sense only for vmlinux function and when randomization is
> + * disabled
> + * @old_ip: address of the mcount call-site for the old function;
> + * it is the address that ftrace works with
> + * @new_ip: address of the mcount call-site for the new function;
> + * it is the address that ftrace works with
> * @kobj: kobject for sysfs resources
> * @fops: ftrace operations structure
> * @state: tracks function-level patch application state
> @@ -46,15 +54,9 @@ struct klp_func {
> /* external */
> const char *old_name;
> void *new_func;
> - /*
> - * The old_addr field is optional and can be used to resolve
> - * duplicate symbol names in the vmlinux object. If this
> - * information is not present, the symbol is located by name
> - * with kallsyms. If the name is not unique and old_addr is
> - * not provided, the patch application fails as there is no
> - * way to resolve the ambiguity.
> - */
> unsigned long old_addr;
> + unsigned long old_ip;
> + unsigned long new_ip;

old_ip and new_ip should be in the /* internal */ section.

>
> /* internal */
> struct kobject kobj;
> @@ -88,7 +90,7 @@ struct klp_reloc {
> * @funcs: function entries for functions to be patched in the object
> * @kobj: kobject for sysfs resources
> * @mod: kernel module associated with the patched object
> - * (NULL for vmlinux)
> + * (NULL for vmlinux)
> * @state: tracks object-level patch application state
> */
> struct klp_object {
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index fe312b9ada78..c54230bf17fe 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -179,8 +179,8 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
> return -EINVAL;
> }
>
> -static int klp_find_verify_func_addr(struct klp_object *obj,
> - struct klp_func *func)
> +static int klp_find_verify_func_old_addr(struct klp_object *obj,
> + struct klp_func *func)
> {
> int ret;
>
> @@ -205,6 +205,36 @@ static int klp_find_verify_func_addr(struct klp_object *obj,
> return ret;
> }
>
> +static int klp_find_verify_func_addresses(struct klp_object *obj,
> + struct klp_func *func)
> +{
> + int ret;
> +
> + ret = klp_find_verify_func_old_addr(obj, func);
> + if (ret)
> + return ret;
> +
> + func->old_ip = ftrace_location(func->old_addr);
> + if (!func->old_ip) {
> + pr_err("function '%s' at the address '%lx' can not be patched\n",
> + func->old_name, func->old_addr);
> + return -EINVAL;
> + }
> +
> + /*
> + * In fact, the new function need not be traceable. This check is
> + * used to make sure that the address is a correct mcount call-site.
> + */
> + func->new_ip = ftrace_location((unsigned long)func->new_func);
> + if (!func->new_ip) {
> + pr_err("function at the address '%p' can not be used for patching\n",
> + func->new_func);
> + return -EINVAL;
> + }

Why does the new function need to have an mcount call site? And why do
we want to use that address rather than the real function address?

> +
> + return 0;
> +}
> +
> /*
> * external symbols are located outside the parent object (where the parent
> * object is either vmlinux or the kmod being patched).
> @@ -277,7 +307,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
> {
> struct klp_func *func = ops->private;
>
> - regs->ip = (unsigned long)func->new_func;
> + regs->ip = func->new_ip;
> }
>
> static int klp_disable_func(struct klp_func *func)
> @@ -287,7 +317,7 @@ static int klp_disable_func(struct klp_func *func)
> if (WARN_ON(func->state != KLP_ENABLED))
> return -EINVAL;
>
> - if (WARN_ON(!func->old_addr))
> + if (WARN_ON(!func->old_ip))
> return -EINVAL;
>
> ret = unregister_ftrace_function(func->fops);
> @@ -297,7 +327,7 @@ static int klp_disable_func(struct klp_func *func)
> return ret;
> }
>
> - ret = ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
> + ret = ftrace_set_filter_ip(func->fops, func->old_ip, 1, 0);
> if (ret)
> pr_warn("function unregister succeeded but failed to clear the filter\n");
>
> @@ -310,13 +340,13 @@ static int klp_enable_func(struct klp_func *func)
> {
> int ret;
>
> - if (WARN_ON(!func->old_addr))
> + if (WARN_ON(!func->old_ip))
> return -EINVAL;
>
> if (WARN_ON(func->state != KLP_DISABLED))
> return -EINVAL;
>
> - ret = ftrace_set_filter_ip(func->fops, func->old_addr, 0, 0);
> + ret = ftrace_set_filter_ip(func->fops, func->old_ip, 0, 0);
> if (ret) {
> pr_err("failed to set ftrace filter for function '%s' (%d)\n",
> func->old_name, ret);
> @@ -327,7 +357,7 @@ static int klp_enable_func(struct klp_func *func)
> if (ret) {
> pr_err("failed to register ftrace handler for function '%s' (%d)\n",
> func->old_name, ret);
> - ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
> + ftrace_set_filter_ip(func->fops, func->old_ip, 1, 0);
> } else {
> func->state = KLP_ENABLED;
> }
> @@ -594,6 +624,7 @@ static struct kobj_type klp_ktype_func = {
> static void klp_free_func_loaded(struct klp_func *func)
> {
> func->old_addr = 0;
> + func->old_ip = 0;
> }
>
> /*
> @@ -646,7 +677,7 @@ static void klp_free_patch(struct klp_patch *patch)
> /* parts of the initialization that is done only when the object is loaded */
> static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
> {
> - return klp_find_verify_func_addr(obj, func);
> + return klp_find_verify_func_addresses(obj, func);
> }
>
> static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> --
> 1.8.5.2
>

--
Josh

2014-12-09 19:48:56

by Josh Poimboeuf

Subject: Re: [PATCH 5/6] livepatch v5: split init and free code that is done only for loaded modules

On Tue, Dec 09, 2014 at 07:05:06PM +0100, Petr Mladek wrote:
> This patch makes it clear what initialization and freeing steps need to be done
> when an object (module) is being loaded or removed. It will help to maintain
> the module coming and going handlers. Also it will remove duplicated
> code from these handlers.
>
> Signed-off-by: Petr Mladek <[email protected]>
> ---
> kernel/livepatch/core.c | 92 ++++++++++++++++++++++++++++++++-----------------
> 1 file changed, 61 insertions(+), 31 deletions(-)
>
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 97a8d4a3d6d8..fe312b9ada78 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -590,6 +590,12 @@ static struct kobj_type klp_ktype_func = {
> .sysfs_ops = &kobj_sysfs_ops,
> };
>
> +/* Clean up when a patched object is unloaded */
> +static void klp_free_func_loaded(struct klp_func *func)
> +{
> + func->old_addr = 0;
> +}
> +
> /*
> * Free all functions' kobjects in the array up to some limit. When limit is
> * NULL, all kobjects are freed.
> @@ -603,6 +609,17 @@ static void klp_free_funcs_limited(struct klp_object *obj,
> kobject_put(&func->kobj);
> }
>
> +/* Clean up when a patched object is unloaded */
> +static void klp_free_object_loaded(struct klp_object *obj)
> +{
> + struct klp_func *func;
> +
> + obj->mod = NULL;
> +
> + for (func = obj->funcs; func->old_name; func++)
> + klp_free_func_loaded(func);
> +}
> +
> /*
> * Free all objects' kobjects in the array up to some limit. When limit is
> * NULL, all kobjects are freed.
> @@ -626,6 +643,12 @@ static void klp_free_patch(struct klp_patch *patch)
> kobject_put(&patch->kobj);
> }
>
> +/* parts of the initialization that is done only when the object is loaded */
> +static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
> +{
> + return klp_find_verify_func_addr(obj, func);
> +}
> +

Creating a new function here for one line of code, which is only called
once, seems excessive, and makes the code harder to understand IMO.

Ditto for klp_free_func_loaded.

> static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> {
> struct ftrace_ops *ops;
> @@ -633,10 +656,6 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
>
> func->state = KLP_DISABLED;
>
> - ret = klp_find_verify_func_addr(obj, func);
> - if (ret)
> - return ret;
> -
> ops = kzalloc(sizeof(*ops), GFP_KERNEL);
> if (!ops)
> ret = -ENOMEM;
> @@ -656,6 +675,28 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> return 0;
> }
>
> +/* parts of the initialization that is done only when the object is loaded */
> +static int klp_init_object_loaded(struct klp_patch *patch,
> + struct klp_object *obj)
> +{
> + struct klp_func *func;
> + int ret;
> +
> + if (obj->relocs) {
> + ret = klp_write_object_relocations(patch->mod, obj);
> + if (ret)
> + return ret;
> + }
> +
> + for (func = obj->funcs; func->old_name; func++) {
> + ret = klp_init_func_loaded(obj, func);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
> {
> struct klp_func *func;
> @@ -669,12 +710,6 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
>
> klp_find_object_module(obj);
>
> - if (obj->relocs && klp_is_object_loaded(obj)) {
> - ret = klp_write_object_relocations(patch->mod, obj);
> - if (ret)
> - return ret;
> - }
> -
> name = klp_is_module(obj) ? obj->name : "vmlinux";
> obj->kobj = kobject_create_and_add(name, &patch->kobj);
> if (!obj->kobj)
> @@ -686,6 +721,12 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
> goto free;
> }
>
> + if (klp_is_object_loaded(obj)) {
> + ret = klp_init_object_loaded(patch, obj);
> + if (ret)
> + goto free;
> + }
> +
> return 0;
>
> free:
> @@ -802,27 +843,19 @@ int klp_register_patch(struct klp_patch *patch)
> }
> EXPORT_SYMBOL_GPL(klp_register_patch);
>
> -static void klp_module_notify_coming(struct module *pmod,
> +static void klp_module_notify_coming(struct klp_patch *patch,
> struct klp_object *obj)
> {
> - struct klp_func *func;
> + struct module *pmod = patch->mod;
> struct module *mod = obj->mod;
> int ret;
>
> pr_notice("applying patch '%s' to loading module '%s'\n",
> pmod->name, mod->name);
>
> - if (obj->relocs) {
> - ret = klp_write_object_relocations(pmod, obj);
> - if (ret)
> - goto err;
> - }
> -
> - for (func = obj->funcs; func->old_name; func++) {
> - ret = klp_find_verify_func_addr(obj, func);
> - if (ret)
> - goto err;
> - }
> + ret = klp_init_object_loaded(patch, obj);
> + if (ret)
> + goto err;
>
> ret = klp_enable_object(obj);
> if (!ret)
> @@ -833,10 +866,10 @@ err:
> pmod->name, mod->name, ret);
> }
>
> -static void klp_module_notify_going(struct module *pmod,
> +static void klp_module_notify_going(struct klp_patch *patch,
> struct klp_object *obj)
> {
> - struct klp_func *func;
> + struct module *pmod = patch->mod;
> struct module *mod = obj->mod;
> int ret;
>
> @@ -848,10 +881,7 @@ static void klp_module_notify_going(struct module *pmod,
> pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
> pmod->name, mod->name, ret);
>
> - for (func = obj->funcs; func->old_name; func++)
> - func->old_addr = 0;
> -
> - obj->mod = NULL;
> + klp_free_object_loaded(obj);
> }
>
> static int klp_module_notify(struct notifier_block *nb, unsigned long action,
> @@ -876,9 +906,9 @@ static int klp_module_notify(struct notifier_block *nb, unsigned long action,
>
> if (action == MODULE_STATE_COMING) {
> obj->mod = mod;
> - klp_module_notify_coming(patch->mod, obj);
> + klp_module_notify_coming(patch, obj);
> } else /* MODULE_STATE_GOING */
> - klp_module_notify_going(patch->mod, obj);
> + klp_module_notify_going(patch, obj);
>
> break;
> }
> --
> 1.8.5.2
>

--
Josh

2014-12-10 10:11:17

by Petr Mladek

Subject: Re: [PATCH 1/6] livepatch v5: avoid race when checking for state of the patch

On Tue 2014-12-09 12:32:49, Josh Poimboeuf wrote:
> On Tue, Dec 09, 2014 at 07:05:02PM +0100, Petr Mladek wrote:
> > klp_patch_enable() and klp_patch_disable() should check the current state
> > of the patch under the klp_lock. Otherwise, it might detect that the operation
> > is valid but the situation might change before it takes the lock.
>
> Hi Petr,
>
> Thanks for the patches.
>
> I don't think this patch is necessary. klp_is_enabled() doesn't check
> the state of the patch. It checks the initialization state of the core
> module (klp_root_kobj), which can only be set in klp_init(). It's not
> protected by the lock, so I don't see the point of this patch.

Ah, I have misread the name and expected that it checked whether
the patch was enabled or disabled. The original code is OK then.

Well, Jiri Kosina pointed out that the check did not make much sense.
klp_is_enabled() could not be called if the livepatch module is not
loaded. And the later check for klp_patch_is_registered() is enough
to check whether the klp_enable_patch()/klp_disable_patch() calls
are allowed or not.

So, I suggest removing the checks altogether.

Best Regards,
Petr


> >
> > Signed-off-by: Petr Mladek <[email protected]>
> > ---
> > kernel/livepatch/core.c | 16 ++++++++++------
> > 1 file changed, 10 insertions(+), 6 deletions(-)
> >
> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index d6d0f50e81f8..b848069e44cc 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -425,11 +425,13 @@ int klp_disable_patch(struct klp_patch *patch)
> > {
> > int ret;
> >
> > - if (!klp_is_enabled())
> > - return -ENODEV;
> > -
> > mutex_lock(&klp_mutex);
> >
> > + if (!klp_is_enabled()) {
> > + ret = -ENODEV;
> > + goto err;
> > + }
> > +
> > if (!klp_patch_is_registered(patch)) {
> > ret = -EINVAL;
> > goto err;
> > @@ -489,11 +491,13 @@ int klp_enable_patch(struct klp_patch *patch)
> > {
> > int ret;
> >
> > - if (!klp_is_enabled())
> > - return -ENODEV;
> > -
> > mutex_lock(&klp_mutex);
> >
> > + if (!klp_is_enabled()) {
> > + ret = -ENODEV;
> > + goto err;
> > + }
> > +
> > if (!klp_patch_is_registered(patch)) {
> > ret = -EINVAL;
> > goto err;
> > --
> > 1.8.5.2
> >
>
> --
> Josh

2014-12-10 10:29:41

by Petr Mladek

Subject: Re: [PATCH 5/6] livepatch v5: split init and free code that is done only for loaded modules

On Tue 2014-12-09 12:51:41, Josh Poimboeuf wrote:
> On Tue, Dec 09, 2014 at 07:05:06PM +0100, Petr Mladek wrote:
> > This patch makes it clear what initialization and freeing steps need to be done
> > when an object (module) is being loaded or removed. It will help to maintain
> > the module coming and going handlers. Also it will remove duplicated
> > code from these handlers.
> >
> > Signed-off-by: Petr Mladek <[email protected]>
> > ---
> > kernel/livepatch/core.c | 92 ++++++++++++++++++++++++++++++++-----------------
> > 1 file changed, 61 insertions(+), 31 deletions(-)
> >
> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index 97a8d4a3d6d8..fe312b9ada78 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -590,6 +590,12 @@ static struct kobj_type klp_ktype_func = {
> > .sysfs_ops = &kobj_sysfs_ops,
> > };
> >
> > +/* Clean up when a patched object is unloaded */
> > +static void klp_free_func_loaded(struct klp_func *func)
> > +{
> > + func->old_addr = 0;
> > +}
> > +
> > /*
> > * Free all functions' kobjects in the array up to some limit. When limit is
> > * NULL, all kobjects are freed.
> > @@ -603,6 +609,17 @@ static void klp_free_funcs_limited(struct klp_object *obj,
> > kobject_put(&func->kobj);
> > }
> >
> > +/* Clean up when a patched object is unloaded */
> > +static void klp_free_object_loaded(struct klp_object *obj)
> > +{
> > + struct klp_func *func;
> > +
> > + obj->mod = NULL;
> > +
> > + for (func = obj->funcs; func->old_name; func++)
> > + klp_free_func_loaded(func);
> > +}
> > +
> > /*
> > * Free all objects' kobjects in the array up to some limit. When limit is
> > * NULL, all kobjects are freed.
> > @@ -626,6 +643,12 @@ static void klp_free_patch(struct klp_patch *patch)
> > kobject_put(&patch->kobj);
> > }
> >
> > +/* parts of the initialization that is done only when the object is loaded */
> > +static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
> > +{
> > + return klp_find_verify_func_addr(obj, func);
> > +}
> > +
>
> Creating a new function here for one line of code, which is only called
> once, seems excessive, and makes the code harder to understand IMO.

I see your point. Well, note that the split-out code is called twice,
from klp_init_object() and klp_module_coming(), so it helps to remove
the code duplication.

Also it clearly separates the operations that are always done
and that are done only when the object is loaded.

If you suggest calling klp_find_verify_func_addr() directly
from klp_init_object_loaded(), I am fine with it. We could always
create the helper later if more operations are needed for
struct klp_func.


> Ditto for klp_free_func_loaded.

Note that there will be two lines if we agree to add old_ip into
struct klp_func.

If it handles only old_addr, I agree with moving the one line to
klp_free_object_loaded().

Best Regards,
Petr

> > static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> > {
> > struct ftrace_ops *ops;
> > @@ -633,10 +656,6 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> >
> > func->state = KLP_DISABLED;
> >
> > - ret = klp_find_verify_func_addr(obj, func);
> > - if (ret)
> > - return ret;
> > -
> > ops = kzalloc(sizeof(*ops), GFP_KERNEL);
> > if (!ops)
> > ret = -ENOMEM;
> > @@ -656,6 +675,28 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> > return 0;
> > }
> >
> > +/* parts of the initialization that is done only when the object is loaded */
> > +static int klp_init_object_loaded(struct klp_patch *patch,
> > + struct klp_object *obj)
> > +{
> > + struct klp_func *func;
> > + int ret;
> > +
> > + if (obj->relocs) {
> > + ret = klp_write_object_relocations(patch->mod, obj);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + for (func = obj->funcs; func->old_name; func++) {
> > + ret = klp_init_func_loaded(obj, func);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
> > {
> > struct klp_func *func;
> > @@ -669,12 +710,6 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
> >
> > klp_find_object_module(obj);
> >
> > - if (obj->relocs && klp_is_object_loaded(obj)) {
> > - ret = klp_write_object_relocations(patch->mod, obj);
> > - if (ret)
> > - return ret;
> > - }
> > -
> > name = klp_is_module(obj) ? obj->name : "vmlinux";
> > obj->kobj = kobject_create_and_add(name, &patch->kobj);
> > if (!obj->kobj)
> > @@ -686,6 +721,12 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
> > goto free;
> > }
> >
> > + if (klp_is_object_loaded(obj)) {
> > + ret = klp_init_object_loaded(patch, obj);
> > + if (ret)
> > + goto free;
> > + }
> > +
> > return 0;
> >
> > free:
> > @@ -802,27 +843,19 @@ int klp_register_patch(struct klp_patch *patch)
> > }
> > EXPORT_SYMBOL_GPL(klp_register_patch);
> >
> > -static void klp_module_notify_coming(struct module *pmod,
> > +static void klp_module_notify_coming(struct klp_patch *patch,
> > struct klp_object *obj)
> > {
> > - struct klp_func *func;
> > + struct module *pmod = patch->mod;
> > struct module *mod = obj->mod;
> > int ret;
> >
> > pr_notice("applying patch '%s' to loading module '%s'\n",
> > pmod->name, mod->name);
> >
> > - if (obj->relocs) {
> > - ret = klp_write_object_relocations(pmod, obj);
> > - if (ret)
> > - goto err;
> > - }
> > -
> > - for (func = obj->funcs; func->old_name; func++) {
> > - ret = klp_find_verify_func_addr(obj, func);
> > - if (ret)
> > - goto err;
> > - }
> > + ret = klp_init_object_loaded(patch, obj);
> > + if (ret)
> > + goto err;
> >
> > ret = klp_enable_object(obj);
> > if (!ret)
> > @@ -833,10 +866,10 @@ err:
> > pmod->name, mod->name, ret);
> > }
> >
> > -static void klp_module_notify_going(struct module *pmod,
> > +static void klp_module_notify_going(struct klp_patch *patch,
> > struct klp_object *obj)
> > {
> > - struct klp_func *func;
> > + struct module *pmod = patch->mod;
> > struct module *mod = obj->mod;
> > int ret;
> >
> > @@ -848,10 +881,7 @@ static void klp_module_notify_going(struct module *pmod,
> > pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
> > pmod->name, mod->name, ret);
> >
> > - for (func = obj->funcs; func->old_name; func++)
> > - func->old_addr = 0;
> > -
> > - obj->mod = NULL;
> > + klp_free_object_loaded(obj);
> > }
> >
> > static int klp_module_notify(struct notifier_block *nb, unsigned long action,
> > @@ -876,9 +906,9 @@ static int klp_module_notify(struct notifier_block *nb, unsigned long action,
> >
> > if (action == MODULE_STATE_COMING) {
> > obj->mod = mod;
> > - klp_module_notify_coming(patch->mod, obj);
> > + klp_module_notify_coming(patch, obj);
> > } else /* MODULE_STATE_GOING */
> > - klp_module_notify_going(patch->mod, obj);
> > + klp_module_notify_going(patch, obj);
> >
> > break;
> > }
> > --
> > 1.8.5.2
> >
>
> --
> Josh

2014-12-10 13:50:17

by Petr Mladek

Subject: Re: [PATCH 6/6] livepatch v5: clean up usage of the old and new addresses

On Tue 2014-12-09 13:20:09, Josh Poimboeuf wrote:
> On Tue, Dec 09, 2014 at 07:05:07PM +0100, Petr Mladek wrote:
> > The situation with the variables storing the addresses of the old and new
> > code is not ideal. One is called "func" and the other "addr". One is a
> > pointer and the other an unsigned long. This makes sense given how the
> > values are defined, but it can make the code harder to understand where
> > the values are used.
> >
> > This patch introduces two new variables "old_ip" and "new_ip".
> > They have the same type and the name is symmetric.
>
> I agree that the naming of addr vs func is a little weird, but I find
> that this patch makes the code more confusing. Now we have "func",
> "addr" and "ip", all different words meaning "function address".
>
> Adding to the confusion is the existence of four variables instead of
> two.

Yup, I am not super happy about it either. This is why I made this
change in the last patch, so that it is easier to ignore or rework.

Another solution would be to rename either new_func -> new_addr or
old_addr -> old_func. In this case, they should have the same type.

"unsigned long" is used in most locations, so I would prefer that
type. And it fits better with the *_addr names.

The problem is that it would mean casting the pointer to the new
function in hand-made patches. But we could hide this behind some macro
that would be handy anyway.

Or we could leave it as is for now. I do not have a strong opinion
about it.


> > They are supposed to carry the address of the mcount call-site that
> > the ftrace framework operates with. The value is the same as the
> > function address on x86 but it might differ on other architectures.
>
> I didn't know the mcount call site address can differ from the function
> address for some architectures. Can you give more details about this?

Only fentry is put at the beginning of the functions. The classic
mcount call is put after the function prologue.

AFAIK, fentry is supported only on x86_64. There is another usable
feature on s390x that can be enabled by the -mhotpatch gcc option, but
we do not use it with kGraft. Vojtech/JiriK told me that the important
code is at the beginning of the function on PPC. But I am pretty
sure that there will be architectures where the code won't be at
the beginning.

The current implementation supports only x86_64. s390 and PPC seem
to be solvable without the shifted address. I am not sure if we
want to make it ready for other architectures or keep it simple for
now.

Note that the _ip suffix was chosen because it is the name used by
the ftrace code, and the value is mostly used together with ftrace
functions.


> > The change is inspired by kGraft, which even adds ftrace_function_to_fentry(),
> > see https://lkml.org/lkml/2014/4/30/688. That function finds the right mcount
> > call-site location. This patch uses ftrace_location(), which just verifies
> > that the address is correct. It means that we detect a potential problem
> > already when the patch is being initialized.
> >
> > Signed-off-by: Petr Mladek <[email protected]>
> > ---
> > include/linux/livepatch.h | 24 ++++++++++++-----------
> > kernel/livepatch/core.c | 49 ++++++++++++++++++++++++++++++++++++++---------
> > 2 files changed, 53 insertions(+), 20 deletions(-)
> >
> > diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> > index b61fe7402f1f..32f82d798c54 100644
> > --- a/include/linux/livepatch.h
> > +++ b/include/linux/livepatch.h
> > @@ -36,8 +36,16 @@ enum klp_state {
> > * struct klp_func - function structure for live patching
> > * @old_name: name of the function to be patched
> > * @new_func: pointer to the patched function code
> > - * @old_addr: a hint conveying at what address the old function
> > - * can be found (optional, vmlinux patches only)
> > + * @old_addr: address of the function to be patched; if predefined then
> > + * the value is used as a hint conveying at what address
> > + * the old function can be found; otherwise, the value is
> > + * located by name with kallsyms; the predefined value makes
> > + * sense only for vmlinux function and when randomization is
> > + * disabled
> > + * @old_ip: address of the mcount call-site for the old function;
> > + * it is the address that ftrace works with
> > + * @new_ip: address of the mcount call-site for the new function;
> > + * it is the address that ftrace works with
> > * @kobj: kobject for sysfs resources
> > * @fops: ftrace operations structure
> > * @state: tracks function-level patch application state
> > @@ -46,15 +54,9 @@ struct klp_func {
> > /* external */
> > const char *old_name;
> > void *new_func;
> > - /*
> > - * The old_addr field is optional and can be used to resolve
> > - * duplicate symbol names in the vmlinux object. If this
> > - * information is not present, the symbol is located by name
> > - * with kallsyms. If the name is not unique and old_addr is
> > - * not provided, the patch application fails as there is no
> > - * way to resolve the ambiguity.
> > - */
> > unsigned long old_addr;
> > + unsigned long old_ip;
> > + unsigned long new_ip;
>
> old_ip and new_ip should be in the /* internal */ section.

Good catch!

Best Regards,
Petr

> >
> > /* internal */
> > struct kobject kobj;
> > @@ -88,7 +90,7 @@ struct klp_reloc {
> > * @funcs: function entries for functions to be patched in the object
> > * @kobj: kobject for sysfs resources
> > * @mod: kernel module associated with the patched object
> > - * (NULL for vmlinux)
> > + * (NULL for vmlinux)
> > * @state: tracks object-level patch application state
> > */
> > struct klp_object {
> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index fe312b9ada78..c54230bf17fe 100644
> > --- a/kernel/livepatch/core.c
> > +++ b/kernel/livepatch/core.c
> > @@ -179,8 +179,8 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
> > return -EINVAL;
> > }
> >
> > -static int klp_find_verify_func_addr(struct klp_object *obj,
> > - struct klp_func *func)
> > +static int klp_find_verify_func_old_addr(struct klp_object *obj,
> > + struct klp_func *func)
> > {
> > int ret;
> >
> > @@ -205,6 +205,36 @@ static int klp_find_verify_func_addr(struct klp_object *obj,
> > return ret;
> > }
> >
> > +static int klp_find_verify_func_addresses(struct klp_object *obj,
> > + struct klp_func *func)
> > +{
> > + int ret;
> > +
> > + ret = klp_find_verify_func_old_addr(obj, func);
> > + if (ret)
> > + return ret;
> > +
> > + func->old_ip = ftrace_location(func->old_addr);
> > + if (!func->old_ip) {
> > + pr_err("function '%s' at the address '%lx' can not be patched\n",
> > + func->old_name, func->old_addr);
> > + return -EINVAL;
> > + }
> > +
> > + /*
> > + * In fact, the new function need not be traceable. This check is
> > + * used to make sure that the address is a correct mcount call-site.
> > + */
> > + func->new_ip = ftrace_location((unsigned long)func->new_func);
> > + if (!func->new_ip) {
> > + pr_err("function at the address '%p' can not be used for patching\n",
> > + func->new_func);
> > + return -EINVAL;
> > + }
>
> Why does the new function need to have an mcount call site? And why do
> we want to use that address rather than the real function address?
>
> > +
> > + return 0;
> > +}
> > +
> > /*
> > * external symbols are located outside the parent object (where the parent
> > * object is either vmlinux or the kmod being patched).
> > @@ -277,7 +307,7 @@ static void notrace klp_ftrace_handler(unsigned long ip,
> > {
> > struct klp_func *func = ops->private;
> >
> > - regs->ip = (unsigned long)func->new_func;
> > + regs->ip = func->new_ip;
> > }
> >
> > static int klp_disable_func(struct klp_func *func)
> > @@ -287,7 +317,7 @@ static int klp_disable_func(struct klp_func *func)
> > if (WARN_ON(func->state != KLP_ENABLED))
> > return -EINVAL;
> >
> > - if (WARN_ON(!func->old_addr))
> > + if (WARN_ON(!func->old_ip))
> > return -EINVAL;
> >
> > ret = unregister_ftrace_function(func->fops);
> > @@ -297,7 +327,7 @@ static int klp_disable_func(struct klp_func *func)
> > return ret;
> > }
> >
> > - ret = ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
> > + ret = ftrace_set_filter_ip(func->fops, func->old_ip, 1, 0);
> > if (ret)
> > pr_warn("function unregister succeeded but failed to clear the filter\n");
> >
> > @@ -310,13 +340,13 @@ static int klp_enable_func(struct klp_func *func)
> > {
> > int ret;
> >
> > - if (WARN_ON(!func->old_addr))
> > + if (WARN_ON(!func->old_ip))
> > return -EINVAL;
> >
> > if (WARN_ON(func->state != KLP_DISABLED))
> > return -EINVAL;
> >
> > - ret = ftrace_set_filter_ip(func->fops, func->old_addr, 0, 0);
> > + ret = ftrace_set_filter_ip(func->fops, func->old_ip, 0, 0);
> > if (ret) {
> > pr_err("failed to set ftrace filter for function '%s' (%d)\n",
> > func->old_name, ret);
> > @@ -327,7 +357,7 @@ static int klp_enable_func(struct klp_func *func)
> > if (ret) {
> > pr_err("failed to register ftrace handler for function '%s' (%d)\n",
> > func->old_name, ret);
> > - ftrace_set_filter_ip(func->fops, func->old_addr, 1, 0);
> > + ftrace_set_filter_ip(func->fops, func->old_ip, 1, 0);
> > } else {
> > func->state = KLP_ENABLED;
> > }
> > @@ -594,6 +624,7 @@ static struct kobj_type klp_ktype_func = {
> > static void klp_free_func_loaded(struct klp_func *func)
> > {
> > func->old_addr = 0;
> > + func->old_ip = 0;
> > }
> >
> > /*
> > @@ -646,7 +677,7 @@ static void klp_free_patch(struct klp_patch *patch)
> > /* parts of the initialization that is done only when the object is loaded */
> > static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
> > {
> > - return klp_find_verify_func_addr(obj, func);
> > + return klp_find_verify_func_addresses(obj, func);
> > }
> >
> > static int klp_init_func(struct klp_object *obj, struct klp_func *func)
> > --
> > 1.8.5.2
> >
>
> --
> Josh

2014-12-10 15:30:19

by Josh Poimboeuf

Subject: Re: [PATCH 1/6] livepatch v5: avoid race when checking for state of the patch

On Wed, Dec 10, 2014 at 11:11:47AM +0100, Petr Mladek wrote:
> On Tue 2014-12-09 12:32:49, Josh Poimboeuf wrote:
> > On Tue, Dec 09, 2014 at 07:05:02PM +0100, Petr Mladek wrote:
> > > klp_enable_patch() and klp_disable_patch() should check the current state
> > > of the patch under the klp_lock. Otherwise, it might detect that the operation
> > > is valid but the situation might change before it takes the lock.
> >
> > Hi Petr,
> >
> > Thanks for the patches.
> >
> > I don't think this patch is necessary. klp_is_enabled() doesn't check
> > the state of the patch. It checks the initialization state of the core
> > module (klp_root_kobj), which can only be set in klp_init(). It's not
> > protected by the lock, so I don't see the point of this patch.
>
> Ah, I have misread the name and expected that it checked whether
> the patch was enabled or disabled. The original code is OK then.
>
> Well, Jiri Kosina pointed out that the check did not make much sense.
> klp_is_enabled() could not be called if the livepatch module is not
> loaded. And the later check for klp_patch_is_registered() is enough
> to check whether the klp_enable_patch()/klp_disable_patch() calls
> are allowed or not.

But livepatch isn't a module, it's part of the kernel. Even if the init
function returns an error, that doesn't prevent any of the other
exported functions from getting called.


>
> So, I suggest removing the checks altogether.
>
> Best Regards,
> Petr
>
>
> > >
> > > Signed-off-by: Petr Mladek <[email protected]>
> > > ---
> > > kernel/livepatch/core.c | 16 ++++++++++------
> > > 1 file changed, 10 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > > index d6d0f50e81f8..b848069e44cc 100644
> > > --- a/kernel/livepatch/core.c
> > > +++ b/kernel/livepatch/core.c
> > > @@ -425,11 +425,13 @@ int klp_disable_patch(struct klp_patch *patch)
> > > {
> > > int ret;
> > >
> > > - if (!klp_is_enabled())
> > > - return -ENODEV;
> > > -
> > > mutex_lock(&klp_mutex);
> > >
> > > + if (!klp_is_enabled()) {
> > > + ret = -ENODEV;
> > > + goto err;
> > > + }
> > > +
> > > if (!klp_patch_is_registered(patch)) {
> > > ret = -EINVAL;
> > > goto err;
> > > @@ -489,11 +491,13 @@ int klp_enable_patch(struct klp_patch *patch)
> > > {
> > > int ret;
> > >
> > > - if (!klp_is_enabled())
> > > - return -ENODEV;
> > > -
> > > mutex_lock(&klp_mutex);
> > >
> > > + if (!klp_is_enabled()) {
> > > + ret = -ENODEV;
> > > + goto err;
> > > + }
> > > +
> > > if (!klp_patch_is_registered(patch)) {
> > > ret = -EINVAL;
> > > goto err;
> > > --
> > > 1.8.5.2
> > >
> >
> > --
> > Josh

--
Josh

2014-12-10 15:35:23

by Josh Poimboeuf

Subject: Re: [PATCH 5/6] livepatch v5: split init and free code that is done only for loaded modules

On Wed, Dec 10, 2014 at 11:30:16AM +0100, Petr Mladek wrote:
> On Tue 2014-12-09 12:51:41, Josh Poimboeuf wrote:
> > On Tue, Dec 09, 2014 at 07:05:06PM +0100, Petr Mladek wrote:
> > > This patch makes it clear what initialization and freeing steps need to be done
> > > when an object (module) is being loaded or removed. It will help to maintain
> > > the module coming and going handlers. Also it will remove duplicated
> > > code from these handlers.
> > >
> > > Signed-off-by: Petr Mladek <[email protected]>
> > > ---
> > > kernel/livepatch/core.c | 92 ++++++++++++++++++++++++++++++++-----------------
> > > 1 file changed, 61 insertions(+), 31 deletions(-)
> > >
> > > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > > index 97a8d4a3d6d8..fe312b9ada78 100644
> > > --- a/kernel/livepatch/core.c
> > > +++ b/kernel/livepatch/core.c
> > > @@ -590,6 +590,12 @@ static struct kobj_type klp_ktype_func = {
> > > .sysfs_ops = &kobj_sysfs_ops,
> > > };
> > >
> > > +/* Clean up when a patched object is unloaded */
> > > +static void klp_free_func_loaded(struct klp_func *func)
> > > +{
> > > + func->old_addr = 0;
> > > +}
> > > +
> > > /*
> > > * Free all functions' kobjects in the array up to some limit. When limit is
> > > * NULL, all kobjects are freed.
> > > @@ -603,6 +609,17 @@ static void klp_free_funcs_limited(struct klp_object *obj,
> > > kobject_put(&func->kobj);
> > > }
> > >
> > > +/* Clean up when a patched object is unloaded */
> > > +static void klp_free_object_loaded(struct klp_object *obj)
> > > +{
> > > + struct klp_func *func;
> > > +
> > > + obj->mod = NULL;
> > > +
> > > + for (func = obj->funcs; func->old_name; func++)
> > > + klp_free_func_loaded(func);
> > > +}
> > > +
> > > /*
> > > * Free all objects' kobjects in the array up to some limit. When limit is
> > > * NULL, all kobjects are freed.
> > > @@ -626,6 +643,12 @@ static void klp_free_patch(struct klp_patch *patch)
> > > kobject_put(&patch->kobj);
> > > }
> > >
> > > +/* parts of the initialization that is done only when the object is loaded */
> > > +static int klp_init_func_loaded(struct klp_object *obj, struct klp_func *func)
> > > +{
> > > + return klp_find_verify_func_addr(obj, func);
> > > +}
> > > +
> >
> > Creating a new function here for one line of code, which is only called
> > once, seems excessive, and makes the code harder to understand IMO.
>
> I see your point. Well, note that the split code is called
> twice from klp_init_object() and klp_module_coming(), so it helps
> to remove the code duplication.
>
> Also it clearly separates the operations that are always done
> from those that are done only when the object is loaded.
>
> If you suggest calling klp_find_verify_func_addr() directly
> from klp_init_object_loaded(), I am fine with it. We could always
> create it later if there are more operations needed for
> struct func.

Thanks, I'll merge the rest of this patch (except for the separate
klp_init_func_loaded and klp_free_func_loaded).

I should have v6 soon.

--
Josh

2014-12-10 16:15:23

by Josh Poimboeuf

Subject: Re: [PATCH 6/6] livepatch v5: clean up usage of the old and new addresses

On Wed, Dec 10, 2014 at 02:50:54PM +0100, Petr Mladek wrote:
> On Tue 2014-12-09 13:20:09, Josh Poimboeuf wrote:
> > On Tue, Dec 09, 2014 at 07:05:07PM +0100, Petr Mladek wrote:
> > > The situation with variables storing the address of the old and new code
> > > is not ideal. One is called "func" and the other is called "addr".
> > > The one is pointer and the other is unsigned long. It makes sense
> > > from the point of how the values are defined. But it might make problems
> > > to understand the code when the values are used.
> > >
> > > This patch introduces two new variables "old_ip" and "new_ip".
> > > They have the same type and the name is symmetric.
> >
> > I agree that the naming of addr vs func is a little weird, but I find
> > that this patch makes the code more confusing. Now we have "func",
> > "addr" and "ip", all different words meaning "function address".
> >
> > Adding to the confusion is the existence of four variables instead of
> > two.
>
> Yup, I am not super happy about it either. This is why I made this
> change in the last patch, so that it is easier to ignore or rework.
>
> Another solution would be to rename either new_func -> new_addr or
> old_addr -> old_func. In this case, they should have the same type.
>
> "unsigned long" is used on most locations, so I would prefer this
> type. And it better fits with the *_addr names.
>
> The problem is that it would mean to cast the pointer to the new
> function in hand-made patches. But we could hide this under some macro
> that would be handy anyway.
>
> Or we could leave it as is for now. I do not have a strong opinion
> about it.

Ok, I'd rather leave it as it is for now.

> > > They are supposed to carry the address of the mcount call-site that
> > > ftrace framework operates with. The value is the same as the function
> > > address on x86 but it might be different on another architectures.
> >
> > I didn't know the mcount call site address can differ from the function
> > address for some architectures. Can you give more details about this?
>
> Only fentry is put at the beginning of the functions. The classic
> mcount call is put after the function prologue.
>
> AFAIK, fentry is supported only on x86_64. There is another usable
> feature on s390x that can be enabled by the -mhotpatch gcc option but
> we do not use it with kGraft. Vojtech/JiriK told me that the important
> code is at the beginning of the function on PPC. But I am pretty
> sure that there will be architectures where the code won't be at
> the beginning.
>
> The current implementation supports only x86_64. s390 and PPC seem
> to be solvable without the shifted address. I am not sure if we
> want to make it ready for other architectures or keep it simple for
> now.

Yeah, I'd say let's keep the code simple for now until we need the added
complexity.

[...]
> > > + func->new_ip = ftrace_location((unsigned long)func->new_func);
> > > + if (!func->new_ip) {
> > > + pr_err("function at the address '%p' can not be used for patching\n",
> > > + func->new_func);
> > > + return -EINVAL;
> > > + }
> >
> > Why does the new function need to have an mcount call site? And why do
> > we want to use that address rather than the real function address?

I think you missed this question, but it doesn't matter since we can
skip this patch for now anyway.

Thanks,
Josh

2014-12-11 11:17:39

by Petr Mladek

Subject: Re: [PATCH 1/6] livepatch v5: avoid race when checking for state of the patch

On Wed 2014-12-10 09:25:06, Josh Poimboeuf wrote:
> On Wed, Dec 10, 2014 at 11:11:47AM +0100, Petr Mladek wrote:
> > On Tue 2014-12-09 12:32:49, Josh Poimboeuf wrote:
> > > On Tue, Dec 09, 2014 at 07:05:02PM +0100, Petr Mladek wrote:
> > > > klp_enable_patch() and klp_disable_patch() should check the current state
> > > > of the patch under the klp_lock. Otherwise, it might detect that the operation
> > > > is valid but the situation might change before it takes the lock.
> > >
> > > Hi Petr,
> > >
> > > Thanks for the patches.
> > >
> > > I don't think this patch is necessary. klp_is_enabled() doesn't check
> > > the state of the patch. It checks the initialization state of the core
> > > module (klp_root_kobj), which can only be set in klp_init(). It's not
> > > protected by the lock, so I don't see the point of this patch.
> >
> > Ah, I have misread the name and expected that it checked whether
> > the patch was enabled or disabled. The original code is OK then.
> >
> > Well, Jiri Kosina pointed out that the check did not make much sense.
> > klp_is_enabled() could not be called if the livepatch module is not
> > loaded. And the later check for klp_patch_is_registered() is enough
> > to check whether the klp_enable_patch()/klp_disable_patch() calls
> > are allowed or not.
>
> But livepatch isn't a module, it's part of the kernel.

Ah, I remembered the module_init(klp_init) call and created a wrong mental link ;-)

> Even if the init
> function returns an error, that doesn't prevent any of the other
> exported functions from getting called.

Well, it will still be covered by the later klp_patch_is_registered()
check. But I am fine with leaving it as is.

Best Regards,
Petr

2014-12-11 11:30:29

by Petr Mladek

Subject: Re: [PATCH 6/6] livepatch v5: clean up usage of the old and new addresses

On Wed 2014-12-10 09:33:09, Josh Poimboeuf wrote:
> On Wed, Dec 10, 2014 at 02:50:54PM +0100, Petr Mladek wrote:
> > On Tue 2014-12-09 13:20:09, Josh Poimboeuf wrote:
> > > On Tue, Dec 09, 2014 at 07:05:07PM +0100, Petr Mladek wrote:
> > > > The situation with variables storing the address of the old and new code
> > > > is not ideal. One is called "func" and the other is called "addr".
> > > > One is a pointer and the other is unsigned long. It makes sense
> > > > given how the values are defined, but it might make the code
> > > > harder to understand when the values are used.
> > > >
> > > > This patch introduces two new variables "old_ip" and "new_ip".
> > > > They have the same type and the name is symmetric.
> > >
> > > I agree that the naming of addr vs func is a little weird, but I find
> > > that this patch makes the code more confusing. Now we have "func",
> > > "addr" and "ip", all different words meaning "function address".
> > >
> > > Adding to the confusion is the existence of four variables instead of
> > > two.
> >
> > Yup, I am not super happy about it either. This is why I made this
> > change in the last patch, so that it is easier to ignore or rework.
> >
> > Another solution would be to rename either new_func -> new_addr or
> > old_addr -> old_func. In this case, they should have the same type.
> >
> > "unsigned long" is used on most locations, so I would prefer this
> > type. And it better fits with the *_addr names.
> >
> > The problem is that it would mean to cast the pointer to the new
> > function in hand-made patches. But we could hide this under some macro
> > that would be handy anyway.
> >
> > Or we could leave it as is for now. I do not have a strong opinion
> > about it.
>
> Ok, I'd rather leave it as it is for now.

Fine with me.

> > > > They are supposed to carry the address of the mcount call-site that
> > > > ftrace framework operates with. The value is the same as the function
> > > > address on x86 but it might be different on another architectures.
> > >
> > > I didn't know the mcount call site address can differ from the function
> > > address for some architectures. Can you give more details about this?
> >
> > Only fentry is put at the beginning of the functions. The classic
> > mcount call is put after the function prologue.
> >
> > AFAIK, fentry is supported only on x86_64. There is another usable
> > feature on s390x that can be enabled by the -mhotpatch gcc option but
> > we do not use it with kGraft. Vojtech/JiriK told me that the important
> > code is at the beginning of the function on PPC. But I am pretty
> > sure that there will be architectures where the code won't be at
> > the beginning.
> >
> > The current implementation supports only x86_64. s390 and PPC seem
> > to be solvable without the shifted address. I am not sure if we
> > want to make it ready for other architectures or keep it simple for
> > now.
>
> Yeah, I'd say let's keep the code simple for now until we need the added
> complexity.

OK, I am fine with it.

> [...]
> > > > + func->new_ip = ftrace_location((unsigned long)func->new_func);
> > > > + if (!func->new_ip) {
> > > > + pr_err("function at the address '%p' can not be used for patching\n",
> > > > + func->new_func);
> > > > + return -EINVAL;
> > > > + }
> > >
> > > Why does the new function need to have an mcount call site? And why do
> > > we want to use that address rather than the real function address?
>
> I think you missed this question, but it doesn't matter since we can
> skip this patch for now anyway.

Yeah, good question. To be honest, I did not work on kGraft when this
part was discussed. My intuitive understanding is that if
klp_ftrace_handler() is called after the function prologue,
we would need to skip the prologue of the new function as well.

So, if we start supporting some architecture where the patched code
won't be at the beginning, we would need to somehow handle the
already executed code.

Best Regards,
Petr

2014-12-11 14:24:06

by Josh Poimboeuf

Subject: Re: [PATCH 1/6] livepatch v5: avoid race when checking for state of the patch

On Thu, Dec 11, 2014 at 12:17:33PM +0100, Petr Mladek wrote:
> On Wed 2014-12-10 09:25:06, Josh Poimboeuf wrote:
> > On Wed, Dec 10, 2014 at 11:11:47AM +0100, Petr Mladek wrote:
> > > On Tue 2014-12-09 12:32:49, Josh Poimboeuf wrote:
> > But livepatch isn't a module, it's part of the kernel.
>
> Ah, I remembered the module_init(klp_init) call and created a wrong mental link ;-)
>
> > Even if the init
> > function returns an error, that doesn't prevent any of the other
> > exported functions from getting called.
>
> Well, it will still be covered by the later klp_patch_is_registered()
> check. But I am fine with leaving it as is.

True, it's probably only necessary to call klp_is_enabled() from the
register function. And I'm thinking it should probably have a better
name, like klp_initialized().

But that's a minor cosmetic change, so if there are no more comments for
the v6 patch set, I can send a separate patch for that later.

--
Josh