2014-11-20 22:30:24

by Seth Jennings

Subject: [PATCHv3 0/3] Kernel Live Patching

Changelog:

Thanks for all the feedback!

changes in v3:
- merge API and core data structures (Miroslav)
- replace lp_ and lpc_ prefixes with klp_
- add ARCH_HAVE_LIVE_PATCHING
- guard livepatch.h with IS_ENABLED(CONFIG_LIVE_PATCHING)
- smaller cleanups
- TODO: FTRACE_OPS_FL_DYNAMIC? allocate fops in core?
- TODO: kernel-doc for API structs once agreed upon

changes in v2:
- rebase to next-20141113
- add copyright/license block to livepatch.h
- add _LINUX prefix to header defines
- replace semaphore with mutex
- add LPC_ prefix to state enum
- convert BUGs to WARNs and handle properly
- change Kconfig default to n
- remove [old|new] attrs from function sysfs dir (KASLR leak, no use)
- disregard user provided old_addr if kernel uses KASLR
- s/out/err for error path labels
- s/unregister/disable for uniform terminology
- s/lp/lpc for module notifier elements
- replace module ref'ing with unload notifier + mutex protection
- adjust notifier priority to run before ftrace
- make LIVE_PATCHING boolean (about to depend on arch stuff)
- move x86-specific reloc code to arch/x86
- s/dynrela/reloc/
- add live patching sysfs documentation
- add API function kernel-doc
- TODO: kernel-doc for API structs once agreed upon

Summary:

This patchset implements an ftrace-based mechanism and kernel interface for
doing live patching of kernel and kernel module functions. It represents the
greatest common functionality set between kpatch [1] and kGraft [2] and can
accept patches built using either method. This solution was discussed in the
Live Patching Mini-conference at LPC 2014 [3].

The model consists of a live patching "core" that provides an interface for
other "patch" kernel modules to register patches with the core.

Patch modules contain the new function code and create a klp_patch structure
containing the required data about which functions to patch, where the new code
for each patched function resides, and in which kernel object (vmlinux or
module) the function to be patched resides. The patch module then invokes
klp_register_patch() to register with the core, followed by klp_enable_patch()
to have the core redirect the execution paths using ftrace.

An example patch module can be found here:
https://github.com/spartacus06/livepatch/blob/master/patch/patch.c
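
To make the flow concrete, below is a minimal sketch of such a patch module
against the structures and calls introduced in patch 2/3. The patched function
(cmdline_proc_show) and its replacement are only illustrative stand-ins; the
linked patch.c above is the authoritative example.

#include <linux/module.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* hypothetical replacement for cmdline_proc_show() in vmlinux */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
        seq_printf(m, "%s\n", "this has been live patched");
        return 0;
}

static struct klp_func funcs[] = {
        {
                .old_name = "cmdline_proc_show",
                .new_func = livepatch_cmdline_proc_show,
                /* .old_addr is optional; kallsyms is used when it is 0 */
        },
        { }     /* terminator: old_name == NULL */
};

static struct klp_object objs[] = {
        {
                .name = "vmlinux",      /* object containing the old function */
                .funcs = funcs,
                /* .relocs left NULL, so no relocations are written */
        },
        { }     /* terminator: name == NULL */
};

static struct klp_patch patch = {
        .mod = THIS_MODULE,
        .objs = objs,
};

static int patch_init(void)
{
        int ret;

        ret = klp_register_patch(&patch);
        if (ret)
                return ret;

        ret = klp_enable_patch(&patch);
        if (ret) {
                klp_unregister_patch(&patch);
                return ret;
        }

        return 0;
}

static void patch_exit(void)
{
        /*
         * The core pins the patch module while it is registered, so this
         * is not reached in this version (patch modules cannot unload).
         */
}

module_init(patch_init);
module_exit(patch_exit);
MODULE_LICENSE("GPL");

The funcs and objs arrays are terminated by empty entries, matching the
old_name/name-terminated iteration in klp_enable_object() and
__klp_enable_patch().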

The live patching core creates a sysfs hierarchy for user-level access to live
patching information. The hierarchy is structured like this:

/sys/kernel/livepatch
/sys/kernel/livepatch/<patch>
/sys/kernel/livepatch/<patch>/enabled
/sys/kernel/livepatch/<patch>/<object>
/sys/kernel/livepatch/<patch>/<object>/<func>

The old function is located using one of two methods: its address is either
provided by the patch module (only possible for a function in vmlinux) or found
via kallsyms lookup. Symbol ambiguity results in a failure.

The core takes a reference on the patch module itself to keep it from
unloading. This is because, without a mechanism to ensure that no thread is
currently executing in the patched function, we can not determine whether it is
safe to unload the patch module. For this reason, unloading patch modules is
currently not allowed.

Disabling patches can be done using the "enabled" attribute of the patch:

echo 0 > /sys/kernel/livepatch/<patch>/enabled

If a patch module contains a patch for a module that is not currently loaded,
there is nothing to patch, so the core does nothing for that patch object.
However, the core registers a module notifier that looks for COMING events so
that if the module is ever loaded, it is immediately patched. If a patched
module is removed, the notifier handles the GOING event and disables any
patched functions for that object before the module unloads. The notifier has
a higher priority than the ftrace notifier so that it runs first for GOING
events and we can cleanly unregister from ftrace.

kpatch and kGraft each have their own mechanisms for ensuring system
consistency during the patching process. This first version does not implement
any consistency mechanism that ensures that old and new code do not run
together. In practice, ~90% of CVEs are safe to apply in this way, since they
simply add a conditional check. However, any function change that can not
execute safely with the old version of the function can _not_ be safely applied
for now.

[1] https://github.com/dynup/kpatch
[2] https://git.kernel.org/cgit/linux/kernel/git/jirislaby/kgraft.git/
[3] https://etherpad.fr/p/LPC2014_LivePatching

Seth Jennings (3):
kernel: add TAINT_LIVEPATCH
kernel: add support for live patching
kernel: add sysfs documentation for live patching

Documentation/ABI/testing/sysfs-kernel-livepatch | 44 ++
Documentation/oops-tracing.txt | 2 +
Documentation/sysctl/kernel.txt | 1 +
MAINTAINERS | 13 +
arch/x86/Kconfig | 3 +
arch/x86/include/asm/livepatch.h | 37 +
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/livepatch.c | 74 ++
include/linux/kernel.h | 1 +
include/linux/livepatch.h | 96 +++
kernel/Makefile | 1 +
kernel/livepatch/Kconfig | 18 +
kernel/livepatch/Makefile | 3 +
kernel/livepatch/core.c | 828 +++++++++++++++++++++++
kernel/panic.c | 2 +
15 files changed, 1124 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-kernel-livepatch
create mode 100644 arch/x86/include/asm/livepatch.h
create mode 100644 arch/x86/kernel/livepatch.c
create mode 100644 include/linux/livepatch.h
create mode 100644 kernel/livepatch/Kconfig
create mode 100644 kernel/livepatch/Makefile
create mode 100644 kernel/livepatch/core.c

--
1.9.3


2014-11-20 22:30:25

by Seth Jennings

Subject: [PATCHv3 3/3] kernel: add sysfs documentation for live patching

Adds sysfs interface documentation to Documentation/ABI/testing/

Signed-off-by: Seth Jennings <[email protected]>
---
Documentation/ABI/testing/sysfs-kernel-livepatch | 44 ++++++++++++++++++++++++
MAINTAINERS | 1 +
2 files changed, 45 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-kernel-livepatch

diff --git a/Documentation/ABI/testing/sysfs-kernel-livepatch b/Documentation/ABI/testing/sysfs-kernel-livepatch
new file mode 100644
index 0000000..5bf42a8
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-livepatch
@@ -0,0 +1,44 @@
+What: /sys/kernel/livepatch
+Date: Nov 2014
+KernelVersion: 3.19.0
+Contact: [email protected]
+Description:
+ Interface for kernel live patching
+
+ The /sys/kernel/livepatch directory contains subdirectories for
+ each loaded live patch module.
+
+What: /sys/kernel/livepatch/<patch>
+Date: Nov 2014
+KernelVersion: 3.19.0
+Contact: [email protected]
+Description:
+ The patch directory contains subdirectories for each kernel
+ object (vmlinux or a module) in which it patched functions.
+
+What: /sys/kernel/livepatch/<patch>/enabled
+Date: Nov 2014
+KernelVersion: 3.19.0
+Contact: [email protected]
+Description:
+ A writable attribute that indicates whether the patched
+ code is currently applied. Writing 0 will disable the patch
+ while writing 1 will re-enable the patch.
+
+What: /sys/kernel/livepatch/<patch>/<object>
+Date: Nov 2014
+KernelVersion: 3.19.0
+Contact: [email protected]
+Description:
+ The object directory contains subdirectories for each function
+ that is patched within the object.
+
+What: /sys/kernel/livepatch/<patch>/<object>/<function>
+Date: Nov 2014
+KernelVersion: 3.19.0
+Contact: [email protected]
+Description:
+ The function directory contains attributes regarding the
+ properties and state of the patched function.
+
+ There are currently no such attributes.
diff --git a/MAINTAINERS b/MAINTAINERS
index c7f49ae..7985293 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5725,6 +5725,7 @@ F: kernel/livepatch/
F: include/linux/livepatch.h
F: arch/x86/include/asm/livepatch.h
F: arch/x86/kernel/livepatch.c
+F: Documentation/ABI/testing/sysfs-kernel-livepatch
L: [email protected]

LLC (802.2)
--
1.9.3

2014-11-20 22:30:31

by Seth Jennings

Subject: [PATCHv3 2/3] kernel: add support for live patching

This commit introduces code for the live patching core. It implements
an ftrace-based mechanism and kernel interface for doing live patching
of kernel and kernel module functions.

It represents the greatest common functionality set between kpatch and
kgraft and can accept patches built using either method.

This first version does not implement any consistency mechanism that
ensures that old and new code do not run together. In practice, ~90% of
CVEs are safe to apply in this way, since they simply add a conditional
check. However, any function change that can not execute safely with
the old version of the function can _not_ be safely applied in this
version.

Signed-off-by: Seth Jennings <[email protected]>
---
MAINTAINERS | 12 +
arch/x86/Kconfig | 3 +
arch/x86/include/asm/livepatch.h | 37 ++
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/livepatch.c | 74 ++++
include/linux/livepatch.h | 96 +++++
kernel/Makefile | 1 +
kernel/livepatch/Kconfig | 18 +
kernel/livepatch/Makefile | 3 +
kernel/livepatch/core.c | 828 +++++++++++++++++++++++++++++++++++++++
10 files changed, 1073 insertions(+)
create mode 100644 arch/x86/include/asm/livepatch.h
create mode 100644 arch/x86/kernel/livepatch.c
create mode 100644 include/linux/livepatch.h
create mode 100644 kernel/livepatch/Kconfig
create mode 100644 kernel/livepatch/Makefile
create mode 100644 kernel/livepatch/core.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 4861577..c7f49ae 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5715,6 +5715,18 @@ F: Documentation/misc-devices/lis3lv02d
F: drivers/misc/lis3lv02d/
F: drivers/platform/x86/hp_accel.c

+LIVE PATCHING
+M: Josh Poimboeuf <[email protected]>
+M: Seth Jennings <[email protected]>
+M: Jiri Kosina <[email protected]>
+M: Vojtech Pavlik <[email protected]>
+S: Maintained
+F: kernel/livepatch/
+F: include/linux/livepatch.h
+F: arch/x86/include/asm/livepatch.h
+F: arch/x86/kernel/livepatch.c
+L: [email protected]
+
LLC (802.2)
M: Arnaldo Carvalho de Melo <[email protected]>
S: Maintained
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ec21dfd..78715fd 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -17,6 +17,7 @@ config X86_64
depends on 64BIT
select X86_DEV_DMA_OPS
select ARCH_USE_CMPXCHG_LOCKREF
+ select ARCH_HAVE_LIVE_PATCHING

### Arch settings
config X86
@@ -1991,6 +1992,8 @@ config CMDLINE_OVERRIDE
This is used to work around broken boot loaders. This should
be set to 'N' under normal conditions.

+source "kernel/livepatch/Kconfig"
+
endmenu

config ARCH_ENABLE_MEMORY_HOTPLUG
diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
new file mode 100644
index 0000000..2ed86ec
--- /dev/null
+++ b/arch/x86/include/asm/livepatch.h
@@ -0,0 +1,37 @@
+/*
+ * livepatch.h - x86-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2014 Seth Jennings <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_X86_LIVEPATCH_H
+#define _ASM_X86_LIVEPATCH_H
+
+#include <linux/module.h>
+
+#ifdef CONFIG_LIVE_PATCHING
+extern int klp_write_module_reloc(struct module *mod, unsigned long type,
+ unsigned long loc, unsigned long value);
+
+#else
+static int klp_write_module_reloc(struct module *mod, unsigned long type,
+ unsigned long loc, unsigned long value)
+{
+ return -ENOSYS;
+}
+#endif
+
+#endif /* _ASM_X86_LIVEPATCH_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d4502c..316b34e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_X86_MPPARSE) += mpparse.o
obj-y += apic/
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
+obj-$(CONFIG_LIVE_PATCHING) += livepatch.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o
obj-$(CONFIG_X86_TSC) += trace_clock.o
diff --git a/arch/x86/kernel/livepatch.c b/arch/x86/kernel/livepatch.c
new file mode 100644
index 0000000..777a4a4
--- /dev/null
+++ b/arch/x86/kernel/livepatch.c
@@ -0,0 +1,74 @@
+/*
+ * livepatch.c - x86-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2014 Seth Jennings <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/module.h>
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>
+#include <asm/page_types.h>
+
+int klp_write_module_reloc(struct module *mod, unsigned long type,
+ unsigned long loc, unsigned long value)
+{
+ int ret, readonly, numpages, size = 4;
+ unsigned long val;
+ unsigned long core = (unsigned long)mod->module_core;
+ unsigned long core_ro_size = mod->core_ro_size;
+ unsigned long core_size = mod->core_size;
+
+ switch (type) {
+ case R_X86_64_NONE:
+ return 0;
+ case R_X86_64_64:
+ val = value;
+ size = 8;
+ break;
+ case R_X86_64_32:
+ val = (u32)value;
+ break;
+ case R_X86_64_32S:
+ val = (s32)value;
+ break;
+ case R_X86_64_PC32:
+ val = (u32)(value - loc);
+ break;
+ default:
+ /* unsupported relocation type */
+ return -EINVAL;
+ }
+
+ if (loc >= core && loc < core + core_ro_size)
+ readonly = 1;
+ else if (loc >= core + core_ro_size && loc < core + core_size)
+ readonly = 0;
+ else
+ /* loc not in expected range */
+ return -EINVAL;
+
+ numpages = ((loc & PAGE_MASK) == ((loc + size) & PAGE_MASK)) ? 1 : 2;
+
+ if (readonly)
+ set_memory_rw(loc & PAGE_MASK, numpages);
+
+ ret = probe_kernel_write((void *)loc, &val, size);
+
+ if (readonly)
+ set_memory_ro(loc & PAGE_MASK, numpages);
+
+ return ret;
+}
diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
new file mode 100644
index 0000000..eab84df
--- /dev/null
+++ b/include/linux/livepatch.h
@@ -0,0 +1,96 @@
+/*
+ * livepatch.h - Kernel Live Patching Core
+ *
+ * Copyright (C) 2014 Seth Jennings <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _LINUX_LIVEPATCH_H_
+#define _LINUX_LIVEPATCH_H_
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+
+#if IS_ENABLED(CONFIG_LIVE_PATCHING)
+
+#include <asm/livepatch.h>
+
+/* TODO: add kernel-doc for structures once agreed upon */
+
+enum klp_state {
+ LPC_DISABLED,
+ LPC_ENABLED
+};
+
+struct klp_func {
+ /* external */
+ const char *old_name; /* function to be patched */
+ void *new_func; /* replacement function in patch module */
+ /*
+ * The old_addr field is optional and can be used to resolve
+ * duplicate symbol names in the vmlinux object. If this
+ * information is not present, the symbol is located by name
+ * with kallsyms. If the name is not unique and old_addr is
+ * not provided, the patch application fails as there is no
+ * way to resolve the ambiguity.
+ */
+ unsigned long old_addr;
+
+ /* internal */
+ struct kobject *kobj;
+ struct ftrace_ops fops;
+ enum klp_state state;
+};
+
+struct klp_reloc {
+ unsigned long dest;
+ unsigned long src;
+ unsigned long type;
+ const char *name;
+ int addend;
+ int external;
+};
+
+struct klp_object {
+ /* external */
+ const char *name; /* "vmlinux" or module name */
+ struct klp_reloc *relocs;
+ struct klp_func *funcs;
+
+ /* internal */
+ struct kobject *kobj;
+ struct module *mod; /* module associated with object */
+ enum klp_state state;
+};
+
+struct klp_patch {
+ /* external */
+ struct module *mod; /* module containing the patch */
+ struct klp_object *objs;
+
+ /* internal */
+ struct list_head list;
+ struct kobject kobj;
+ enum klp_state state;
+};
+
+extern int klp_register_patch(struct klp_patch *);
+extern int klp_unregister_patch(struct klp_patch *);
+extern int klp_enable_patch(struct klp_patch *);
+extern int klp_disable_patch(struct klp_patch *);
+
+#endif /* CONFIG_LIVE_PATCHING */
+
+#endif /* _LINUX_LIVEPATCH_H_ */
diff --git a/kernel/Makefile b/kernel/Makefile
index a59481a..616994f 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -26,6 +26,7 @@ obj-y += power/
obj-y += printk/
obj-y += irq/
obj-y += rcu/
+obj-y += livepatch/

obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o
obj-$(CONFIG_FREEZER) += freezer.o
diff --git a/kernel/livepatch/Kconfig b/kernel/livepatch/Kconfig
new file mode 100644
index 0000000..96da00f
--- /dev/null
+++ b/kernel/livepatch/Kconfig
@@ -0,0 +1,18 @@
+config ARCH_HAVE_LIVE_PATCHING
+ boolean
+ help
+ Arch supports kernel live patching
+
+config LIVE_PATCHING
+ boolean "Kernel Live Patching"
+ depends on DYNAMIC_FTRACE_WITH_REGS
+ depends on MODULES
+ depends on SYSFS
+ depends on KALLSYMS_ALL
+ depends on ARCH_HAVE_LIVE_PATCHING
+ help
+ Say Y here if you want to support kernel live patching.
+ This option has no runtime impact until a kernel "patch"
+ module uses the interface provided by this option to register
+ a patch, causing calls to patched functions to be redirected
+ to new function code contained in the patch module.
diff --git a/kernel/livepatch/Makefile b/kernel/livepatch/Makefile
new file mode 100644
index 0000000..7c1f008
--- /dev/null
+++ b/kernel/livepatch/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_LIVE_PATCHING) += livepatch.o
+
+livepatch-objs := core.o
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
new file mode 100644
index 0000000..9140e42
--- /dev/null
+++ b/kernel/livepatch/core.c
@@ -0,0 +1,828 @@
+/*
+ * core.c - Kernel Live Patching Core
+ *
+ * Copyright (C) 2014 Seth Jennings <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/ftrace.h>
+#include <linux/list.h>
+#include <linux/kallsyms.h>
+#include <linux/livepatch.h>
+
+/*************************************
+ * Core structures
+ ************************************/
+
+static DEFINE_MUTEX(klp_mutex);
+static LIST_HEAD(klp_patches);
+
+/*******************************************
+ * Helpers
+ *******************************************/
+
+/* sets obj->mod if object is not vmlinux and module is found */
+static bool klp_find_object_module(struct klp_object *obj)
+{
+ if (!strcmp(obj->name, "vmlinux"))
+ return 1;
+
+ mutex_lock(&module_mutex);
+ /*
+ * We don't need to take a reference on the module here because we have
+ * the klp_mutex, which is also taken by the module notifier. This
+ * prevents any module from unloading until we release the klp_mutex.
+ */
+ obj->mod = find_module(obj->name);
+ mutex_unlock(&module_mutex);
+
+ return !!obj->mod;
+}
+
+/************************************
+ * kallsyms
+ ***********************************/
+
+struct klp_find_arg {
+ const char *objname;
+ const char *name;
+ unsigned long addr;
+ /*
+ * If count == 0, the symbol was not found. If count == 1, a unique
+ * match was found and addr is set. If count > 1, there is
+ * unresolvable ambiguity among "count" number of symbols with the same
+ * name in the same object.
+ */
+ unsigned long count;
+};
+
+static int klp_find_callback(void *data, const char *name,
+ struct module *mod, unsigned long addr)
+{
+ struct klp_find_arg *args = data;
+
+ if ((mod && !args->objname) || (!mod && args->objname))
+ return 0;
+
+ if (strcmp(args->name, name))
+ return 0;
+
+ if (args->objname && strcmp(args->objname, mod->name))
+ return 0;
+
+ /*
+ * args->addr might be overwritten if another match is found
+ * but klp_find_symbol() handles this and only returns the
+ * addr if count == 1.
+ */
+ args->addr = addr;
+ args->count++;
+
+ return 0;
+}
+
+static int klp_find_symbol(const char *objname, const char *name,
+ unsigned long *addr)
+{
+ struct klp_find_arg args = {
+ .objname = objname,
+ .name = name,
+ .addr = 0,
+ .count = 0
+ };
+
+ if (objname && !strcmp(objname, "vmlinux"))
+ args.objname = NULL;
+
+ kallsyms_on_each_symbol(klp_find_callback, &args);
+
+ if (args.count == 0)
+ pr_err("symbol '%s' not found in symbol table\n", name);
+ else if (args.count > 1)
+ pr_err("unresolvable ambiguity (%lu matches) on symbol '%s' in object '%s'\n",
+ args.count, name, objname);
+ else {
+ *addr = args.addr;
+ return 0;
+ }
+
+ *addr = 0;
+ return -EINVAL;
+}
+
+struct klp_verify_args {
+ const char *name;
+ const unsigned long addr;
+};
+
+static int klp_verify_callback(void *data, const char *name,
+ struct module *mod, unsigned long addr)
+{
+ struct klp_verify_args *args = data;
+
+ if (!mod &&
+ !strcmp(args->name, name) &&
+ args->addr == addr)
+ return 1;
+ return 0;
+}
+
+static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
+{
+ struct klp_verify_args args = {
+ .name = name,
+ .addr = addr,
+ };
+
+ if (kallsyms_on_each_symbol(klp_verify_callback, &args))
+ return 0;
+ pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?",
+ name, addr);
+ return -EINVAL;
+}
+
+static int klp_find_verify_func_addr(struct klp_func *func, const char *objname)
+{
+ int ret;
+
+#if defined(CONFIG_RANDOMIZE_BASE)
+ /* KASLR is enabled, disregard old_addr from user */
+ func->old_addr = 0;
+#endif
+
+ if (func->old_addr && strcmp(objname, "vmlinux")) {
+ pr_err("old address specified for module symbol\n");
+ return -EINVAL;
+ }
+
+ if (func->old_addr)
+ ret = klp_verify_vmlinux_symbol(func->old_name,
+ func->old_addr);
+ else
+ ret = klp_find_symbol(objname, func->old_name,
+ &func->old_addr);
+
+ return ret;
+}
+
+/****************************************
+ * dynamic relocations (load-time linker)
+ ****************************************/
+
+/*
+ * external symbols are located outside the parent object (where the parent
+ * object is either vmlinux or the kmod being patched).
+ */
+static int klp_find_external_symbol(struct module *pmod, const char *name,
+ unsigned long *addr)
+{
+ const struct kernel_symbol *sym;
+
+ /* first, check if it's an exported symbol */
+ preempt_disable();
+ sym = find_symbol(name, NULL, NULL, true, true);
+ preempt_enable();
+ if (sym) {
+ *addr = sym->value;
+ return 0;
+ }
+
+ /* otherwise check if it's in another .o within the patch module */
+ return klp_find_symbol(pmod->name, name, addr);
+}
+
+static int klp_write_object_relocations(struct module *pmod,
+ struct klp_object *obj)
+{
+ int ret;
+ struct klp_reloc *reloc;
+
+ for (reloc = obj->relocs; reloc->name; reloc++) {
+ if (!strcmp(obj->name, "vmlinux")) {
+ ret = klp_verify_vmlinux_symbol(reloc->name,
+ reloc->src);
+ if (ret)
+ return ret;
+ } else {
+ /* module, reloc->src needs to be discovered */
+ if (reloc->external)
+ ret = klp_find_external_symbol(pmod,
+ reloc->name,
+ &reloc->src);
+ else
+ ret = klp_find_symbol(obj->mod->name,
+ reloc->name,
+ &reloc->src);
+ if (ret)
+ return ret;
+ }
+ ret = klp_write_module_reloc(pmod, reloc->type, reloc->dest,
+ reloc->src + reloc->addend);
+ if (ret) {
+ pr_err("relocation failed for symbol '%s' at 0x%016lx (%d)\n",
+ reloc->name, reloc->src, ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/***********************************
+ * ftrace registration
+ **********************************/
+
+static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+ struct ftrace_ops *ops, struct pt_regs *regs)
+{
+ struct klp_func *func = ops->private;
+
+ regs->ip = (unsigned long)func->new_func;
+}
+
+static int klp_enable_func(struct klp_func *func)
+{
+ int ret;
+
+ if (WARN_ON(!func->old_addr || func->state != LPC_DISABLED))
+ return -EINVAL;
+
+ ret = ftrace_set_filter_ip(&func->fops, func->old_addr, 0, 0);
+ if (ret) {
+ pr_err("failed to set ftrace filter for function '%s' (%d)\n",
+ func->old_name, ret);
+ return ret;
+ }
+ ret = register_ftrace_function(&func->fops);
+ if (ret) {
+ pr_err("failed to register ftrace handler for function '%s' (%d)\n",
+ func->old_name, ret);
+ ftrace_set_filter_ip(&func->fops, func->old_addr, 1, 0);
+ } else
+ func->state = LPC_ENABLED;
+
+ return ret;
+}
+
+static int klp_disable_func(struct klp_func *func)
+{
+ int ret;
+
+ if (WARN_ON(func->state != LPC_ENABLED))
+ return -EINVAL;
+
+ if (!func->old_addr)
+ /* parent object is not loaded */
+ return 0;
+ ret = unregister_ftrace_function(&func->fops);
+ if (ret) {
+ pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
+ func->old_name, ret);
+ return ret;
+ }
+ ret = ftrace_set_filter_ip(&func->fops, func->old_addr, 1, 0);
+ if (ret)
+ pr_warn("function unregister succeeded but failed to clear the filter\n");
+ func->state = LPC_DISABLED;
+
+ return 0;
+}
+
+static int klp_disable_object(struct klp_object *obj)
+{
+ struct klp_func *func;
+ int ret;
+
+ for (func = obj->funcs; func->old_name; func++) {
+ if (func->state != LPC_ENABLED)
+ continue;
+ ret = klp_disable_func(func);
+ if (ret)
+ return ret;
+ if (strcmp(obj->name, "vmlinux"))
+ func->old_addr = 0;
+ }
+ obj->state = LPC_DISABLED;
+
+ return 0;
+}
+
+static int klp_enable_object(struct module *pmod, struct klp_object *obj)
+{
+ struct klp_func *func;
+ int ret;
+
+ if (WARN_ON(!obj->mod && strcmp(obj->name, "vmlinux")))
+ return -EINVAL;
+
+ if (obj->relocs) {
+ ret = klp_write_object_relocations(pmod, obj);
+ if (ret)
+ goto unregister;
+ }
+
+ for (func = obj->funcs; func->old_name; func++) {
+ ret = klp_find_verify_func_addr(func, obj->name);
+ if (ret)
+ goto unregister;
+
+ ret = klp_enable_func(func);
+ if (ret)
+ goto unregister;
+ }
+ obj->state = LPC_ENABLED;
+
+ return 0;
+unregister:
+ WARN_ON(klp_disable_object(obj));
+ return ret;
+}
+
+/******************************
+ * enable/disable
+ ******************************/
+
+static int __klp_disable_patch(struct klp_patch *patch)
+{
+ struct klp_object *obj;
+ int ret;
+
+ pr_notice("disabling patch '%s'\n", patch->mod->name);
+
+ for (obj = patch->objs; obj->name; obj++) {
+ if (obj->state != LPC_ENABLED)
+ continue;
+ ret = klp_disable_object(obj);
+ if (ret)
+ return ret;
+ }
+ patch->state = LPC_DISABLED;
+
+ return 0;
+}
+
+/**
+ * klp_disable_patch() - disables a registered patch
+ * @patch: The registered, enabled patch to be disabled
+ *
+ * Unregisters the patched functions from ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_disable_patch(struct klp_patch *patch)
+{
+ int ret;
+
+ mutex_lock(&klp_mutex);
+ ret = __klp_disable_patch(patch);
+ mutex_unlock(&klp_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(klp_disable_patch);
+
+static int __klp_enable_patch(struct klp_patch *patch)
+{
+ struct klp_object *obj;
+ int ret;
+
+ if (WARN_ON(patch->state != LPC_DISABLED))
+ return -EINVAL;
+
+ pr_notice_once("tainting kernel with TAINT_LIVEPATCH\n");
+ add_taint(TAINT_LIVEPATCH, LOCKDEP_STILL_OK);
+
+ pr_notice("enabling patch '%s'\n", patch->mod->name);
+
+ for (obj = patch->objs; obj->name; obj++) {
+ if (!klp_find_object_module(obj))
+ continue;
+ ret = klp_enable_object(patch->mod, obj);
+ if (ret)
+ goto unregister;
+ }
+ patch->state = LPC_ENABLED;
+ return 0;
+
+unregister:
+ WARN_ON(klp_disable_patch(patch));
+ return ret;
+}
+
+/**
+ * klp_enable_patch() - enables a registered patch
+ * @patch: The registered, disabled patch to be enabled
+ *
+ * Performs the needed symbol lookups and code relocations,
+ * then registers the patched functions with ftrace.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_enable_patch(struct klp_patch *patch)
+{
+ int ret;
+
+ mutex_lock(&klp_mutex);
+ ret = __klp_enable_patch(patch);
+ mutex_unlock(&klp_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(klp_enable_patch);
+
+/******************************
+ * module notifier
+ *****************************/
+
+static void klp_module_notify_coming(struct module *pmod,
+ struct klp_object *obj)
+{
+ struct module *mod = obj->mod;
+ int ret;
+
+ pr_notice("applying patch '%s' to loading module '%s'\n",
+ pmod->name, mod->name);
+ obj->mod = mod;
+ ret = klp_enable_object(pmod, obj);
+ if (ret)
+ pr_warn("failed to apply patch '%s' to module '%s' (%d)\n",
+ pmod->name, mod->name, ret);
+}
+
+static void klp_module_notify_going(struct module *pmod,
+ struct klp_object *obj)
+{
+ struct module *mod = obj->mod;
+ int ret;
+
+ pr_notice("reverting patch '%s' on unloading module '%s'\n",
+ pmod->name, mod->name);
+ ret = klp_disable_object(obj);
+ if (ret)
+ pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
+ pmod->name, mod->name, ret);
+ obj->mod = NULL;
+}
+
+static int klp_module_notify(struct notifier_block *nb, unsigned long action,
+ void *data)
+{
+ struct module *mod = data;
+ struct klp_patch *patch;
+ struct klp_object *obj;
+
+ if (action != MODULE_STATE_COMING && action != MODULE_STATE_GOING)
+ return 0;
+
+ mutex_lock(&klp_mutex);
+ list_for_each_entry(patch, &klp_patches, list) {
+ if (patch->state == LPC_DISABLED)
+ continue;
+ for (obj = patch->objs; obj->name; obj++) {
+ if (strcmp(obj->name, mod->name))
+ continue;
+ if (action == MODULE_STATE_COMING) {
+ obj->mod = mod;
+ klp_module_notify_coming(patch->mod, obj);
+ } else /* MODULE_STATE_GOING */
+ klp_module_notify_going(patch->mod, obj);
+ break;
+ }
+ }
+ mutex_unlock(&klp_mutex);
+ return 0;
+}
+
+static struct notifier_block klp_module_nb = {
+ .notifier_call = klp_module_notify,
+ .priority = INT_MIN+1, /* called late but before ftrace notifier */
+};
+
+/********************************************
+ * Sysfs Interface
+ *******************************************/
+/*
+ * /sys/kernel/livepatch
+ * /sys/kernel/livepatch/<patch>
+ * /sys/kernel/livepatch/<patch>/enabled
+ * /sys/kernel/livepatch/<patch>/<object>
+ * /sys/kernel/livepatch/<patch>/<object>/<func>
+ */
+
+static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct klp_patch *patch;
+ int ret;
+ unsigned long val;
+
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+ return -EINVAL;
+
+ if (val != LPC_DISABLED && val != LPC_ENABLED)
+ return -EINVAL;
+
+ patch = container_of(kobj, struct klp_patch, kobj);
+
+ mutex_lock(&klp_mutex);
+ if (val == patch->state) {
+ /* already in requested state */
+ ret = -EINVAL;
+ goto err;
+ }
+
+ if (val == LPC_ENABLED) {
+ ret = __klp_enable_patch(patch);
+ if (ret)
+ goto err;
+ } else {
+ ret = __klp_disable_patch(patch);
+ if (ret)
+ goto err;
+ }
+ mutex_unlock(&klp_mutex);
+ return count;
+err:
+ mutex_unlock(&klp_mutex);
+ return ret;
+}
+
+static ssize_t enabled_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct klp_patch *patch;
+
+ patch = container_of(kobj, struct klp_patch, kobj);
+ return snprintf(buf, PAGE_SIZE-1, "%d\n", patch->state);
+}
+
+static struct kobj_attribute enabled_kobj_attr = __ATTR_RW(enabled);
+static struct attribute *klp_patch_attrs[] = {
+ &enabled_kobj_attr.attr,
+ NULL
+};
+
+static struct kobject *klp_root_kobj;
+
+static int klp_create_root_kobj(void)
+{
+ klp_root_kobj =
+ kobject_create_and_add("livepatch", kernel_kobj);
+ if (!klp_root_kobj)
+ return -ENOMEM;
+ return 0;
+}
+
+static void klp_remove_root_kobj(void)
+{
+ kobject_put(klp_root_kobj);
+}
+
+static void klp_kobj_release_patch(struct kobject *kobj)
+{
+ struct klp_patch *patch;
+
+ patch = container_of(kobj, struct klp_patch, kobj);
+ if (!list_empty(&patch->list))
+ list_del(&patch->list);
+}
+
+static struct kobj_type klp_ktype_patch = {
+ .release = klp_kobj_release_patch,
+ .sysfs_ops = &kobj_sysfs_ops,
+ .default_attrs = klp_patch_attrs
+};
+
+/*********************************
+ * structure allocation
+ ********************************/
+
+/*
+ * Free all functions' kobjects in the array up to some limit. When limit is
+ * NULL, all kobjects are freed.
+ */
+static void klp_free_funcs_limited(struct klp_object *obj,
+ struct klp_func *limit)
+{
+ struct klp_func *func;
+
+ for (func = obj->funcs; func->old_name && func != limit; func++)
+ kobject_put(func->kobj);
+}
+
+/*
+ * Free all objects' kobjects in the array up to some limit. When limit is
+ * NULL, all kobjects are freed.
+ */
+static void klp_free_objects_limited(struct klp_patch *patch,
+ struct klp_object *limit)
+{
+ struct klp_object *obj;
+
+ for (obj = patch->objs; obj->name && obj != limit; obj++) {
+ klp_free_funcs_limited(obj, NULL);
+ kobject_put(obj->kobj);
+ }
+}
+
+static void klp_free_patch(struct klp_patch *patch)
+{
+ klp_free_objects_limited(patch, NULL);
+ kobject_put(&patch->kobj);
+}
+
+static int klp_init_funcs(struct klp_object *obj)
+{
+ struct klp_func *func;
+ struct ftrace_ops *ops;
+
+ for (func = obj->funcs; func->old_name; func++) {
+ func->state = LPC_DISABLED;
+ ops = &func->fops;
+ ops->private = func;
+ ops->func = klp_ftrace_handler;
+ ops->flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_DYNAMIC;
+
+ /* sysfs */
+ func->kobj = kobject_create_and_add(func->old_name, obj->kobj);
+ if (!func->kobj)
+ goto free;
+ }
+
+ return 0;
+free:
+ klp_free_funcs_limited(obj, func);
+ return -ENOMEM;
+}
+
+static int klp_init_objects(struct klp_patch *patch)
+{
+ struct klp_object *obj;
+ int ret;
+
+ for (obj = patch->objs; obj->name; obj++) {
+ /* obj->mod set by klp_object_module_get() */
+ obj->state = LPC_DISABLED;
+
+ /* sysfs */
+ obj->kobj = kobject_create_and_add(obj->name, &patch->kobj);
+ if (!obj->kobj)
+ goto free;
+
+ /* init functions */
+ ret = klp_init_funcs(obj);
+ if (ret) {
+ kobject_put(obj->kobj);
+ goto free;
+ }
+ }
+
+ return 0;
+free:
+ klp_free_objects_limited(patch, obj);
+ return -ENOMEM;
+}
+
+static int klp_init_patch(struct klp_patch *patch)
+{
+ int ret;
+
+ mutex_lock(&klp_mutex);
+
+ /* init */
+ patch->state = LPC_DISABLED;
+
+ /* sysfs */
+ ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
+ klp_root_kobj, patch->mod->name);
+ if (ret)
+ return ret;
+
+ /* create objects */
+ ret = klp_init_objects(patch);
+ if (ret) {
+ kobject_put(&patch->kobj);
+ return ret;
+ }
+
+ /* add to global list of patches */
+ list_add(&patch->list, &klp_patches);
+
+ mutex_unlock(&klp_mutex);
+
+ return 0;
+}
+
+/************************************
+ * register/unregister
+ ***********************************/
+
+/**
+ * klp_register_patch() - registers a patch
+ * @patch: Patch to be registered
+ *
+ * Initializes the data structure associated with the patch and
+ * creates the sysfs interface.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_register_patch(struct klp_patch *patch)
+{
+ int ret;
+
+ if (!patch || !patch->mod || !patch->objs)
+ return -EINVAL;
+
+ /*
+ * A reference is taken on the patch module to prevent it from being
+ * unloaded. Right now, we don't allow patch modules to unload since
+ * there is currently no method to determine if a thread is still
+ * running in the patched code contained in the patch module once
+ * the ftrace registration is successful.
+ */
+ if (!try_module_get(patch->mod))
+ return -ENODEV;
+
+ ret = klp_init_patch(patch);
+ if (ret)
+ module_put(patch->mod);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(klp_register_patch);
+
+/**
+ * klp_unregister_patch() - unregisters a patch
+ * @patch: Disabled patch to be unregistered
+ *
+ * Frees the data structures and removes the sysfs interface.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_unregister_patch(struct klp_patch *patch)
+{
+ int ret = 0;
+
+ mutex_lock(&klp_mutex);
+ if (patch->state == LPC_ENABLED) {
+ ret = -EINVAL;
+ goto out;
+ }
+ klp_free_patch(patch);
+out:
+ mutex_unlock(&klp_mutex);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(klp_unregister_patch);
+
+/************************************
+ * entry/exit
+ ************************************/
+
+static int klp_init(void)
+{
+ int ret;
+
+ ret = register_module_notifier(&klp_module_nb);
+ if (ret)
+ return ret;
+
+ ret = klp_create_root_kobj();
+ if (ret)
+ goto unregister;
+
+ return 0;
+unregister:
+ unregister_module_notifier(&klp_module_nb);
+ return ret;
+}
+
+static void klp_exit(void)
+{
+ klp_remove_root_kobj();
+ unregister_module_notifier(&klp_module_nb);
+}
+
+module_init(klp_init);
+module_exit(klp_exit);
+MODULE_DESCRIPTION("Live Kernel Patching Core");
+MODULE_LICENSE("GPL");
--
1.9.3

2014-11-20 22:30:23

by Seth Jennings

Subject: [PATCHv3 1/3] kernel: add TAINT_LIVEPATCH

This adds a new taint flag to indicate when the kernel or a kernel
module has been live patched. This will provide a clean indication in
bug reports that live patching was used.

Additionally, if the crash occurs in a live patched function, the live
patch module will appear beside the patched function in the backtrace.

Signed-off-by: Seth Jennings <[email protected]>
---
Documentation/oops-tracing.txt | 2 ++
Documentation/sysctl/kernel.txt | 1 +
include/linux/kernel.h | 1 +
kernel/panic.c | 2 ++
4 files changed, 6 insertions(+)

diff --git a/Documentation/oops-tracing.txt b/Documentation/oops-tracing.txt
index beefb9f..f3ac05c 100644
--- a/Documentation/oops-tracing.txt
+++ b/Documentation/oops-tracing.txt
@@ -270,6 +270,8 @@ characters, each representing a particular tainted value.

15: 'L' if a soft lockup has previously occurred on the system.

+ 16: 'K' if the kernel has been live patched.
+
The primary reason for the 'Tainted: ' string is to tell kernel
debuggers if this is a clean kernel or if anything unusual has
occurred. Tainting is permanent: even if an offending module is
diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 75511ef..83ab256 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -843,6 +843,7 @@ can be ORed together:
8192 - An unsigned module has been loaded in a kernel supporting module
signature.
16384 - A soft lockup has previously occurred on the system.
+32768 - The kernel has been live patched.

==============================================================

diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 5449d2f..d03e3de 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -471,6 +471,7 @@ extern enum system_states {
#define TAINT_OOT_MODULE 12
#define TAINT_UNSIGNED_MODULE 13
#define TAINT_SOFTLOCKUP 14
+#define TAINT_LIVEPATCH 15

extern const char hex_asc[];
#define hex_asc_lo(x) hex_asc[((x) & 0x0f)]
diff --git a/kernel/panic.c b/kernel/panic.c
index 4d8d6f9..8136ad7 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -226,6 +226,7 @@ static const struct tnt tnts[] = {
{ TAINT_OOT_MODULE, 'O', ' ' },
{ TAINT_UNSIGNED_MODULE, 'E', ' ' },
{ TAINT_SOFTLOCKUP, 'L', ' ' },
+ { TAINT_LIVEPATCH, 'K', ' ' },
};

/**
@@ -246,6 +247,7 @@ static const struct tnt tnts[] = {
* 'O' - Out-of-tree module has been loaded.
* 'E' - Unsigned module has been loaded.
* 'L' - A soft lockup has previously occurred.
+ * 'K' - Kernel has been live patched.
*
* The string is overwritten by the next call to print_tainted().
*/
--
1.9.3

2014-11-21 00:22:33

by Jiri Kosina

Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Thu, 20 Nov 2014, Seth Jennings wrote:

> This commit introduces code for the live patching core. It implements
> an ftrace-based mechanism and kernel interface for doing live patching
> of kernel and kernel module functions.
>
> It represents the greatest common functionality set between kpatch and
> kgraft and can accept patches built using either method.
>
> This first version does not implement any consistency mechanism that
> ensures that old and new code do not run together. In practice, ~90% of
> CVEs are safe to apply in this way, since they simply add a conditional
> check. However, any function change that can not execute safely with
> the old version of the function can _not_ be safely applied in this
> version.
>
> Signed-off-by: Seth Jennings <[email protected]>

I think this is getting really close, which is awesome. A few rather minor
nits below.

[ ... snip ... ]
> diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
> new file mode 100644
> index 0000000..2ed86ec
> --- /dev/null
> +++ b/arch/x86/include/asm/livepatch.h
> @@ -0,0 +1,37 @@
> +/*
> + * livepatch.h - x86-specific Kernel Live Patching Core
> + *
> + * Copyright (C) 2014 Seth Jennings <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef _ASM_X86_LIVEPATCH_H
> +#define _ASM_X86_LIVEPATCH_H
> +
> +#include <linux/module.h>
> +
> +#ifdef CONFIG_LIVE_PATCHING
> +extern int klp_write_module_reloc(struct module *mod, unsigned long type,
> + unsigned long loc, unsigned long value);
> +
> +#else
> +static int klp_write_module_reloc(struct module *mod, unsigned long type,

static inline?

[ ... snip ... ]
> --- /dev/null
> +++ b/kernel/livepatch/Kconfig
> @@ -0,0 +1,18 @@
> +config ARCH_HAVE_LIVE_PATCHING
> + boolean
> + help
> + Arch supports kernel live patching
> +
> +config LIVE_PATCHING
> + boolean "Kernel Live Patching"
> + depends on DYNAMIC_FTRACE_WITH_REGS
> + depends on MODULES
> + depends on SYSFS
> + depends on KALLSYMS_ALL
> + depends on ARCH_HAVE_LIVE_PATCHING

We have to refuse to build on x86_64 if the compiler doesn't support
fentry. mcount is not really usable (well, it would be possible to use it,
but the obstacles are too big to care).

Something like [1] should be applicable here as well I believe.

[1] https://git.kernel.org/cgit/linux/kernel/git/jirislaby/kgraft.git/commit/?h=kgraft&id=bd4bc097c72937d18036f1312a4d79ed0bea9991

[ ... snip ... ]
> --- /dev/null
> +++ b/kernel/livepatch/core.c
> @@ -0,0 +1,828 @@
> +/*
> + * core.c - Kernel Live Patching Core
> + *
> + * Copyright (C) 2014 Seth Jennings <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/module.h>
> +#include <linux/kernel.h>
> +#include <linux/mutex.h>
> +#include <linux/slab.h>
> +#include <linux/ftrace.h>
> +#include <linux/list.h>
> +#include <linux/kallsyms.h>
> +#include <linux/livepatch.h>
> +
> +/*************************************
> + * Core structures
> + ************************************/

I don't personally find such markers (especially with all the '*'s) too
tasteful, and I don't recall ever seeing this being a common pattern used in
the kernel code ... ?

> +static DEFINE_MUTEX(klp_mutex);
> +static LIST_HEAD(klp_patches);
> +
> +/*******************************************
> + * Helpers
> + *******************************************/
> +
> +/* sets obj->mod if object is not vmlinux and module is found */
> +static bool klp_find_object_module(struct klp_object *obj)
> +{
> + if (!strcmp(obj->name, "vmlinux"))
> + return 1;

Rather a matter of taste again -- I personally would prefer "obj->name ==
NULL" to be the condition identifying core kernel code text. You can't
really forbid any lunatic out there calling his kernel module "vmlinux",
right? :)

[ ... snip ... ]

> +/***********************************
> + * ftrace registration
> + **********************************/
> +
> +static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> + struct ftrace_ops *ops, struct pt_regs *regs)
> +{
> + struct klp_func *func = ops->private;
> +
> + regs->ip = (unsigned long)func->new_func;
> +}
> +
> +static int klp_enable_func(struct klp_func *func)
> +{
> + int ret;
> +
> + if (WARN_ON(!func->old_addr || func->state != LPC_DISABLED))
> + return -EINVAL;

If the WARN_ON triggers, there is no good way to find out which of the two
conditions triggered it.

[ ... snip ... ]
> +static int klp_init_patch(struct klp_patch *patch)
> +{
> + int ret;
> +
> + mutex_lock(&klp_mutex);
> +
> + /* init */
> + patch->state = LPC_DISABLED;
> +
> + /* sysfs */
> + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> + klp_root_kobj, patch->mod->name);
> + if (ret)
> + return ret;

klp_mutex is leaked locked here.

> +
> + /* create objects */
> + ret = klp_init_objects(patch);
> + if (ret) {
> + kobject_put(&patch->kobj);
> + return ret;

And here as well.

All in all, this is looking very good to me. I think we are really close
to having code that all the parties would agree with. Thanks everybody,

--
Jiri Kosina
SUSE Labs

Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

(2014/11/21 7:29), Seth Jennings wrote:
> This commit introduces code for the live patching core. It implements
> an ftrace-based mechanism and kernel interface for doing live patching
> of kernel and kernel module functions.
>
> It represents the greatest common functionality set between kpatch and
> kgraft and can accept patches built using either method.
>
> This first version does not implement any consistency mechanism that
> ensures that old and new code do not run together. In practice, ~90% of
> CVEs are safe to apply in this way, since they simply add a conditional
> check. However, any function change that can not execute safely with
> the old version of the function can _not_ be safely applied in this
> version.

Thanks for updating :)

BTW, this still has some LPC_XXX macros; those should be KLP_XXX.

Also, as I sent a series of IPMODIFY patches (just now), could you consider
using the flag? :)

Thank you,

[ ... snip ... ]
> +#include <linux/livepatch.h>
> +
> +/*************************************
> + * Core structures
> + ************************************/
> +
> +static DEFINE_MUTEX(klp_mutex);
> +static LIST_HEAD(klp_patches);
> +
> +/*******************************************
> + * Helpers
> + *******************************************/
> +
> +/* sets obj->mod if object is not vmlinux and module is found */
> +static bool klp_find_object_module(struct klp_object *obj)
> +{
> + if (!strcmp(obj->name, "vmlinux"))
> + return 1;
> +
> + mutex_lock(&module_mutex);
> + /*
> + * We don't need to take a reference on the module here because we have
> + * the klp_mutex, which is also taken by the module notifier. This
> + * prevents any module from unloading until we release the klp_mutex.
> + */
> + obj->mod = find_module(obj->name);
> + mutex_unlock(&module_mutex);
> +
> + return !!obj->mod;
> +}
> +
> +/************************************
> + * kallsyms
> + ***********************************/
> +
> +struct klp_find_arg {
> + const char *objname;
> + const char *name;
> + unsigned long addr;
> + /*
> + * If count == 0, the symbol was not found. If count == 1, a unique
> + * match was found and addr is set. If count > 1, there is
> + * unresolvable ambiguity among "count" number of symbols with the same
> + * name in the same object.
> + */
> + unsigned long count;
> +};
> +
> +static int klp_find_callback(void *data, const char *name,
> + struct module *mod, unsigned long addr)
> +{
> + struct klp_find_arg *args = data;
> +
> + if ((mod && !args->objname) || (!mod && args->objname))
> + return 0;
> +
> + if (strcmp(args->name, name))
> + return 0;
> +
> + if (args->objname && strcmp(args->objname, mod->name))
> + return 0;
> +
> + /*
> + * args->addr might be overwritten if another match is found
> + * but klp_find_symbol() handles this and only returns the
> + * addr if count == 1.
> + */
> + args->addr = addr;
> + args->count++;
> +
> + return 0;
> +}
> +
> +static int klp_find_symbol(const char *objname, const char *name,
> + unsigned long *addr)
> +{
> + struct klp_find_arg args = {
> + .objname = objname,
> + .name = name,
> + .addr = 0,
> + .count = 0
> + };
> +
> + if (objname && !strcmp(objname, "vmlinux"))
> + args.objname = NULL;
> +
> + kallsyms_on_each_symbol(klp_find_callback, &args);
> +
> + if (args.count == 0)
> + pr_err("symbol '%s' not found in symbol table\n", name);
> + else if (args.count > 1)
> + pr_err("unresolvable ambiguity (%lu matches) on symbol '%s' in object '%s'\n",
> + args.count, name, objname);
> + else {
> + *addr = args.addr;
> + return 0;
> + }
> +
> + *addr = 0;
> + return -EINVAL;
> +}
> +
> +struct klp_verify_args {
> + const char *name;
> + const unsigned long addr;
> +};
> +
> +static int klp_verify_callback(void *data, const char *name,
> + struct module *mod, unsigned long addr)
> +{
> + struct klp_verify_args *args = data;
> +
> + if (!mod &&
> + !strcmp(args->name, name) &&
> + args->addr == addr)
> + return 1;
> + return 0;
> +}
> +
> +static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
> +{
> + struct klp_verify_args args = {
> + .name = name,
> + .addr = addr,
> + };
> +
> + if (kallsyms_on_each_symbol(klp_verify_callback, &args))
> + return 0;
> + pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?",
> + name, addr);
> + return -EINVAL;
> +}
> +
> +static int klp_find_verify_func_addr(struct klp_func *func, const char *objname)
> +{
> + int ret;
> +
> +#if defined(CONFIG_RANDOMIZE_BASE)
> + /* KASLR is enabled, disregard old_addr from user */
> + func->old_addr = 0;
> +#endif
> +
> + if (func->old_addr && strcmp(objname, "vmlinux")) {
> + pr_err("old address specified for module symbol\n");
> + return -EINVAL;
> + }
> +
> + if (func->old_addr)
> + ret = klp_verify_vmlinux_symbol(func->old_name,
> + func->old_addr);
> + else
> + ret = klp_find_symbol(objname, func->old_name,
> + &func->old_addr);
> +
> + return ret;
> +}
> +
> +/****************************************
> + * dynamic relocations (load-time linker)
> + ****************************************/
> +
> +/*
> + * external symbols are located outside the parent object (where the parent
> + * object is either vmlinux or the kmod being patched).
> + */
> +static int klp_find_external_symbol(struct module *pmod, const char *name,
> + unsigned long *addr)
> +{
> + const struct kernel_symbol *sym;
> +
> + /* first, check if it's an exported symbol */
> + preempt_disable();
> + sym = find_symbol(name, NULL, NULL, true, true);
> + preempt_enable();
> + if (sym) {
> + *addr = sym->value;
> + return 0;
> + }
> +
> + /* otherwise check if it's in another .o within the patch module */
> + return klp_find_symbol(pmod->name, name, addr);
> +}
> +
> +static int klp_write_object_relocations(struct module *pmod,
> + struct klp_object *obj)
> +{
> + int ret;
> + struct klp_reloc *reloc;
> +
> + for (reloc = obj->relocs; reloc->name; reloc++) {
> + if (!strcmp(obj->name, "vmlinux")) {
> + ret = klp_verify_vmlinux_symbol(reloc->name,
> + reloc->src);
> + if (ret)
> + return ret;
> + } else {
> + /* module, reloc->src needs to be discovered */
> + if (reloc->external)
> + ret = klp_find_external_symbol(pmod,
> + reloc->name,
> + &reloc->src);
> + else
> + ret = klp_find_symbol(obj->mod->name,
> + reloc->name,
> + &reloc->src);
> + if (ret)
> + return ret;
> + }
> + ret = klp_write_module_reloc(pmod, reloc->type, reloc->dest,
> + reloc->src + reloc->addend);
> + if (ret) {
> + pr_err("relocation failed for symbol '%s' at 0x%016lx (%d)\n",
> + reloc->name, reloc->src, ret);
> + return ret;
> + }
> + }
> +
> + return 0;
> +}
> +
> +/***********************************
> + * ftrace registration
> + **********************************/
> +
> +static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> + struct ftrace_ops *ops, struct pt_regs *regs)
> +{
> + struct klp_func *func = ops->private;
> +
> + regs->ip = (unsigned long)func->new_func;
> +}
> +
> +static int klp_enable_func(struct klp_func *func)
> +{
> + int ret;
> +
> + if (WARN_ON(!func->old_addr || func->state != LPC_DISABLED))
> + return -EINVAL;
> +
> + ret = ftrace_set_filter_ip(&func->fops, func->old_addr, 0, 0);
> + if (ret) {
> + pr_err("failed to set ftrace filter for function '%s' (%d)\n",
> + func->old_name, ret);
> + return ret;
> + }
> + ret = register_ftrace_function(&func->fops);
> + if (ret) {
> + pr_err("failed to register ftrace handler for function '%s' (%d)\n",
> + func->old_name, ret);
> + ftrace_set_filter_ip(&func->fops, func->old_addr, 1, 0);
> + } else
> + func->state = LPC_ENABLED;
> +
> + return ret;
> +}
> +
> +static int klp_disable_func(struct klp_func *func)
> +{
> + int ret;
> +
> + if (WARN_ON(func->state != LPC_ENABLED))
> + return -EINVAL;
> +
> + if (!func->old_addr)
> + /* parent object is not loaded */
> + return 0;
> + ret = unregister_ftrace_function(&func->fops);
> + if (ret) {
> + pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
> + func->old_name, ret);
> + return ret;
> + }
> + ret = ftrace_set_filter_ip(&func->fops, func->old_addr, 1, 0);
> + if (ret)
> + pr_warn("function unregister succeeded but failed to clear the filter\n");
> + func->state = LPC_DISABLED;
> +
> + return 0;
> +}
> +
> +static int klp_disable_object(struct klp_object *obj)
> +{
> + struct klp_func *func;
> + int ret;
> +
> + for (func = obj->funcs; func->old_name; func++) {
> + if (func->state != LPC_ENABLED)
> + continue;
> + ret = klp_disable_func(func);
> + if (ret)
> + return ret;
> + if (strcmp(obj->name, "vmlinux"))
> + func->old_addr = 0;
> + }
> + obj->state = LPC_DISABLED;
> +
> + return 0;
> +}
> +
> +static int klp_enable_object(struct module *pmod, struct klp_object *obj)
> +{
> + struct klp_func *func;
> + int ret;
> +
> + if (WARN_ON(!obj->mod && strcmp(obj->name, "vmlinux")))
> + return -EINVAL;
> +
> + if (obj->relocs) {
> + ret = klp_write_object_relocations(pmod, obj);
> + if (ret)
> + goto unregister;
> + }
> +
> + for (func = obj->funcs; func->old_name; func++) {
> + ret = klp_find_verify_func_addr(func, obj->name);
> + if (ret)
> + goto unregister;
> +
> + ret = klp_enable_func(func);
> + if (ret)
> + goto unregister;
> + }
> + obj->state = LPC_ENABLED;
> +
> + return 0;
> +unregister:
> + WARN_ON(klp_disable_object(obj));
> + return ret;
> +}
> +
> +/******************************
> + * enable/disable
> + ******************************/
> +
> +static int __klp_disable_patch(struct klp_patch *patch)
> +{
> + struct klp_object *obj;
> + int ret;
> +
> + pr_notice("disabling patch '%s'\n", patch->mod->name);
> +
> + for (obj = patch->objs; obj->name; obj++) {
> + if (obj->state != LPC_ENABLED)
> + continue;
> + ret = klp_disable_object(obj);
> + if (ret)
> + return ret;
> + }
> + patch->state = LPC_DISABLED;
> +
> + return 0;
> +}
> +
> +/**
> + * klp_disable_patch() - disables a registered patch
> + * @patch: The registered, enabled patch to be disabled
> + *
> + * Unregisters the patched functions from ftrace.
> + *
> + * Return: 0 on success, otherwise error
> + */
> +int klp_disable_patch(struct klp_patch *patch)
> +{
> + int ret;
> +
> + mutex_lock(&klp_mutex);
> + ret = __klp_disable_patch(patch);
> + mutex_unlock(&klp_mutex);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(klp_disable_patch);
> +
> +static int __klp_enable_patch(struct klp_patch *patch)
> +{
> + struct klp_object *obj;
> + int ret;
> +
> + if (WARN_ON(patch->state != LPC_DISABLED))
> + return -EINVAL;
> +
> + pr_notice_once("tainting kernel with TAINT_LIVEPATCH\n");
> + add_taint(TAINT_LIVEPATCH, LOCKDEP_STILL_OK);
> +
> + pr_notice("enabling patch '%s'\n", patch->mod->name);
> +
> + for (obj = patch->objs; obj->name; obj++) {
> + if (!klp_find_object_module(obj))
> + continue;
> + ret = klp_enable_object(patch->mod, obj);
> + if (ret)
> + goto unregister;
> + }
> + patch->state = LPC_ENABLED;
> + return 0;
> +
> +unregister:
> + WARN_ON(klp_disable_patch(patch));
> + return ret;
> +}
> +
> +/**
> + * klp_enable_patch() - enables a registered patch
> + * @patch: The registered, disabled patch to be enabled
> + *
> + * Performs the needed symbol lookups and code relocations,
> + * then registers the patched functions with ftrace.
> + *
> + * Return: 0 on success, otherwise error
> + */
> +int klp_enable_patch(struct klp_patch *patch)
> +{
> + int ret;
> +
> + mutex_lock(&klp_mutex);
> + ret = __klp_enable_patch(patch);
> + mutex_unlock(&klp_mutex);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(klp_enable_patch);
> +
> +/******************************
> + * module notifier
> + *****************************/
> +
> +static void klp_module_notify_coming(struct module *pmod,
> + struct klp_object *obj)
> +{
> + struct module *mod = obj->mod;
> + int ret;
> +
> + pr_notice("applying patch '%s' to loading module '%s'\n",
> + pmod->name, mod->name);
> + obj->mod = mod;
> + ret = klp_enable_object(pmod, obj);
> + if (ret)
> + pr_warn("failed to apply patch '%s' to module '%s' (%d)\n",
> + pmod->name, mod->name, ret);
> +}
> +
> +static void klp_module_notify_going(struct module *pmod,
> + struct klp_object *obj)
> +{
> + struct module *mod = obj->mod;
> + int ret;
> +
> + pr_notice("reverting patch '%s' on unloading module '%s'\n",
> + pmod->name, mod->name);
> + ret = klp_disable_object(obj);
> + if (ret)
> + pr_warn("failed to revert patch '%s' on module '%s' (%d)\n",
> + pmod->name, mod->name, ret);
> + obj->mod = NULL;
> +}
> +
> +static int klp_module_notify(struct notifier_block *nb, unsigned long action,
> + void *data)
> +{
> + struct module *mod = data;
> + struct klp_patch *patch;
> + struct klp_object *obj;
> +
> + if (action != MODULE_STATE_COMING && action != MODULE_STATE_GOING)
> + return 0;
> +
> + mutex_lock(&klp_mutex);
> + list_for_each_entry(patch, &klp_patches, list) {
> + if (patch->state == LPC_DISABLED)
> + continue;
> + for (obj = patch->objs; obj->name; obj++) {
> + if (strcmp(obj->name, mod->name))
> + continue;
> + if (action == MODULE_STATE_COMING) {
> + obj->mod = mod;
> + klp_module_notify_coming(patch->mod, obj);
> + } else /* MODULE_STATE_GOING */
> + klp_module_notify_going(patch->mod, obj);
> + break;
> + }
> + }
> + mutex_unlock(&klp_mutex);
> + return 0;
> +}
> +
> +static struct notifier_block klp_module_nb = {
> + .notifier_call = klp_module_notify,
> + .priority = INT_MIN+1, /* called late but before ftrace notifier */
> +};
> +
> +/********************************************
> + * Sysfs Interface
> + *******************************************/
> +/*
> + * /sys/kernel/livepatch
> + * /sys/kernel/livepatch/<patch>
> + * /sys/kernel/livepatch/<patch>/enabled
> + * /sys/kernel/livepatch/<patch>/<object>
> + * /sys/kernel/livepatch/<patch>/<object>/<func>
> + */
> +
> +static ssize_t enabled_store(struct kobject *kobj, struct kobj_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct klp_patch *patch;
> + int ret;
> + unsigned long val;
> +
> + ret = kstrtoul(buf, 10, &val);
> + if (ret)
> + return -EINVAL;
> +
> + if (val != LPC_DISABLED && val != LPC_ENABLED)
> + return -EINVAL;
> +
> + patch = container_of(kobj, struct klp_patch, kobj);
> +
> + mutex_lock(&klp_mutex);
> + if (val == patch->state) {
> + /* already in requested state */
> + ret = -EINVAL;
> + goto err;
> + }
> +
> + if (val == LPC_ENABLED) {
> + ret = __klp_enable_patch(patch);
> + if (ret)
> + goto err;
> + } else {
> + ret = __klp_disable_patch(patch);
> + if (ret)
> + goto err;
> + }
> + mutex_unlock(&klp_mutex);
> + return count;
> +err:
> + mutex_unlock(&klp_mutex);
> + return ret;
> +}
> +
> +static ssize_t enabled_show(struct kobject *kobj,
> + struct kobj_attribute *attr, char *buf)
> +{
> + struct klp_patch *patch;
> +
> + patch = container_of(kobj, struct klp_patch, kobj);
> + return snprintf(buf, PAGE_SIZE-1, "%d\n", patch->state);
> +}
> +
> +static struct kobj_attribute enabled_kobj_attr = __ATTR_RW(enabled);
> +static struct attribute *klp_patch_attrs[] = {
> + &enabled_kobj_attr.attr,
> + NULL
> +};
> +
> +static struct kobject *klp_root_kobj;
> +
> +static int klp_create_root_kobj(void)
> +{
> + klp_root_kobj =
> + kobject_create_and_add("livepatch", kernel_kobj);
> + if (!klp_root_kobj)
> + return -ENOMEM;
> + return 0;
> +}
> +
> +static void klp_remove_root_kobj(void)
> +{
> + kobject_put(klp_root_kobj);
> +}
> +
> +static void klp_kobj_release_patch(struct kobject *kobj)
> +{
> + struct klp_patch *patch;
> +
> + patch = container_of(kobj, struct klp_patch, kobj);
> + if (!list_empty(&patch->list))
> + list_del(&patch->list);
> +}
> +
> +static struct kobj_type klp_ktype_patch = {
> + .release = klp_kobj_release_patch,
> + .sysfs_ops = &kobj_sysfs_ops,
> + .default_attrs = klp_patch_attrs
> +};
> +
> +/*********************************
> + * structure allocation
> + ********************************/
> +
> +/*
> + * Free all functions' kobjects in the array up to some limit. When limit is
> + * NULL, all kobjects are freed.
> + */
> +static void klp_free_funcs_limited(struct klp_object *obj,
> + struct klp_func *limit)
> +{
> + struct klp_func *func;
> +
> + for (func = obj->funcs; func->old_name && func != limit; func++)
> + kobject_put(func->kobj);
> +}
> +
> +/*
> + * Free all objects' kobjects in the array up to some limit. When limit is
> + * NULL, all kobjects are freed.
> + */
> +static void klp_free_objects_limited(struct klp_patch *patch,
> + struct klp_object *limit)
> +{
> + struct klp_object *obj;
> +
> + for (obj = patch->objs; obj->name && obj != limit; obj++) {
> + klp_free_funcs_limited(obj, NULL);
> + kobject_put(obj->kobj);
> + }
> +}
> +
> +static void klp_free_patch(struct klp_patch *patch)
> +{
> + klp_free_objects_limited(patch, NULL);
> + kobject_put(&patch->kobj);
> +}
> +
> +static int klp_init_funcs(struct klp_object *obj)
> +{
> + struct klp_func *func;
> + struct ftrace_ops *ops;
> +
> + for (func = obj->funcs; func->old_name; func++) {
> + func->state = LPC_DISABLED;
> + ops = &func->fops;
> + ops->private = func;
> + ops->func = klp_ftrace_handler;
> + ops->flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_DYNAMIC;
> +
> + /* sysfs */
> + func->kobj = kobject_create_and_add(func->old_name, obj->kobj);
> + if (!func->kobj)
> + goto free;
> + }
> +
> + return 0;
> +free:
> + klp_free_funcs_limited(obj, func);
> + return -ENOMEM;
> +}
> +
> +static int klp_init_objects(struct klp_patch *patch)
> +{
> + struct klp_object *obj;
> + int ret;
> +
> + for (obj = patch->objs; obj->name; obj++) {
> + /* obj->mod is set later by klp_find_object_module() */
> + obj->state = LPC_DISABLED;
> +
> + /* sysfs */
> + obj->kobj = kobject_create_and_add(obj->name, &patch->kobj);
> + if (!obj->kobj)
> + goto free;
> +
> + /* init functions */
> + ret = klp_init_funcs(obj);
> + if (ret) {
> + kobject_put(obj->kobj);
> + goto free;
> + }
> + }
> +
> + return 0;
> +free:
> + klp_free_objects_limited(patch, obj);
> + return -ENOMEM;
> +}
> +
> +static int klp_init_patch(struct klp_patch *patch)
> +{
> + int ret;
> +
> + mutex_lock(&klp_mutex);
> +
> + /* init */
> + patch->state = LPC_DISABLED;
> +
> + /* sysfs */
> + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> + klp_root_kobj, patch->mod->name);
> + if (ret)
> + return ret;
> +
> + /* create objects */
> + ret = klp_init_objects(patch);
> + if (ret) {
> + kobject_put(&patch->kobj);
> + return ret;
> + }
> +
> + /* add to global list of patches */
> + list_add(&patch->list, &klp_patches);
> +
> + mutex_unlock(&klp_mutex);
> +
> + return 0;
> +}
> +
> +/************************************
> + * register/unregister
> + ***********************************/
> +
> +/**
> + * klp_register_patch() - registers a patch
> + * @patch: Patch to be registered
> + *
> + * Initializes the data structure associated with the patch and
> + * creates the sysfs interface.
> + *
> + * Return: 0 on success, otherwise error
> + */
> +int klp_register_patch(struct klp_patch *patch)
> +{
> + int ret;
> +
> + if (!patch || !patch->mod || !patch->objs)
> + return -EINVAL;
> +
> + /*
> + * A reference is taken on the patch module to prevent it from being
> + * unloaded. Right now, we don't allow patch modules to unload since
> + * there is currently no method to determine if a thread is still
> + * running in the patched code contained in the patch module once
> + * the ftrace registration is successful.
> + */
> + if (!try_module_get(patch->mod))
> + return -ENODEV;
> +
> + ret = klp_init_patch(patch);
> + if (ret)
> + module_put(patch->mod);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(klp_register_patch);
> +
> +/**
> + * klp_unregister_patch() - unregisters a patch
> + * @patch: Disabled patch to be unregistered
> + *
> + * Frees the data structures and removes the sysfs interface.
> + *
> + * Return: 0 on success, otherwise error
> + */
> +int klp_unregister_patch(struct klp_patch *patch)
> +{
> + int ret = 0;
> +
> + mutex_lock(&klp_mutex);
> + if (patch->state == LPC_ENABLED) {
> + ret = -EINVAL;
> + goto out;
> + }
> + klp_free_patch(patch);
> +out:
> + mutex_unlock(&klp_mutex);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(klp_unregister_patch);
> +
> +/************************************
> + * entry/exit
> + ************************************/
> +
> +static int klp_init(void)
> +{
> + int ret;
> +
> + ret = register_module_notifier(&klp_module_nb);
> + if (ret)
> + return ret;
> +
> + ret = klp_create_root_kobj();
> + if (ret)
> + goto unregister;
> +
> + return 0;
> +unregister:
> + unregister_module_notifier(&klp_module_nb);
> + return ret;
> +}
> +
> +static void klp_exit(void)
> +{
> + klp_remove_root_kobj();
> + unregister_module_notifier(&klp_module_nb);
> +}
> +
> +module_init(klp_init);
> +module_exit(klp_exit);
> +MODULE_DESCRIPTION("Live Kernel Patching Core");
> +MODULE_LICENSE("GPL");
>


--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

Subject: Re: [PATCHv3 0/3] Kernel Live Patching

Hi Seth,

(2014/11/21 7:29), Seth Jennings wrote:
> Summary:
>
> This patchset implements an ftrace-based mechanism and kernel interface for
> doing live patching of kernel and kernel module functions. It represents the
> greatest common functionality set between kpatch [1] and kGraft [2] and can
> accept patches built using either method. This solution was discussed in the
> Live Patching Mini-conference at LPC 2014 [3].
>
> The model consists of a live patching "core" that provides an interface for
> other "patch" kernel modules to register patches with the core.
>
> Patch modules contain the new function code and create an klp_patch structure
> containing the required data about what functions to patch, where the new code
> for each patched function resides, and in which kernel object (vmlinux or
> module) the function to be patch resides. The patch module then invokes the
> klp_register_patch() function to register with the core, then klp_enable_patch()
> to have the core redirect the execution paths using ftrace.
>
> An example patch module can be found here:
> https://github.com/spartacus06/livepatch/blob/master/patch/patch.c

Hmm, I think we should import this example under samples/livepatch/ or
tools/testing/selftests/livepatch/ for self-testing, so that others
can easily see what happens. :)
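Something along these lines, just as a rough sketch of what a sample
module could contain (the patched function name and its body are made up
for illustration; the real example linked above may differ):

  #include <linux/module.h>
  #include <linux/kernel.h>
  #include <linux/seq_file.h>
  #include <linux/livepatch.h>

  /* replacement function; "cmdline_proc_show" below is only a placeholder */
  static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
  {
          seq_printf(m, "%s\n", "this has been live patched");
          return 0;
  }

  static struct klp_func funcs[] = {
          {
                  .old_name = "cmdline_proc_show",
                  .new_func = livepatch_cmdline_proc_show,
          }, { }
  };

  static struct klp_object objs[] = {
          {
                  .name = "vmlinux",
                  .funcs = funcs,
          }, { }
  };

  static struct klp_patch patch = {
          .objs = objs,
  };

  static int patch_init(void)
  {
          int ret;

          patch.mod = THIS_MODULE;

          ret = klp_register_patch(&patch);
          if (ret)
                  return ret;

          ret = klp_enable_patch(&patch);
          if (ret) {
                  WARN_ON(klp_unregister_patch(&patch));
                  return ret;
          }

          return 0;
  }

  static void patch_exit(void)
  {
          WARN_ON(klp_unregister_patch(&patch));
  }

  module_init(patch_init);
  module_exit(patch_exit);
  MODULE_LICENSE("GPL");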

Thank you,


--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

Subject: Re: [PATCHv3 3/3] kernel: add sysfs documentation for live patching

(2014/11/21 7:29), Seth Jennings wrote:
> Adds sysfs interface documentation to Documentation/ABI/testing/
>

Hmm, is there any reason to decouple this documentation from the code patch?
I think we'd better merge this into 2/3, or move the sysfs interface code from 2/3 into this one.

Thank you,

> Signed-off-by: Seth Jennings <[email protected]>
> ---
> Documentation/ABI/testing/sysfs-kernel-livepatch | 44 ++++++++++++++++++++++++
> MAINTAINERS | 1 +
> 2 files changed, 45 insertions(+)
> create mode 100644 Documentation/ABI/testing/sysfs-kernel-livepatch
>
> diff --git a/Documentation/ABI/testing/sysfs-kernel-livepatch b/Documentation/ABI/testing/sysfs-kernel-livepatch
> new file mode 100644
> index 0000000..5bf42a8
> --- /dev/null
> +++ b/Documentation/ABI/testing/sysfs-kernel-livepatch
> @@ -0,0 +1,44 @@
> +What: /sys/kernel/livepatch
> +Date: Nov 2014
> +KernelVersion: 3.19.0
> +Contact: [email protected]
> +Description:
> + Interface for kernel live patching
> +
> + The /sys/kernel/livepatch directory contains subdirectories for
> + each loaded live patch module.
> +
> +What: /sys/kernel/livepatch/<patch>
> +Date: Nov 2014
> +KernelVersion: 3.19.0
> +Contact: [email protected]
> +Description:
> + The patch directory contains subdirectories for each kernel
> + object (vmlinux or a module) in which it patched functions.
> +
> +What: /sys/kernel/livepatch/<patch>/enabled
> +Date: Nov 2014
> +KernelVersion: 3.19.0
> +Contact: [email protected]
> +Description:
> + A writable attribute that indicates whether the patched
> + code is currently applied. Writing 0 will disable the patch
> + while writing 1 will re-enable the patch.
> +
> +What: /sys/kernel/livepatch/<patch>/<object>
> +Date: Nov 2014
> +KernelVersion: 3.19.0
> +Contact: [email protected]
> +Description:
> + The object directory contains subdirectories for each function
> + that is patched within the object.
> +
> +What: /sys/kernel/livepatch/<patch>/<object>/<function>
> +Date: Nov 2014
> +KernelVersion: 3.19.0
> +Contact: [email protected]
> +Description:
> + The function directory contains attributes regarding the
> + properties and state of the patched function.
> +
> + There are currently no such attributes.
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c7f49ae..7985293 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -5725,6 +5725,7 @@ F: kernel/livepatch/
> F: include/linux/livepatch.h
> F: arch/x86/include/asm/livepatch.h
> F: arch/x86/kernel/livepatch.c
> +F: Documentation/ABI/testing/sysfs-kernel-livepatch
> L: [email protected]
>
> LLC (802.2)
>


--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

2014-11-21 14:44:30

by Miroslav Benes

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, 21 Nov 2014, Jiri Kosina wrote:

[...]

> [ ... snip ... ]
> > +static int klp_init_patch(struct klp_patch *patch)
> > +{
> > + int ret;
> > +
> > + mutex_lock(&klp_mutex);
> > +
> > + /* init */
> > + patch->state = LPC_DISABLED;
> > +
> > + /* sysfs */
> > + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> > + klp_root_kobj, patch->mod->name);
> > + if (ret)
> > + return ret;
>
> klp_mutex is leaked locked here.
>
> > +
> > + /* create objects */
> > + ret = klp_init_objects(patch);
> > + if (ret) {
> > + kobject_put(&patch->kobj);
> > + return ret;
>
> And here as well.
>
> All in all, this is looking very good to me. I think we are really close
> to having a code that all the parties would agree with. Thanks everybody,

The leaking is my fault. I missed that somehow during rebasing.

Seth, could you please fix it in v4?

Thanks
--
Miroslav Benes
SUSE Labs

2014-11-21 15:01:41

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 03:44:35PM +0100, Miroslav Benes wrote:
> On Fri, 21 Nov 2014, Jiri Kosina wrote:
>
> [...]
>
> > [ ... snip ... ]
> > > +static int klp_init_patch(struct klp_patch *patch)
> > > +{
> > > + int ret;
> > > +
> > > + mutex_lock(&klp_mutex);
> > > +
> > > + /* init */
> > > + patch->state = LPC_DISABLED;
> > > +
> > > + /* sysfs */
> > > + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> > > + klp_root_kobj, patch->mod->name);
> > > + if (ret)
> > > + return ret;
> >
> > klp_mutex is leaked locked here.
> >
> > > +
> > > + /* create objects */
> > > + ret = klp_init_objects(patch);
> > > + if (ret) {
> > > + kobject_put(&patch->kobj);
> > > + return ret;
> >
> > And here as well.
> >
> > All in all, this is looking very good to me. I think we are really close
> > to having a code that all the parties would agree with. Thanks everybody,
>
> The leaking is my fault. I missed that somehow during rebasing.
>
> Seth, could you please fix it in v4?

Is it necessary to grab the mutex at the beginning of klp_init_patch? I
think we only need it when adding it to the global list at the end of
the function.

--
Josh

2014-11-21 15:21:43

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 01:22:33AM +0100, Jiri Kosina wrote:
> On Thu, 20 Nov 2014, Seth Jennings wrote:
> > --- /dev/null
> > +++ b/kernel/livepatch/Kconfig
> > @@ -0,0 +1,18 @@
> > +config ARCH_HAVE_LIVE_PATCHING
> > + boolean
> > + help
> > + Arch supports kernel live patching
> > +
> > +config LIVE_PATCHING
> > + boolean "Kernel Live Patching"
> > + depends on DYNAMIC_FTRACE_WITH_REGS
> > + depends on MODULES
> > + depends on SYSFS
> > + depends on KALLSYMS_ALL
> > + depends on ARCH_HAVE_LIVE_PATCHING
>
> We have to refuse to build on x86_64 if the compiler doesn't support
> fentry. mcount is not really usable (well, it would be possible to use it,
> but the obstacles are too big to care).
>
> Something like [1] should be applicable here as well I believe.
>
> [1] https://git.kernel.org/cgit/linux/kernel/git/jirislaby/kgraft.git/commit/?h=kgraft&id=bd4bc097c72937d18036f1312a4d79ed0bea9991

I think we can use "depends on HAVE_FENTRY" to accomplish this, since
CC_USING_FENTRY gets set by the top level kernel Makefile if
CONFIG_HAVE_FENTRY is set.

--
Josh

2014-11-21 15:27:53

by Jiri Kosina

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, 21 Nov 2014, Josh Poimboeuf wrote:

> I think we can use "depends on HAVE_FENTRY" to accomplish this, since
> CC_USING_FENTRY gets set by the top level kernel Makefile if
> CONFIG_HAVE_FENTRY is set.

The problem is that HAVE_FENTRY is set automatically on x86_64, no matter
whether your actual compiler supports it or not.

CC_USING_FENTRY is defined if and only if the actual currently used
compiler supports it, but that can't be expressed in Kconfig.
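One way to enforce it at build time (just a sketch, the kgraft commit I
pointed to earlier may do it differently) would be a preprocessor check
in the x86 live patching code, relying on the fact that CC_USING_FENTRY
is only defined when the compiler in use really supports -mfentry:

  /* e.g. in arch/x86/kernel/livepatch.c */
  #ifndef CC_USING_FENTRY
  #error live patching on x86_64 requires a compiler with -mfentry support
  #endif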

--
Jiri Kosina
SUSE Labs

2014-11-21 15:36:22

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 04:27:57PM +0100, Jiri Kosina wrote:
> On Fri, 21 Nov 2014, Josh Poimboeuf wrote:
>
> > I think we can use "depends on HAVE_FENTRY" to accomplish this, since
> > CC_USING_FENTRY gets set by the top level kernel Makefile if
> > CONFIG_HAVE_FENTRY is set.
>
> The problem is that HAVE_FENTRY is set automatically on x86_64, no matter
> whether your actual compiler supports it or not.
>
> CC_USING_FENTRY is defined if and only if the actual currently used
> compiler supports it, but that can't be expressed in Kconfig.

Ah, light bulb moment. Now I understand what "$(call cc-option, ...)"
does.

Thanks :-)

--
Josh

2014-11-21 15:46:28

by Miroslav Benes

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, 21 Nov 2014, Josh Poimboeuf wrote:

> On Fri, Nov 21, 2014 at 03:44:35PM +0100, Miroslav Benes wrote:
> > On Fri, 21 Nov 2014, Jiri Kosina wrote:
> >
> > [...]
> >
> > > [ ... snip ... ]
> > > > +static int klp_init_patch(struct klp_patch *patch)
> > > > +{
> > > > + int ret;
> > > > +
> > > > + mutex_lock(&klp_mutex);
> > > > +
> > > > + /* init */
> > > > + patch->state = LPC_DISABLED;
> > > > +
> > > > + /* sysfs */
> > > > + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> > > > + klp_root_kobj, patch->mod->name);
> > > > + if (ret)
> > > > + return ret;
> > >
> > > klp_mutex is leaked locked here.
> > >
> > > > +
> > > > + /* create objects */
> > > > + ret = klp_init_objects(patch);
> > > > + if (ret) {
> > > > + kobject_put(&patch->kobj);
> > > > + return ret;
> > >
> > > And here as well.
> > >
> > > All in all, this is looking very good to me. I think we are really close
> > > to having a code that all the parties would agree with. Thanks everybody,
> >
> > The leaking is my fault. I missed that somehow during rebasing.
> >
> > Seth, could you please fix it in v4?
>
> Is it necessary to grab the mutex at the beginning of klp_init_patch? I
> think we only need it when adding it to the global list at the end of
> the function.

I think it's not necessary now, after thinking about it. If
klp_register_patch() were called twice for the same patch, the init
values could be written twice to the patch structure, but that should
not corrupt anything, and adding to the global list is protected.
However, I think we should define what klp_mutex protects and add a
comment near the mutex definition saying whether it covers only the
klp_patches list or something more (in the future).

Mira

2014-11-21 16:13:47

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 04:46:32PM +0100, Miroslav Benes wrote:
> On Fri, 21 Nov 2014, Josh Poimboeuf wrote:
>
> > On Fri, Nov 21, 2014 at 03:44:35PM +0100, Miroslav Benes wrote:
> > > On Fri, 21 Nov 2014, Jiri Kosina wrote:
> > >
> > > [...]
> > >
> > > > [ ... snip ... ]
> > > > > +static int klp_init_patch(struct klp_patch *patch)
> > > > > +{
> > > > > + int ret;
> > > > > +
> > > > > + mutex_lock(&klp_mutex);
> > > > > +
> > > > > + /* init */
> > > > > + patch->state = LPC_DISABLED;
> > > > > +
> > > > > + /* sysfs */
> > > > > + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> > > > > + klp_root_kobj, patch->mod->name);
> > > > > + if (ret)
> > > > > + return ret;
> > > >
> > > > klp_mutex is leaked locked here.
> > > >
> > > > > +
> > > > > + /* create objects */
> > > > > + ret = klp_init_objects(patch);
> > > > > + if (ret) {
> > > > > + kobject_put(&patch->kobj);
> > > > > + return ret;
> > > >
> > > > And here as well.
> > > >
> > > > All in all, this is looking very good to me. I think we are really close
> > > > to having a code that all the parties would agree with. Thanks everybody,
> > >
> > > The leaking is my fault. I missed that somehow during rebasing.
> > >
> > > Seth, could you please fix it in v4?
> >
> > Is it necessary to grab the mutex at the beginning of klp_init_patch? I
> > think we only need it when adding it to the global list at the end of
> > the function.
>
> I think it's not necessary now, after thinking about it. If
> klp_register_patch() were called twice for the same patch, the init
> values could be written twice to the patch structure, but that should
> not corrupt anything, and adding to the global list is protected.
> However, I think we should define what klp_mutex protects and add a
> comment near the mutex definition saying whether it covers only the
> klp_patches list or something more (in the future).

I'll fix it up for v4, moving the mutex just around the list_add() and
adding a comment about what is protected by the mutex.
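Roughly something like this (a sketch of that direction, not the actual
v4 code):

  /*
   * klp_mutex currently protects only the klp_patches list; if it ever
   * needs to cover more, this comment should be updated.
   */
  static DEFINE_MUTEX(klp_mutex);
  static LIST_HEAD(klp_patches);

  static int klp_init_patch(struct klp_patch *patch)
  {
          int ret;

          patch->state = LPC_DISABLED;

          ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
                                     klp_root_kobj, patch->mod->name);
          if (ret)
                  return ret;

          ret = klp_init_objects(patch);
          if (ret) {
                  kobject_put(&patch->kobj);
                  return ret;
          }

          /* only the list manipulation needs the lock */
          mutex_lock(&klp_mutex);
          list_add(&patch->list, &klp_patches);
          mutex_unlock(&klp_mutex);

          return 0;
  }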

Seth

>
> Mira

2014-11-21 16:41:06

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 01:22:33AM +0100, Jiri Kosina wrote:
> On Thu, 20 Nov 2014, Seth Jennings wrote:
>
> > This commit introduces code for the live patching core. It implements
> > an ftrace-based mechanism and kernel interface for doing live patching
> > of kernel and kernel module functions.
> >
> > It represents the greatest common functionality set between kpatch and
> > kgraft and can accept patches built using either method.
> >
> > This first version does not implement any consistency mechanism that
> > ensures that old and new code do not run together. In practice, ~90% of
> > CVEs are safe to apply in this way, since they simply add a conditional
> > check. However, any function change that can not execute safely with
> > the old version of the function can _not_ be safely applied in this
> > version.
> >
> > Signed-off-by: Seth Jennings <[email protected]>
>
> I think this is getting really close, which is awesome. A few rather minor
> nits below.
>
> [ ... snip ... ]
> > diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
> > new file mode 100644
> > index 0000000..2ed86ec
> > --- /dev/null
> > +++ b/arch/x86/include/asm/livepatch.h
> > @@ -0,0 +1,37 @@
> > +/*
> > + * livepatch.h - x86-specific Kernel Live Patching Core
> > + *
> > + * Copyright (C) 2014 Seth Jennings <[email protected]>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License
> > + * as published by the Free Software Foundation; either version 2
> > + * of the License, or (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; if not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#ifndef _ASM_X86_LIVEPATCH_H
> > +#define _ASM_X86_LIVEPATCH_H
> > +
> > +#include <linux/module.h>
> > +
> > +#ifdef CONFIG_LIVE_PATCHING
> > +extern int klp_write_module_reloc(struct module *mod, unsigned long type,
> > + unsigned long loc, unsigned long value);
> > +
> > +#else
> > +static int klp_write_module_reloc(struct module *mod, unsigned long type,
>
> static inline?

I think the practice is to let the compiler handle inline determination
unless you are sure that the compiler isn't inlining something you think
it should.

All other changes are accepted and queued for v4.

Thanks,
Seth

>
> [ ... snip ... ]
> > --- /dev/null
> > +++ b/kernel/livepatch/Kconfig
> > @@ -0,0 +1,18 @@
> > +config ARCH_HAVE_LIVE_PATCHING
> > + boolean
> > + help
> > + Arch supports kernel live patching
> > +
> > +config LIVE_PATCHING
> > + boolean "Kernel Live Patching"
> > + depends on DYNAMIC_FTRACE_WITH_REGS
> > + depends on MODULES
> > + depends on SYSFS
> > + depends on KALLSYMS_ALL
> > + depends on ARCH_HAVE_LIVE_PATCHING
>
> We have to refuse to build on x86_64 if the compiler doesn't support
> fentry. mcount is not really usable (well, it would be possible to use it,
> but the obstacles are too big to care).
>
> Something like [1] should be applicable here as well I believe.
>
> [1] https://git.kernel.org/cgit/linux/kernel/git/jirislaby/kgraft.git/commit/?h=kgraft&id=bd4bc097c72937d18036f1312a4d79ed0bea9991
>
> [ ... snip ... ]
> > --- /dev/null
> > +++ b/kernel/livepatch/core.c
> > @@ -0,0 +1,828 @@
> > +/*
> > + * core.c - Kernel Live Patching Core
> > + *
> > + * Copyright (C) 2014 Seth Jennings <[email protected]>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License
> > + * as published by the Free Software Foundation; either version 2
> > + * of the License, or (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; if not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > +
> > +#include <linux/module.h>
> > +#include <linux/kernel.h>
> > +#include <linux/mutex.h>
> > +#include <linux/slab.h>
> > +#include <linux/ftrace.h>
> > +#include <linux/list.h>
> > +#include <linux/kallsyms.h>
> > +#include <linux/livepatch.h>
> > +
> > +/*************************************
> > + * Core structures
> > + ************************************/
>
> I don't personally find such markers (especially with all the '*'s) too
> tasteful, and I don't recall ever seeing this being common pattern used in
> the kernel code ... ?
>
> > +static DEFINE_MUTEX(klp_mutex);
> > +static LIST_HEAD(klp_patches);
> > +
> > +/*******************************************
> > + * Helpers
> > + *******************************************/
> > +
> > +/* sets obj->mod if object is not vmlinux and module is found */
> > +static bool klp_find_object_module(struct klp_object *obj)
> > +{
> > + if (!strcmp(obj->name, "vmlinux"))
> > + return 1;
>
> Rather a matter of taste again -- I personally would prefer "obj->name ==
> NULL" to be the condition identifying core kernel code text. You can't
>> really forbid any lunatic out there calling his kernel module "vmlinux",
> right? :)
>
> [ ... snip ... ]
>
> > +/***********************************
> > + * ftrace registration
> > + **********************************/
> > +
> > +static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> > + struct ftrace_ops *ops, struct pt_regs *regs)
> > +{
> > + struct klp_func *func = ops->private;
> > +
> > + regs->ip = (unsigned long)func->new_func;
> > +}
> > +
> > +static int klp_enable_func(struct klp_func *func)
> > +{
> > + int ret;
> > +
> > + if (WARN_ON(!func->old_addr || func->state != LPC_DISABLED))
> > + return -EINVAL;
>
> If the WARN_ON triggers, there is no good way to find out which of the two
> conditions triggered it.
>
> [ ... snip ... ]
> > +static int klp_init_patch(struct klp_patch *patch)
> > +{
> > + int ret;
> > +
> > + mutex_lock(&klp_mutex);
> > +
> > + /* init */
> > + patch->state = LPC_DISABLED;
> > +
> > + /* sysfs */
> > + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
> > + klp_root_kobj, patch->mod->name);
> > + if (ret)
> > + return ret;
>
> klp_mutex is leaked locked here.
>
> > +
> > + /* create objects */
> > + ret = klp_init_objects(patch);
> > + if (ret) {
> > + kobject_put(&patch->kobj);
> > + return ret;
>
> And here as well.
>
> All in all, this is looking very good to me. I think we are really close
> to having a code that all the parties would agree with. Thanks everybody,
>
> --
> Jiri Kosina
> SUSE Labs

2014-11-21 16:42:33

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv3 3/3] kernel: add sysfs documentation for live patching

On Fri, Nov 21, 2014 at 11:49:46AM +0900, Masami Hiramatsu wrote:
> (2014/11/21 7:29), Seth Jennings wrote:
> > Adds sysfs interface documentation to Documentation/ABI/testing/
> >
>
> Hmm, is there any reason to decouple this documentation from the code patch?
> I think we'd better merge this into 2/3, or move the sysfs interface code from 2/3 into this one.

Good point. I'll collapse this into 2/3 for v4.

Thanks,
Seth

>
> Thank you,
>
> > Signed-off-by: Seth Jennings <[email protected]>
> > ---
> > Documentation/ABI/testing/sysfs-kernel-livepatch | 44 ++++++++++++++++++++++++
> > MAINTAINERS | 1 +
> > 2 files changed, 45 insertions(+)
> > create mode 100644 Documentation/ABI/testing/sysfs-kernel-livepatch
> >
> > diff --git a/Documentation/ABI/testing/sysfs-kernel-livepatch b/Documentation/ABI/testing/sysfs-kernel-livepatch
> > new file mode 100644
> > index 0000000..5bf42a8
> > --- /dev/null
> > +++ b/Documentation/ABI/testing/sysfs-kernel-livepatch
> > @@ -0,0 +1,44 @@
> > +What: /sys/kernel/livepatch
> > +Date: Nov 2014
> > +KernelVersion: 3.19.0
> > +Contact: [email protected]
> > +Description:
> > + Interface for kernel live patching
> > +
> > + The /sys/kernel/livepatch directory contains subdirectories for
> > + each loaded live patch module.
> > +
> > +What: /sys/kernel/livepatch/<patch>
> > +Date: Nov 2014
> > +KernelVersion: 3.19.0
> > +Contact: [email protected]
> > +Description:
> > + The patch directory contains subdirectories for each kernel
> > + object (vmlinux or a module) in which it patched functions.
> > +
> > +What: /sys/kernel/livepatch/<patch>/enabled
> > +Date: Nov 2014
> > +KernelVersion: 3.19.0
> > +Contact: [email protected]
> > +Description:
> > + A writable attribute that indicates whether the patched
> > + code is currently applied. Writing 0 will disable the patch
> > + while writing 1 will re-enable the patch.
> > +
> > +What: /sys/kernel/livepatch/<patch>/<object>
> > +Date: Nov 2014
> > +KernelVersion: 3.19.0
> > +Contact: [email protected]
> > +Description:
> > + The object directory contains subdirectories for each function
> > + that is patched within the object.
> > +
> > +What: /sys/kernel/livepatch/<patch>/<object>/<function>
> > +Date: Nov 2014
> > +KernelVersion: 3.19.0
> > +Contact: [email protected]
> > +Description:
> > + The function directory contains attributes regarding the
> > + properties and state of the patched function.
> > +
> > + There are currently no such attributes.
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index c7f49ae..7985293 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -5725,6 +5725,7 @@ F: kernel/livepatch/
> > F: include/linux/livepatch.h
> > F: arch/x86/include/asm/livepatch.h
> > F: arch/x86/kernel/livepatch.c
> > +F: Documentation/ABI/testing/sysfs-kernel-livepatch
> > L: [email protected]
> >
> > LLC (802.2)
> >
>
>
> --
> Masami HIRAMATSU
> Software Platform Research Dept. Linux Technology Research Center
> Hitachi, Ltd., Yokohama Research Laboratory
> E-mail: [email protected]
>
>

2014-11-21 17:35:53

by Jiri Slaby

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On 11/21/2014, 05:40 PM, Seth Jennings wrote:
>>> --- /dev/null
>>> +++ b/arch/x86/include/asm/livepatch.h
>>> @@ -0,0 +1,37 @@
...
>>> +#ifndef _ASM_X86_LIVEPATCH_H
>>> +#define _ASM_X86_LIVEPATCH_H
>>> +
>>> +#include <linux/module.h>
>>> +
>>> +#ifdef CONFIG_LIVE_PATCHING
>>> +extern int klp_write_module_reloc(struct module *mod, unsigned long type,
>>> + unsigned long loc, unsigned long value);
>>> +
>>> +#else
>>> +static int klp_write_module_reloc(struct module *mod, unsigned long type,
>>
>> static inline?
>
> I think the practice is to let the compiler handle inline determination
> unless you are sure that the compiler isn't inlining something you think
> it should.

Although you are right that it is correct C, the gcc documentation
(section 6.39) suggests using 'static inline' for such functions, and gcc
will then inline them.

And if you look around in the kernel, we use that combination almost
everywhere.
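I.e. the stub would look something like this (the return value is just a
placeholder here; the point is only the 'static inline'):

  static inline int klp_write_module_reloc(struct module *mod,
                                           unsigned long type,
                                           unsigned long loc,
                                           unsigned long value)
  {
          return -ENOSYS;
  }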

thanks,
--
js
suse labs

2014-11-21 17:54:09

by Andy Lutomirski

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 8:40 AM, Seth Jennings <[email protected]> wrote:
> On Fri, Nov 21, 2014 at 01:22:33AM +0100, Jiri Kosina wrote:
>> On Thu, 20 Nov 2014, Seth Jennings wrote:
>>
>> > This commit introduces code for the live patching core. It implements
>> > an ftrace-based mechanism and kernel interface for doing live patching
>> > of kernel and kernel module functions.
>> >
>> > It represents the greatest common functionality set between kpatch and
>> > kgraft and can accept patches built using either method.
>> >
>> > This first version does not implement any consistency mechanism that
>> > ensures that old and new code do not run together. In practice, ~90% of
>> > CVEs are safe to apply in this way, since they simply add a conditional
>> > check. However, any function change that can not execute safely with
>> > the old version of the function can _not_ be safely applied in this
>> > version.
>> >
>> > Signed-off-by: Seth Jennings <[email protected]>
>>
>> I think this is getting really close, which is awesome. A few rather minor
>> nits below.
>>
>> [ ... snip ... ]
>> > diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
>> > new file mode 100644
>> > index 0000000..2ed86ec
>> > --- /dev/null
>> > +++ b/arch/x86/include/asm/livepatch.h
>> > @@ -0,0 +1,37 @@
>> > +/*
>> > + * livepatch.h - x86-specific Kernel Live Patching Core
>> > + *
>> > + * Copyright (C) 2014 Seth Jennings <[email protected]>
>> > + *
>> > + * This program is free software; you can redistribute it and/or
>> > + * modify it under the terms of the GNU General Public License
>> > + * as published by the Free Software Foundation; either version 2
>> > + * of the License, or (at your option) any later version.
>> > + *
>> > + * This program is distributed in the hope that it will be useful,
>> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> > + * GNU General Public License for more details.
>> > + *
>> > + * You should have received a copy of the GNU General Public License
>> > + * along with this program; if not, see <http://www.gnu.org/licenses/>.
>> > + */
>> > +
>> > +#ifndef _ASM_X86_LIVEPATCH_H
>> > +#define _ASM_X86_LIVEPATCH_H
>> > +
>> > +#include <linux/module.h>
>> > +
>> > +#ifdef CONFIG_LIVE_PATCHING
>> > +extern int klp_write_module_reloc(struct module *mod, unsigned long type,
>> > + unsigned long loc, unsigned long value);
>> > +
>> > +#else
>> > +static int klp_write_module_reloc(struct module *mod, unsigned long type,
>>
>> static inline?
>
> I think the practice is to let the compiler handle inline determination
> unless you are sure that the compiler isn't inlining something you think
> it should.
>

It has to be static inline in a header, right? Otherwise, in theory,
the header could generate code, and that's no good. (The compiler
should optimize it out, but still.)

--Andy

> All other changes are accepted and queued for v4.
>
> Thanks,
> Seth
>
>>
>> [ ... snip ... ]
>> > --- /dev/null
>> > +++ b/kernel/livepatch/Kconfig
>> > @@ -0,0 +1,18 @@
>> > +config ARCH_HAVE_LIVE_PATCHING
>> > + boolean
>> > + help
>> > + Arch supports kernel live patching
>> > +
>> > +config LIVE_PATCHING
>> > + boolean "Kernel Live Patching"
>> > + depends on DYNAMIC_FTRACE_WITH_REGS
>> > + depends on MODULES
>> > + depends on SYSFS
>> > + depends on KALLSYMS_ALL
>> > + depends on ARCH_HAVE_LIVE_PATCHING
>>
>> We have to refuse to build on x86_64 if the compiler doesn't support
>> fentry. mcount is not really usable (well, it would be possible to use it,
>> but the obstacles are too big to care).
>>
>> Something like [1] should be applicable here as well I believe.
>>
>> [1] https://git.kernel.org/cgit/linux/kernel/git/jirislaby/kgraft.git/commit/?h=kgraft&id=bd4bc097c72937d18036f1312a4d79ed0bea9991
>>
>> [ ... snip ... ]
>> > --- /dev/null
>> > +++ b/kernel/livepatch/core.c
>> > @@ -0,0 +1,828 @@
>> > +/*
>> > + * core.c - Kernel Live Patching Core
>> > + *
>> > + * Copyright (C) 2014 Seth Jennings <[email protected]>
>> > + *
>> > + * This program is free software; you can redistribute it and/or
>> > + * modify it under the terms of the GNU General Public License
>> > + * as published by the Free Software Foundation; either version 2
>> > + * of the License, or (at your option) any later version.
>> > + *
>> > + * This program is distributed in the hope that it will be useful,
>> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> > + * GNU General Public License for more details.
>> > + *
>> > + * You should have received a copy of the GNU General Public License
>> > + * along with this program; if not, see <http://www.gnu.org/licenses/>.
>> > + */
>> > +
>> > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>> > +
>> > +#include <linux/module.h>
>> > +#include <linux/kernel.h>
>> > +#include <linux/mutex.h>
>> > +#include <linux/slab.h>
>> > +#include <linux/ftrace.h>
>> > +#include <linux/list.h>
>> > +#include <linux/kallsyms.h>
>> > +#include <linux/livepatch.h>
>> > +
>> > +/*************************************
>> > + * Core structures
>> > + ************************************/
>>
>> I don't personally find such markers (especially with all the '*'s) too
>> tasteful, and I don't recall ever seeing this being common pattern used in
>> the kernel code ... ?
>>
>> > +static DEFINE_MUTEX(klp_mutex);
>> > +static LIST_HEAD(klp_patches);
>> > +
>> > +/*******************************************
>> > + * Helpers
>> > + *******************************************/
>> > +
>> > +/* sets obj->mod if object is not vmlinux and module is found */
>> > +static bool klp_find_object_module(struct klp_object *obj)
>> > +{
>> > + if (!strcmp(obj->name, "vmlinux"))
>> > + return 1;
>>
>> Rather a matter of taste again -- I personally would prefer "obj->name ==
>> NULL" to be the condition identifying core kernel code text. You can't
>> really forbid any lunatic out there calling his kernel module "vmlinux",
>> right? :)
>>
>> [ ... snip ... ]
>>
>> > +/***********************************
>> > + * ftrace registration
>> > + **********************************/
>> > +
>> > +static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
>> > + struct ftrace_ops *ops, struct pt_regs *regs)
>> > +{
>> > + struct klp_func *func = ops->private;
>> > +
>> > + regs->ip = (unsigned long)func->new_func;
>> > +}
>> > +
>> > +static int klp_enable_func(struct klp_func *func)
>> > +{
>> > + int ret;
>> > +
>> > + if (WARN_ON(!func->old_addr || func->state != LPC_DISABLED))
>> > + return -EINVAL;
>>
>> If the WARN_ON triggers, there is no good way to find out which of the two
>> conditions triggered it.
>>
>> [ ... snip ... ]
>> > +static int klp_init_patch(struct klp_patch *patch)
>> > +{
>> > + int ret;
>> > +
>> > + mutex_lock(&klp_mutex);
>> > +
>> > + /* init */
>> > + patch->state = LPC_DISABLED;
>> > +
>> > + /* sysfs */
>> > + ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
>> > + klp_root_kobj, patch->mod->name);
>> > + if (ret)
>> > + return ret;
>>
>> klp_mutex is leaked locked here.
>>
>> > +
>> > + /* create objects */
>> > + ret = klp_init_objects(patch);
>> > + if (ret) {
>> > + kobject_put(&patch->kobj);
>> > + return ret;
>>
>> And here as well.
>>
>> All in all, this is looking very good to me. I think we are really close
>> to having a code that all the parties would agree with. Thanks everybody,
>>
>> --
>> Jiri Kosina
>> SUSE Labs



--
Andy Lutomirski
AMA Capital Management, LLC

2014-11-21 19:16:51

by Seth Jennings

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri, Nov 21, 2014 at 06:35:47PM +0100, Jiri Slaby wrote:
> On 11/21/2014, 05:40 PM, Seth Jennings wrote:
> >>> --- /dev/null
> >>> +++ b/arch/x86/include/asm/livepatch.h
> >>> @@ -0,0 +1,37 @@
> ...
> >>> +#ifndef _ASM_X86_LIVEPATCH_H
> >>> +#define _ASM_X86_LIVEPATCH_H
> >>> +
> >>> +#include <linux/module.h>
> >>> +
> >>> +#ifdef CONFIG_LIVE_PATCHING
> >>> +extern int klp_write_module_reloc(struct module *mod, unsigned long type,
> >>> + unsigned long loc, unsigned long value);
> >>> +
> >>> +#else
> >>> +static int klp_write_module_reloc(struct module *mod, unsigned long type,
> >>
> >> static inline?
> >
> > I think the practice is to let the compiler handle inline determination
> > unless you are sure that the compiler isn't inlining something you think
> > it should.
>
> Although you are right and it is correct C, the gcc docs (section 6.39) suggest
> using 'static inline' on such functions. gcc will then inline such functions.

Fair enough. Queued up.
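
For reference, the queued-up stub would presumably end up looking something like this; the body, including the -ENOSYS return value, is an assumption here rather than a quote from the updated patch:

static inline int klp_write_module_reloc(struct module *mod, unsigned long type,
					 unsigned long loc, unsigned long value)
{
	/* live patching disabled: no relocations can be written (assumed stub) */
	return -ENOSYS;
}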

Thanks,
Seth

>
> And if you look around in the kernel, we use that combination almost
> everywhere.
>
> thanks,
> --
> js
> suse labs
> --
> To unsubscribe from this list: send the line "unsubscribe live-patching" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2014-11-24 11:13:37

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Thu, 20 Nov 2014, Seth Jennings wrote:
> This commit introduces code for the live patching core. It implements
> an ftrace-based mechanism and kernel interface for doing live patching
> of kernel and kernel module functions.
>
> It represents the greatest common functionality set between kpatch and
> kgraft and can accept patches built using either method.
>
> This first version does not implement any consistency mechanism that
> ensures that old and new code do not run together. In practice, ~90% of
> CVEs are safe to apply in this way, since they simply add a conditional
> check. However, any function change that can not execute safely with
> the old version of the function can _not_ be safely applied in this
> version.

To be honest this sounds frightening.

How is it determined whether a change can be applied w/o a consistency
mechanism or not?

Thanks,

tglx

2014-11-24 13:20:57

by Jiri Kosina

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Mon, 24 Nov 2014, Thomas Gleixner wrote:

> > This commit introduces code for the live patching core. It implements
> > an ftrace-based mechanism and kernel interface for doing live patching
> > of kernel and kernel module functions.
> >
> > It represents the greatest common functionality set between kpatch and
> > kgraft and can accept patches built using either method.
> >
> > This first version does not implement any consistency mechanism that
> > ensures that old and new code do not run together. In practice, ~90% of
> > CVEs are safe to apply in this way, since they simply add a conditional
> > check. However, any function change that can not execute safely with
> > the old version of the function can _not_ be safely applied in this
> > version.
>
> To be honest this sounds frightening.
>
> How is it determined whether a change can be applied w/o a consistency
> mechanism or not?

By a human being producing the "live patch" code.

If the semantics of the patch requires consistency mechanism to be applied
(such as "old and new function must not run in parallel, because locking
rules would be violated", or "return value from a function that is being
called in a loop is changing its meaning", etc.), then this first naive
implementation simply can't be used.

For simple things though, such as "add a missing bounds check to sys_foo()
prologue and return -EINVAL if out-of-bounds", this is sufficient.
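
For illustration, a patch module for that kind of fix might look roughly like the sketch below. sys_foo(), its argument and FOO_MAX are hypothetical, the array terminators and the klp_register_patch()/klp_enable_patch() calls follow the structures and API introduced by this series; treat it as a sketch rather than a drop-in example:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>

#define FOO_MAX 16	/* hypothetical bound */

/* replacement for the (hypothetical) vulnerable sys_foo() */
asmlinkage long klp_sys_foo(unsigned int index)
{
	/* the added bounds check */
	if (index >= FOO_MAX)
		return -EINVAL;

	/* ... remainder of the original sys_foo() body, unchanged ... */
	return 0;
}

static struct klp_func funcs[] = {
	{
		.old_name = "sys_foo",
		.new_func = klp_sys_foo,
	}, { }	/* terminated by an entry with a NULL old_name */
};

static struct klp_object objs[] = {
	{
		.name = "vmlinux",
		.funcs = funcs,
	}, { }
};

static struct klp_patch patch = {
	.mod = THIS_MODULE,
	.objs = objs,
};

static int __init patch_init(void)
{
	int ret;

	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
	return klp_enable_patch(&patch);
}

module_init(patch_init);
MODULE_LICENSE("GPL");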

It's being designed in a way that more advanced consistency models (such
as the ones kgraft and kpatch are currently implementing) can be built on
top of it.

The person writing the patch would always need to understand what he is
doing to be able to pick correct consistency model to be used. I
personally think this is a good thing -- this is nothing where we should
be relying on any kinds of tools.

Thanks,

--
Jiri Kosina
SUSE Labs

2014-11-24 13:23:12

by Vojtech Pavlik

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Mon, Nov 24, 2014 at 12:13:20PM +0100, Thomas Gleixner wrote:

> > This commit introduces code for the live patching core. It implements
> > an ftrace-based mechanism and kernel interface for doing live patching
> > of kernel and kernel module functions.
> >
> > It represents the greatest common functionality set between kpatch and
> > kgraft and can accept patches built using either method.
> >
> > This first version does not implement any consistency mechanism that
> > ensures that old and new code do not run together. In practice, ~90% of
> > CVEs are safe to apply in this way, since they simply add a conditional
> > check. However, any function change that can not execute safely with
> > the old version of the function can _not_ be safely applied in this
> > version.
>
> To be honest this sounds frightening.
>
> How is it determined whether a change can be applied w/o a consistency
> mechanism or not?

If there are any syntactic (function prototype - arguments, return value
type) or semantic dependencies (data value semantics, locking order,
...) between multiple functions the patch changes, then it cannot be
applied.

If the changes are small and localized, don't depend on the order in
which individual functions are replaced, and both new and old code can
safely run in parallel on different CPUs or in sequence in a single
thread, then it can be applied.

--
Vojtech Pavlik
Director SUSE Labs

2014-11-24 13:25:27

by Josh Poimboeuf

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Mon, Nov 24, 2014 at 12:13:20PM +0100, Thomas Gleixner wrote:
> On Thu, 20 Nov 2014, Seth Jennings wrote:
> > This commit introduces code for the live patching core. It implements
> > an ftrace-based mechanism and kernel interface for doing live patching
> > of kernel and kernel module functions.
> >
> > It represents the greatest common functionality set between kpatch and
> > kgraft and can accept patches built using either method.
> >
> > This first version does not implement any consistency mechanism that
> > ensures that old and new code do not run together. In practice, ~90% of
> > CVEs are safe to apply in this way, since they simply add a conditional
> > check. However, any function change that can not execute safely with
> > the old version of the function can _not_ be safely applied in this
> > version.
>
> To be honest this sounds frightening.

The risky part of live patching is patch analysis and creation. If the
user isn't careful with their analysis, they're playing with fire.

We'll be documenting the patch analysis steps, so that if you carefully
follow the steps, it's safe. But that should generally be the role of
the distribution.

> How is it determined whether a change can be applied w/o a consistency
> mechanism or not?

The following are not safe without consistency mechanisms:

- function prototype changes
- data structure changes
- data semantic changes: changes to how functions interact with a data
structure, e.g. locking order

We'll be adding the consistency mechanisms later to enable function and
data consistency. But even then you have to be very careful.

--
Josh

2014-11-24 13:26:27

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Mon, 24 Nov 2014, Jiri Kosina wrote:
> On Mon, 24 Nov 2014, Thomas Gleixner wrote:
> > How is it determined whether a change can be applied w/o a consistency
> > mechanism or not?
>
> By a human being producing the "live patch" code.
>
> If the semantics of the patch requires consistency mechanism to be applied
> (such as "old and new function must not run in parallel, because locking
> rules would be violated", or "return value from a function that is being
> called in a loop is changing its meaning", etc.), then this first naive
> implementation simply can't be used.
>
> For simple things though, such as "add a missing bounds check to sys_foo()
> prologue and return -EINVAL if out-of-bounds", this is sufficient.
>
> It's being designed in a way that more advanced consistency models (such
> as the ones kgraft and kpatch are currently implementing) can be built on
> top of it.
>
> The person writing the patch would always need to understand what he is
> doing to be able to pick correct consistency model to be used. I
> personally think this is a good thing -- this is nothing where we should
> be relying on any kinds of tools.

But why do we want to provide a mechanism which has no consistency
enforcement at all?

Surely you can argue that the person who is doing that is supposed to
know what he's doing, but what's the downside of enforcing consistency
mechanisms on all live code changes?

Thanks,

tglx

Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

(2014/11/24 20:13), Thomas Gleixner wrote:
> On Thu, 20 Nov 2014, Seth Jennings wrote:
>> This commit introduces code for the live patching core. It implements
>> an ftrace-based mechanism and kernel interface for doing live patching
>> of kernel and kernel module functions.
>>
>> It represents the greatest common functionality set between kpatch and
>> kgraft and can accept patches built using either method.
>>
>> This first version does not implement any consistency mechanism that
>> ensures that old and new code do not run together. In practice, ~90% of
>> CVEs are safe to apply in this way, since they simply add a conditional
>> check. However, any function change that can not execute safely with
>> the old version of the function can _not_ be safely applied in this
>> version.
>
> To be honest this sounds frightening.
>

I see. This is just a minimal one. We'll add several consistency
mechanisms :).

> How is it determined whether a change can be applied w/o a consistency
> mechanism or not?

I guess the following cases are possible to apply:
- the code itself ensures consistency (e.g. it runs in a critical section),
or
- the patch just changes one place (e.g. adds an if branch), only depends
on the local context (e.g. local variables), and isn't called in a loop,
so that the old and new versions of the function can co-exist.

Thank you,

>
> Thanks,
>
> tglx
>
>


--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

2014-11-24 13:31:47

by Vojtech Pavlik

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Mon, Nov 24, 2014 at 02:26:08PM +0100, Thomas Gleixner wrote:
> On Mon, 24 Nov 2014, Jiri Kosina wrote:
> > On Mon, 24 Nov 2014, Thomas Gleixner wrote:
> > > How is it determined whether a change can be applied w/o a consistency
> > > mechanism or not?
> >
> > By a human being producing the "live patch" code.
> >
> > If the semantics of the patch requires consistency mechanism to be applied
> > (such as "old and new function must not run in parallel, because locking
> > rules would be violated", or "return value from a function that is being
> > called in a loop is changing its meaning", etc.), then this first naive
> > implementation simply can't be used.
> >
> > For simple things though, such as "add a missing bounds check to sys_foo()
> > prologue and return -EINVAL if out-of-bounds", this is sufficient.
> >
> > It's being designed in a way that more advanced consistency models (such
> > as the ones kgraft and kpatch are currently implementing) can be built on
> > top of it.
> >
> > The person writing the patch would always need to understand what he is
> > doing to be able to pick correct consistency model to be used. I
> > personally think this is a good thing -- this is nothing where we should
> > be relying on any kinds of tools.
>
> But why do we want to provide a mechanism which has no consistency
> enforcement at all?
>
> Surely you can argue that the person who is doing that is supposed to
> know what he's doing, but what's the downside of enforcing consistency
> mechanisms on all live code changes?

The consistency engine implementing the consistency model is the most
complex part of the live patching technology. We want to have something
small, easy to understand pushed out first, to build on top of that.

Plus we're still discussing which exact consistency model to use for
upstream live patching (there are many considerations) and whether one
is enough, or whether an engine that can do more than one is required.

The consistency models of kpatch and kGraft aren't directly compatible.

I think we're on a good way towards a single model, but we'll see when
it's implemented within the live patching framework just posted.

--
Vojtech Pavlik
Director SUSE Labs

2014-11-24 13:31:52

by Jiri Kosina

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Mon, 24 Nov 2014, Thomas Gleixner wrote:

> > The person writing the patch would always need to understand what he is
> > doing to be able to pick correct consistency model to be used. I
> > personally think this is a good thing -- this is nothing where we should
> > be relying on any kinds of tools.
>
> But why do we want to provide a mechanism which has no consistency
> enforcement at all?

"No consistency model needed" is also a consistency model in a sense that
there is a (large) group of patches that can be applied that way. We've
done some very rough analysis, and vast majority patches for CVE bugs with
severity 6+ (which is in some sense the main motivation for all this) are
applicable without any need of extra consistency model.

The "add bounds checking to syscall entry" is a prime example of that.

> Surely you can argue that the person who is doing that is supposed to
> know what he's doing, but what's the downside of enforcing consistency
> mechanisms on all live code changes?

The implementation of the consistency models (the ones that kgraft and
kpatch have at least) is not really super-trivial and it's sometimes
tricky to get it right and cover all the corner cases.

So the agreement was to cover the "no consistency model needed" group of
live patches first, and to design the API and data structures in such a way
that more sophisticated consistency models can be added on top as needed
in the future.

--
Jiri Kosina
SUSE Labs

Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

(2014/11/24 22:31), Vojtech Pavlik wrote:
> On Mon, Nov 24, 2014 at 02:26:08PM +0100, Thomas Gleixner wrote:
>> On Mon, 24 Nov 2014, Jiri Kosina wrote:
>>> On Mon, 24 Nov 2014, Thomas Gleixner wrote:
>>>> How is it determined whether a change can be applied w/o a consistency
>>>> mechanism or not?
>>>
>>> By a human being producing the "live patch" code.
>>>
>>> If the semantics of the patch requires consistency mechanism to be applied
>>> (such as "old and new function must not run in parallel, because locking
>>> rules would be violated", or "return value from a function that is being
>>> called in a loop is changing its meaning", etc.), then this first naive
>>> implementation simply can't be used.
>>>
>>> For simple things though, such as "add a missing bounds check to sys_foo()
>>> prologue and return -EINVAL if out-of-bounds", this is sufficient.
>>>
>>> It's being designed in a way that more advanced consistency models (such
>>> as the ones kgraft and kpatch are currently implementing) can be built on
>>> top of it.
>>>
>>> The person writing the patch would always need to understand what he is
>>> doing to be able to pick correct consistency model to be used. I
>>> personally think this is a good thing -- this is nothing where we should
>>> be relying on any kinds of tools.
>>
>> But why do we want to provide a mechanism which has no consistency
>> enforcement at all?
>>
>> Surely you can argue that the person who is doing that is supposed to
>> know what he's doing, but what's the downside of enforcing consistency
>> mechanisms on all live code changes?
>
> The consistency engine implementing the consistency model is the most
> complex part of the live patching technology. We want to have something
> small, easy to understand pushed out first, to build on top of that.

I think we'd better incubate this live patching in another tree
until those consistency engines/models are sufficiently prepared.

>
> Plus we're still discussing which exact consistency model to use for
> upstream live patching (there are many considerations) and whether one
> is enough, or whether an engine that can do more than one is required.
>
> The consistency models of kpatch and kGraft aren't directly compatible.

They may not be compatible, but they are complementary. This patch series
clarifies the common patch module format; I think we just need consistency
engines and a selector flag for each patch module.
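
Purely as a sketch of that idea (nothing like this exists in the posted series; all names below are made up), a per-patch selector could look like:

/* hypothetical consistency-model selector */
enum klp_consistency_model {
	KLP_CM_NONE,		/* what this series implements today */
	KLP_CM_LEAVE_FUNCTION,	/* e.g. a kpatch-style model */
	KLP_CM_LEAVE_KERNEL,	/* e.g. a kGraft-style per-task model */
};

struct klp_patch {
	/* external */
	struct module *mod;
	struct klp_object *objs;
	enum klp_consistency_model cm;	/* chosen by the patch author */

	/* internal fields as in the posted series ... */
};

A consistency engine could then refuse to enable a patch whose selected model it does not implement.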

Thank you,

>
> I think we're on a good way towards a single model, but we'll see when
> it's implemented within the live patching framework just posted.
>


--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]

2014-11-25 16:39:45

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Fri 2014-11-21 11:39:24, Masami Hiramatsu wrote:
> (2014/11/21 7:29), Seth Jennings wrote:
> > This commit introduces code for the live patching core. It implements
> > an ftrace-based mechanism and kernel interface for doing live patching
> > of kernel and kernel module functions.
> >
> > It represents the greatest common functionality set between kpatch and
> > kgraft and can accept patches built using either method.
> >
> > This first version does not implement any consistency mechanism that
> > ensures that old and new code do not run together. In practice, ~90% of
> > CVEs are safe to apply in this way, since they simply add a conditional
> > check. However, any function change that can not execute safely with
> > the old version of the function can _not_ be safely applied in this
> > version.
>
> Thanks for updating :)
>
> BTW, this still have some LPC_XXX macros, those should be KLP_XXX.
>
> Also, as I sent a series of IPMODIFY patches (just now), could you consider
> to use the flag? :)

Hmm, it would cause problems with the current LivePatch, kGraft
implementation, and probably also with kPatch. They register more
than one ftrace handler with IPMODIFY at the same time.

They pass pointer to the func-related structure via the "private" field
in struct ftrace_ops. The structure provides information where the old
and new code is.

They need to update the structure when new patch for the same functions
appears. It is done by registering a new ftrace function related to the
new patch and unregistering an old ftrace function from the old patch.

We would need to maintain some patch-independent list of ftrace_ops
and the related private fields to avoid the double registration.

[...]

> > diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
> > new file mode 100644
> > index 0000000..eab84df
> > --- /dev/null
> > +++ b/include/linux/livepatch.h
[...]
> > +struct klp_func {
> > + /* external */
> > + const char *old_name; /* function to be patched */
> > + void *new_func; /* replacement function in patch module */
> > + /*
> > + * The old_addr field is optional and can be used to resolve
> > + * duplicate symbol names in the vmlinux object. If this
> > + * information is not present, the symbol is located by name
> > + * with kallsyms. If the name is not unique and old_addr is
> > + * not provided, the patch application fails as there is no
> > + * way to resolve the ambiguity.
> > + */
> > + unsigned long old_addr;
> > +
> > + /* internal */
> > + struct kobject *kobj;
> > + struct ftrace_ops fops;
> > + enum klp_state state;
> > +};

This is the func-related struct in LivePatch.

> > +
> > +struct klp_reloc {
> > + unsigned long dest;
> > + unsigned long src;
> > + unsigned long type;
> > + const char *name;
> > + int addend;
> > + int external;
> > +};
> > +
> > +struct klp_object {
> > + /* external */
> > + const char *name; /* "vmlinux" or module name */
> > + struct klp_reloc *relocs;
> > + struct klp_func *funcs;
> > +
> > + /* internal */
> > + struct kobject *kobj;
> > + struct module *mod; /* module associated with object */
> > + enum klp_state state;
> > +};
> > +
> > +struct klp_patch {
> > + /* external */
> > + struct module *mod; /* module containing the patch */
> > + struct klp_object *objs;
> > +
> > + /* internal */
> > + struct list_head list;
> > + struct kobject kobj;
> > + enum klp_state state;
> > +};
> > +
[...]
> > --- /dev/null
> > +++ b/kernel/livepatch/core.c
[...]
> > +/***********************************
> > + * ftrace registration
> > + **********************************/
> > +
> > +static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> > + struct ftrace_ops *ops, struct pt_regs *regs)
> > +{
> > + struct klp_func *func = ops->private;
> > +
> > + regs->ip = (unsigned long)func->new_func;
> > +}
> > +
> > +static int klp_enable_func(struct klp_func *func)
> > +{
> > + int ret;
> > +
> > + if (WARN_ON(!func->old_addr || func->state != LPC_DISABLED))
> > + return -EINVAL;
> > +
> > + ret = ftrace_set_filter_ip(&func->fops, func->old_addr, 0, 0);
> > + if (ret) {
> > + pr_err("failed to set ftrace filter for function '%s' (%d)\n",
> > + func->old_name, ret);
> > + return ret;
> > + }
> > + ret = register_ftrace_function(&func->fops);
> > + if (ret) {
> > + pr_err("failed to register ftrace handler for function '%s' (%d)\n",
> > + func->old_name, ret);
> > + ftrace_set_filter_ip(&func->fops, func->old_addr, 1, 0);
> > + } else
> > + func->state = LPC_ENABLED;
> > +
> > + return ret;
> > +}

This is a sample of the registration. Please see that it is function-
and patch-specific.

> > +static int klp_disable_func(struct klp_func *func)
> > +{
> > + int ret;
> > +
> > + if (WARN_ON(func->state != LPC_ENABLED))
> > + return -EINVAL;
> > +
> > + if (!func->old_addr)
> > + /* parent object is not loaded */
> > + return 0;
> > + ret = unregister_ftrace_function(&func->fops);
> > + if (ret) {
> > + pr_err("failed to unregister ftrace handler for function '%s' (%d)\n",
> > + func->old_name, ret);
> > + return ret;
> > + }
> > + ret = ftrace_set_filter_ip(&func->fops, func->old_addr, 1, 0);
> > + if (ret)
> > + pr_warn("function unregister succeeded but failed to clear the filter\n");
> > + func->state = LPC_DISABLED;
> > +
> > + return 0;
> > +}

[...]

> > +static int klp_init_funcs(struct klp_object *obj)
> > +{
> > + struct klp_func *func;
> > + struct ftrace_ops *ops;
> > +
> > + for (func = obj->funcs; func->old_name; func++) {
> > + func->state = LPC_DISABLED;
> > + ops = &func->fops;
> > + ops->private = func;
> > + ops->func = klp_ftrace_handler;
> > + ops->flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_DYNAMIC;
> > +
> > + /* sysfs */
> > + func->kobj = kobject_create_and_add(func->old_name, obj->kobj);
> > + if (!func->kobj)
> > + goto free;
> > + }
> > +
> > + return 0;
> > +free:
> > + klp_free_funcs_limited(obj, func);
> > + return -ENOMEM;
> > +}

This shows how the ftrace ops is initialized.


Note that it is even more complicated in kGraft. We use different ftrace
functions for the slow and fast handling. The slow handling needs to
do some checks and decide whether to use the old or new code. The fast
handling just sets regs->ip to the new address. See

kgr_stub_fast()
kgr_stub_slow()
kgr_patch_code()

at
https://git.kernel.org/cgit/linux/kernel/git/jirislaby/kgraft.git/tree/kernel/kgraft.c?h=kgraft&id=00803476a5c51bd281999a0b3e305c4081fccd6b
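
Very roughly, the fast/slow split described above works like the sketch below; the struct, field and flag names are simplified stand-ins here, not the actual kGraft code:

#include <linux/ftrace.h>
#include <linux/sched.h>

struct kgr_loc_caches {			/* name and layout assumed */
	unsigned long old_func;
	unsigned long new_func;
};

/* fast stub: the transition is over, everyone gets the new code */
static void kgr_stub_fast(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kgr_loc_caches *c = ops->private;

	regs->ip = c->new_func;
}

/* slow stub: per-task decision while the patching is still in progress */
static void kgr_stub_slow(unsigned long ip, unsigned long parent_ip,
			  struct ftrace_ops *ops, struct pt_regs *regs)
{
	struct kgr_loc_caches *c = ops->private;

	/* tasks still marked "in progress" keep running the old code */
	if (test_tsk_thread_flag(current, TIF_KGR_IN_PROGRESS))	/* flag name assumed */
		regs->ip = c->old_func;
	else
		regs->ip = c->new_func;
}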


Best Regards,
Petr

2014-11-25 16:52:17

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Tue, 25 Nov 2014 17:39:43 +0100
Petr Mladek <[email protected]> wrote:

> On Fri 2014-11-21 11:39:24, Masami Hiramatsu wrote:
> > (2014/11/21 7:29), Seth Jennings wrote:
> > > This commit introduces code for the live patching core. It implements
> > > an ftrace-based mechanism and kernel interface for doing live patching
> > > of kernel and kernel module functions.
> > >
> > > It represents the greatest common functionality set between kpatch and
> > > kgraft and can accept patches built using either method.
> > >
> > > This first version does not implement any consistency mechanism that
> > > ensures that old and new code do not run together. In practice, ~90% of
> > > CVEs are safe to apply in this way, since they simply add a conditional
> > > check. However, any function change that can not execute safely with
> > > the old version of the function can _not_ be safely applied in this
> > > version.
> >
> > Thanks for updating :)
> >
> > BTW, this still have some LPC_XXX macros, those should be KLP_XXX.
> >
> > Also, as I sent a series of IPMODIFY patches (just now), could you consider
> > to use the flag? :)
>
> Hmm, it would cause problems with the current LivePatch, kGraft
> implementation, and probably also with kPatch. They register more
> than one ftrace handler with IPMODIFY at the same time.

But are they hooked to the same functions? That would be a big problem,
and should be avoided. Why would you want two ftrace_ops returning two
different IPs for one function? That causes a paradox. Why would you
want that?

>
> They pass pointer to the func-related structure via the "private" field
> in struct ftrace_ops. The structure provides information where the old
> and new code is.
>
> They need to update the structure when new patch for the same functions
> appears. It is done by registering a new ftrace function related to the
> new patch and unregistering an old ftrace function from the old patch.
>
> We would need to maintain some patch-independent list of ftrace_ops
> and the related private fields to avoid the double registration.

Yes, that would make sense.

You could create one ftrace_ops per function. That would be ideal
because then you get to take advantage of having your own trampoline
per function and no need to worry about what function needs to go with
another function.
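
A rough sketch of that direction, with one shared ftrace_ops per patched function and a stack of patches on top of it; klp_ops, func_stack and stack_node are hypothetical names, not code from this series (struct klp_func would need the extra list head), and the handler would then pick the newest klp_func for the location instead of reading ops->private:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/ftrace.h>
#include <linux/livepatch.h>

/* one entry per patched function location, shared by all patches */
struct klp_ops {
	struct list_head node;		/* on a global, klp_mutex-protected list */
	struct list_head func_stack;	/* klp_funcs patching this location, newest first */
	struct ftrace_ops fops;
};

static void klp_ftrace_handler(unsigned long ip, unsigned long parent_ip,
			       struct ftrace_ops *fops, struct pt_regs *regs)
{
	struct klp_ops *ops = container_of(fops, struct klp_ops, fops);
	struct klp_func *func;

	/* the most recently enabled patch of this function wins */
	func = list_first_entry_or_null(&ops->func_stack, struct klp_func,
					stack_node);
	if (func)
		regs->ip = (unsigned long)func->new_func;
}

Each klp_ops entry would be registered with ftrace exactly once, no matter how many patches stack on the same function, which also sidesteps the double-registration problem with IPMODIFY.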

-- Steve

2014-11-25 17:04:33

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Tue 2014-11-25 11:52:10, Steven Rostedt wrote:
> On Tue, 25 Nov 2014 17:39:43 +0100
> Petr Mladek <[email protected]> wrote:
>
> > On Fri 2014-11-21 11:39:24, Masami Hiramatsu wrote:
> > > (2014/11/21 7:29), Seth Jennings wrote:
> > > > This commit introduces code for the live patching core. It implements
> > > > an ftrace-based mechanism and kernel interface for doing live patching
> > > > of kernel and kernel module functions.
> > > >
> > > > It represents the greatest common functionality set between kpatch and
> > > > kgraft and can accept patches built using either method.
> > > >
> > > > This first version does not implement any consistency mechanism that
> > > > ensures that old and new code do not run together. In practice, ~90% of
> > > > CVEs are safe to apply in this way, since they simply add a conditional
> > > > check. However, any function change that can not execute safely with
> > > > the old version of the function can _not_ be safely applied in this
> > > > version.
> > >
> > > Thanks for updating :)
> > >
> > > BTW, this still have some LPC_XXX macros, those should be KLP_XXX.
> > >
> > > Also, as I sent a series of IPMODIFY patches (just now), could you consider
> > > to use the flag? :)
> >
> > Hmm, it would cause problems with the current LivePatch, kGraft
> > implementation, and probably also with kPatch. They register more
> > than one ftrace handler with IPMODIFY at the same time.
>
> But are they hooked to the same functions? That would be a big problem,
> and should be avoided. Why would you want two ftrace_ops returning two
> different IPs for one function? That causes a paradox. Why would you
> want that?

We do not mind which one wins. The two functions are registered only
temporarily. It is guaranteed that they both set the same regs->ip
address during this time frame.


> > They pass pointer to the func-related structure via the "private" field
> > in struct ftrace_ops. The structure provides information where the old
> > and new code is.
> >
> > They need to update the structure when new patch for the same functions
> > appears. It is done by registering a new ftrace function related to the
> > new patch and unregistering an old ftrace function from the old patch.
> >
> > We would need to maintain some patch-independent list of ftrace_ops
> > and the related private fields to avoid the double registration.
>
> Yes, that would make sense.
>
> You could create one ftrace_ops per function. That would be ideal
> because then you get to take advantage of having your own trampoline
> per function and no need to worry about what function needs to go with
> another function.

It adds some complexity, but I think that we will need to go this way.
The check for IPMODIFY conflicts makes sense. It helps to avoid any
misuse.

My main intention was to point out the problem and that we would need
to handle it somehow :)

Best Regards,
Petr

2014-11-25 17:16:16

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Tue, 25 Nov 2014 18:04:31 +0100
Petr Mladek <[email protected]> wrote:

> On Tue 2014-11-25 11:52:10, Steven Rostedt wrote:
> > On Tue, 25 Nov 2014 17:39:43 +0100
> > Petr Mladek <[email protected]> wrote:
> >
> > > On Fri 2014-11-21 11:39:24, Masami Hiramatsu wrote:
> > > > (2014/11/21 7:29), Seth Jennings wrote:
> > > > > This commit introduces code for the live patching core. It implements
> > > > > an ftrace-based mechanism and kernel interface for doing live patching
> > > > > of kernel and kernel module functions.
> > > > >
> > > > > It represents the greatest common functionality set between kpatch and
> > > > > kgraft and can accept patches built using either method.
> > > > >
> > > > > This first version does not implement any consistency mechanism that
> > > > > ensures that old and new code do not run together. In practice, ~90% of
> > > > > CVEs are safe to apply in this way, since they simply add a conditional
> > > > > check. However, any function change that can not execute safely with
> > > > > the old version of the function can _not_ be safely applied in this
> > > > > version.
> > > >
> > > > Thanks for updating :)
> > > >
> > > > BTW, this still have some LPC_XXX macros, those should be KLP_XXX.
> > > >
> > > > Also, as I sent a series of IPMODIFY patches (just now), could you consider
> > > > to use the flag? :)
> > >
> > > Hmm, it would cause problems with the current LivePatch, kGraft
> > > implementation, and probably also with kPatch. They register more
> > > than one ftrace handler with IPMODIFY at the same time.
> >
> > But are they hooked to the same functions? That would be a big problem,
> > and should be avoided. Why would you want two ftrace_ops returning two
> > different IPs for one function? That causes a paradox. Why would you
> > want that?
>
> We do not mind which one wins. The two functions are registered only
> temporarily. It is guaranteed that they both set the same regs->ip
> address during this time frame.

It is not guaranteed from ftrace's standpoint. What happens if we have
a kprobe handler that modifies it to point somewhere else? Changing the ip
address may not be a kpatch/kGraft-only privilege.

>
>
> > > They pass pointer to the func-related structure via the "private" field
> > > in struct ftrace_ops. The structure provides information where the old
> > > and new code is.
> > >
> > > They need to update the structure when new patch for the same functions
> > > appears. It is done by registering a new ftrace function related to the
> > > new patch and unregistering an old ftrace function from the old patch.
> > >
> > > We would need to maintain some patch-independent list of ftrace_ops
> > > and the related private fields to avoid the double registration.
> >
> > Yes, that would make sense.
> >
> > You could create one ftrace_ops per function. That would be ideal
> > because then you get to take advantage of having your own trampoline
> > per function and no need to worry about what function needs to go with
> > another function.
>
> It adds some complexity, but I think that we will need to go this way.
> The check for IPMODIFY conflicts makes sense. It helps to avoid any
> misuse.

Right.

>
> My main intention was to point out the problem and that we would need
> to handle it somehow :)

Great! :)

-- Steve

2014-11-25 19:29:30

by Jiri Kosina

[permalink] [raw]
Subject: Re: [PATCHv3 2/3] kernel: add support for live patching

On Tue, 25 Nov 2014, Steven Rostedt wrote:

> It is not guaranteed from ftrace's standpoint. What happens if we have
> a kprobe handler that modifies it to point somewhere else? Changing the ip
> address may not be a kpatch/kGraft-only privilege.

This brings me back to the RFC patch I sent back in October ... what
we really want to do is to at least warn about situations when we are
going to redirect code flow (through IPMODIFY) for a function that has a
kprobe installed anywhere inside it. Otherwise the probe will silently
vanish (there is no way to migrate it to the new function automatically),
which might be very confusing for users (consider systemtap, for example).

I'll resurrect my patch if no one beats me to it. It should go in
together with the live patching framework, I believe.
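
One possible shape for the check described above, purely as a sketch (klp_func_has_kprobe() is a made-up helper; it only relies on the existing kallsyms_lookup_size_offset() and get_kprobe() interfaces):

#include <linux/kallsyms.h>
#include <linux/kprobes.h>
#include <linux/rcupdate.h>

/* hypothetical helper: is any kprobe installed inside old_addr's function? */
static bool klp_func_has_kprobe(unsigned long old_addr)
{
	unsigned long size, offset, addr, start;
	bool found = false;

	if (!kallsyms_lookup_size_offset(old_addr, &size, &offset))
		return false;

	start = old_addr - offset;

	rcu_read_lock();
	for (addr = start; addr < start + size; addr++) {
		if (get_kprobe((void *)addr)) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();

	return found;
}

klp_enable_func() could then warn (or fail with something like -EBUSY) when this returns true, instead of silently hijacking a probed function.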

Thanks,

--
Jiri Kosina
SUSE Labs

Subject: Re: Re: [PATCHv3 2/3] kernel: add support for live patching

(2014/11/26 4:29), Jiri Kosina wrote:
> On Tue, 25 Nov 2014, Steven Rostedt wrote:
>
>> It is not guaranteed from ftrace's standpoint. What happens if we have
>> a kprobe handler that modifies it to point somewhere else? Changing the ip
>> address may not be a kpatch/kGraft-only privilege.
>
> This brings me back to the RFC patch I sent back in October ... what
> we really want to do is to at least warn about situations when we are
> going to redirect code flow (through IPMODIFY) for a function that has a
> kprobe installed anywhere inside it.

Actually, in my plan, normal kprobes/kretprobes don't set the IPMODIFY
flag because they don't change the IP. Instead, you can even use
debugfs/kprobes/list to check whether the function is probed or not.
Or, I think we can provide an atomic conflict-checking interface which
will iterate over the probes while holding the kprobe list lock.

> Otherwise the probe will silently
> vanish (there is no way to migrate it to the new function
> automatically), which might be very confusing for users (consider systemtap,
> for example).

Yeah, I think we can add a --force option (or sysctl) to patch functions
regardless of whether they are probed (for emergency vulnerability fixes).

Thank you,

>
> I'll resurrect my patch if no one beats me to it. It should go in
> together with the live patching framework, I believe.
>
> Thanks,
>


--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]