2022-07-18 00:15:51

by Song Liu

[permalink] [raw]
Subject: [PATCH v3 bpf-next 0/4] ftrace: host klp and bpf trampoline together

Changes v2 => v3:
1. Major rewrite after discussions with Steven Rostedt. [1]
2. Remove SHARE_IPMODIFY flag from ftrace code. Instead use the callback
function to communicate this information. (Steven)
3. Add cleanup_direct_functions_after_ipmodify() to clear SHARE_IPMODIFY
on the DIRECT ops when the IPMODIFY ops is removed.

Changes v1 => v2:
1. Fix build errors for different config. (kernel test robot)

Kernel Live Patch (livepatch, or klp) and bpf trampoline are important
features for modern systems. This set allows the two to work on the same
kernel function at the same time.

Live patch uses ftrace with IPMODIFY, while bpf trampoline uses direct
ftrace. The existing policy does not allow the two to attach to the same
kernel function. This is changed by fine-tuning the ftrace IPMODIFY policy,
which allows one IPMODIFY ftrace_ops and one DIRECT ftrace_ops on the same
kernel function at the same time. Please see patches 2 and 4 for more details.

Note that one of the constraints here is to let the bpf trampoline use a
direct call when it is not working on the same function as live patch. This
is achieved by allowing ftrace code to ask the bpf trampoline to make changes.

[1] https://lore.kernel.org/all/[email protected]/

Jiri Olsa (1):
bpf, x64: Allow to use caller address from stack

Song Liu (3):
ftrace: add modify_ftrace_direct_multi_nolock
ftrace: allow IPMODIFY and DIRECT ops on the same function
bpf: support bpf_trampoline on functions with IPMODIFY (e.g.
livepatch)

arch/x86/net/bpf_jit_comp.c | 13 +-
include/linux/bpf.h | 13 ++
include/linux/ftrace.h | 43 ++++++
kernel/bpf/trampoline.c | 158 +++++++++++++++++---
kernel/trace/ftrace.c | 288 +++++++++++++++++++++++++++++++-----
5 files changed, 455 insertions(+), 60 deletions(-)

--
2.30.2


2022-07-18 00:16:00

by Song Liu

[permalink] [raw]
Subject: [PATCH v3 bpf-next 1/4] ftrace: add modify_ftrace_direct_multi_nolock

This is similar to modify_ftrace_direct_multi, but does not acquire
direct_mutex. This is useful when direct_mutex is already locked by the
user.

Signed-off-by: Song Liu <[email protected]>
---
include/linux/ftrace.h | 5 +++
kernel/trace/ftrace.c | 85 ++++++++++++++++++++++++++++++------------
2 files changed, 67 insertions(+), 23 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 979f6bfa2c25..acb35243ce5d 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -340,6 +340,7 @@ unsigned long ftrace_find_rec_direct(unsigned long ip);
int register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
int unregister_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
+int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr);

#else
struct ftrace_ops;
@@ -384,6 +385,10 @@ static inline int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned lo
{
return -ENODEV;
}
+static inline int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
+{
+ return -ENODEV;
+}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */

#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 601ccf1b2f09..0c15ec997c13 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5691,22 +5691,8 @@ int unregister_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
}
EXPORT_SYMBOL_GPL(unregister_ftrace_direct_multi);

-/**
- * modify_ftrace_direct_multi - Modify an existing direct 'multi' call
- * to call something else
- * @ops: The address of the struct ftrace_ops object
- * @addr: The address of the new trampoline to call at @ops functions
- *
- * This is used to unregister currently registered direct caller and
- * register new one @addr on functions registered in @ops object.
- *
- * Note there's window between ftrace_shutdown and ftrace_startup calls
- * where there will be no callbacks called.
- *
- * Returns: zero on success. Non zero on error, which includes:
- * -EINVAL - The @ops object was not properly registered.
- */
-int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
+static int
+__modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
{
struct ftrace_hash *hash;
struct ftrace_func_entry *entry, *iter;
@@ -5717,12 +5703,8 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
int i, size;
int err;

- if (check_direct_multi(ops))
+ if (WARN_ON_ONCE(!mutex_is_locked(&direct_mutex)))
return -EINVAL;
- if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
- return -EINVAL;
-
- mutex_lock(&direct_mutex);

/* Enable the tmp_ops to have the same functions as the direct ops */
ftrace_ops_init(&tmp_ops);
@@ -5730,7 +5712,7 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)

err = register_ftrace_function(&tmp_ops);
if (err)
- goto out_direct;
+ return err;

/*
* Now the ftrace_ops_list_func() is called to do the direct callers.
@@ -5754,7 +5736,64 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
/* Removing the tmp_ops will add the updated direct callers to the functions */
unregister_ftrace_function(&tmp_ops);

- out_direct:
+ return err;
+}
+
+/**
+ * modify_ftrace_direct_multi_nolock - Modify an existing direct 'multi' call
+ * to call something else
+ * @ops: The address of the struct ftrace_ops object
+ * @addr: The address of the new trampoline to call at @ops functions
+ *
+ * This is used to unregister currently registered direct caller and
+ * register new one @addr on functions registered in @ops object.
+ *
+ * Note there's window between ftrace_shutdown and ftrace_startup calls
+ * where there will be no callbacks called.
+ *
+ * Caller should already have direct_mutex locked, so we don't lock
+ * direct_mutex here.
+ *
+ * Returns: zero on success. Non zero on error, which includes:
+ * -EINVAL - The @ops object was not properly registered.
+ */
+int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
+{
+ if (check_direct_multi(ops))
+ return -EINVAL;
+ if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
+ return -EINVAL;
+
+ return __modify_ftrace_direct_multi(ops, addr);
+}
+EXPORT_SYMBOL_GPL(modify_ftrace_direct_multi_nolock);
+
+/**
+ * modify_ftrace_direct_multi - Modify an existing direct 'multi' call
+ * to call something else
+ * @ops: The address of the struct ftrace_ops object
+ * @addr: The address of the new trampoline to call at @ops functions
+ *
+ * This is used to unregister currently registered direct caller and
+ * register new one @addr on functions registered in @ops object.
+ *
+ * Note there's window between ftrace_shutdown and ftrace_startup calls
+ * where there will be no callbacks called.
+ *
+ * Returns: zero on success. Non zero on error, which includes:
+ * -EINVAL - The @ops object was not properly registered.
+ */
+int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
+{
+ int err;
+
+ if (check_direct_multi(ops))
+ return -EINVAL;
+ if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
+ return -EINVAL;
+
+ mutex_lock(&direct_mutex);
+ err = __modify_ftrace_direct_multi(ops, addr);
mutex_unlock(&direct_mutex);
return err;
}
--
2.30.2

2022-07-18 00:24:58

by Song Liu

[permalink] [raw]
Subject: [PATCH v3 bpf-next 2/4] ftrace: allow IPMODIFY and DIRECT ops on the same function

IPMODIFY (livepatch) and DIRECT (bpf trampoline) ops are both important
users of ftrace. It is necessary to allow them to work on the same function
at the same time.

First, DIRECT ops no longer specify IPMODIFY flag. Instead, DIRECT flag is
handled together with IPMODIFY flag in __ftrace_hash_update_ipmodify().

Then, a callback function, ops_func, is added to ftrace_ops. It is used
by ftrace core code to determine whether a DIRECT ops can share a function
with an IPMODIFY ops. To share with an IPMODIFY ops, the DIRECT ops needs to
implement the callback function and adjust the direct trampoline accordingly.

If the DIRECT ops is attached before the IPMODIFY ops, ftrace core code calls
ENABLE_SHARE_IPMODIFY_PEER on the DIRECT ops before registering the
IPMODIFY ops.

If the IPMODIFY ops is attached before the DIRECT ops, ftrace core code calls
ENABLE_SHARE_IPMODIFY_SELF in __ftrace_hash_update_ipmodify(). The owner of
the DIRECT ops may return 0 if the DIRECT trampoline can share with IPMODIFY,
or an error code otherwise. The error code is propagated to
register_ftrace_direct_multi() so that the owner of the DIRECT trampoline can
handle it properly.

For more details, please refer to the comment before enum ftrace_ops_cmd.

Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Song Liu <[email protected]>
---
include/linux/ftrace.h | 38 ++++++++
kernel/trace/ftrace.c | 205 +++++++++++++++++++++++++++++++++++++----
2 files changed, 227 insertions(+), 16 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index acb35243ce5d..306bf08acda6 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -208,6 +208,43 @@ enum {
FTRACE_OPS_FL_DIRECT = BIT(17),
};

+/*
+ * FTRACE_OPS_CMD_* commands allow the ftrace core logic to request changes
+ * to a ftrace_ops. Note, the requests may fail.
+ *
+ * ENABLE_SHARE_IPMODIFY_SELF - enable a DIRECT ops to work on the same
+ * function as an ops with IPMODIFY. Called
+ * when the DIRECT ops is being registered.
+ * This is called with both direct_mutex and
+ * ftrace_lock locked.
+ *
+ * ENABLE_SHARE_IPMODIFY_PEER - enable a DIRECT ops to work on the same
+ * function as an ops with IPMODIFY. Called
+ * when the other ops (the one with IPMODIFY)
+ * is being registered.
+ * This is called with direct_mutex locked.
+ *
+ * DISABLE_SHARE_IPMODIFY_PEER - disable a DIRECT ops from working on the same
+ * function as an ops with IPMODIFY. Called
+ * when the other ops (the one with IPMODIFY)
+ * is being unregistered.
+ * This is called with direct_mutex locked.
+ */
+enum ftrace_ops_cmd {
+ FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF,
+ FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER,
+ FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER,
+};
+
+/*
+ * For most ftrace_ops_cmd,
+ * Returns:
+ * 0 - Success.
+ * -EBUSY - The operation cannot proceed.
+ * -EAGAIN - The operation cannot proceed temporarily.
+ */
+typedef int (*ftrace_ops_func_t)(struct ftrace_ops *op, enum ftrace_ops_cmd cmd);
+
#ifdef CONFIG_DYNAMIC_FTRACE
/* The hash used to know what functions callbacks trace */
struct ftrace_ops_hash {
@@ -250,6 +287,7 @@ struct ftrace_ops {
unsigned long trampoline;
unsigned long trampoline_size;
struct list_head list;
+ ftrace_ops_func_t ops_func;
#endif
};

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 0c15ec997c13..aa39c3d8c565 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1861,6 +1861,8 @@ static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops,
ftrace_hash_rec_update_modify(ops, filter_hash, 1);
}

+static bool ops_references_ip(struct ftrace_ops *ops, unsigned long ip);
+
/*
* Try to update IPMODIFY flag on each ftrace_rec. Return 0 if it is OK
* or no-needed to update, -EBUSY if it detects a conflict of the flag
@@ -1869,6 +1871,13 @@ static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops,
* - If the hash is NULL, it hits all recs (if IPMODIFY is set, this is rejected)
* - If the hash is EMPTY_HASH, it hits nothing
* - Anything else hits the recs which match the hash entries.
+ *
+ * A DIRECT ops does not have the IPMODIFY flag, but we still need to check it
+ * against functions with FTRACE_FL_IPMODIFY. If there is any overlap, call
+ * ops_func(SHARE_IPMODIFY_SELF) to make sure current ops can share with
+ * IPMODIFY. If ops_func(SHARE_IPMODIFY_SELF) returns non-zero, propagate
+ * the return value to the caller and eventually to the owner of the DIRECT
+ * ops.
*/
static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
struct ftrace_hash *old_hash,
@@ -1877,17 +1886,23 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
struct ftrace_page *pg;
struct dyn_ftrace *rec, *end = NULL;
int in_old, in_new;
+ bool is_ipmodify, is_direct;

/* Only update if the ops has been registered */
if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
return 0;

- if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
+ is_ipmodify = ops->flags & FTRACE_OPS_FL_IPMODIFY;
+ is_direct = ops->flags & FTRACE_OPS_FL_DIRECT;
+
+ /* neither IPMODIFY nor DIRECT, skip */
+ if (!is_ipmodify && !is_direct)
return 0;

/*
- * Since the IPMODIFY is a very address sensitive action, we do not
- * allow ftrace_ops to set all functions to new hash.
+ * Since the IPMODIFY and DIRECT are very address sensitive
+ * actions, we do not allow ftrace_ops to set all functions to new
+ * hash.
*/
if (!new_hash || !old_hash)
return -EINVAL;
@@ -1905,12 +1920,30 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
continue;

if (in_new) {
- /* New entries must ensure no others are using it */
- if (rec->flags & FTRACE_FL_IPMODIFY)
- goto rollback;
- rec->flags |= FTRACE_FL_IPMODIFY;
- } else /* Removed entry */
+ if (rec->flags & FTRACE_FL_IPMODIFY) {
+ int ret;
+
+ /* Cannot have two ipmodify on same rec */
+ if (is_ipmodify)
+ goto rollback;
+
+ /*
+ * Another ops with IPMODIFY is already
+ * attached. We are now attaching a direct
+ * ops. Run SHARE_IPMODIFY_SELF, to check
+ * whether sharing is supported.
+ */
+ if (!ops->ops_func)
+ return -EBUSY;
+ ret = ops->ops_func(ops, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF);
+ if (ret)
+ return ret;
+ } else if (is_ipmodify) {
+ rec->flags |= FTRACE_FL_IPMODIFY;
+ }
+ } else if (is_ipmodify) {
rec->flags &= ~FTRACE_FL_IPMODIFY;
+ }
} while_for_each_ftrace_rec();

return 0;
@@ -2455,7 +2488,7 @@ static void call_direct_funcs(unsigned long ip, unsigned long pip,
struct ftrace_ops direct_ops = {
.func = call_direct_funcs,
.flags = FTRACE_OPS_FL_IPMODIFY
- | FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS
+ | FTRACE_OPS_FL_SAVE_REGS
| FTRACE_OPS_FL_PERMANENT,
/*
* By declaring the main trampoline as this trampoline
@@ -3072,14 +3105,14 @@ static inline int ops_traces_mod(struct ftrace_ops *ops)
}

/*
- * Check if the current ops references the record.
+ * Check if the current ops references the given ip.
*
* If the ops traces all functions, then it was already accounted for.
* If the ops does not trace the current record function, skip it.
* If the ops ignores the function via notrace filter, skip it.
*/
-static inline bool
-ops_references_rec(struct ftrace_ops *ops, struct dyn_ftrace *rec)
+static bool
+ops_references_ip(struct ftrace_ops *ops, unsigned long ip)
{
/* If ops isn't enabled, ignore it */
if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
@@ -3091,16 +3124,29 @@ ops_references_rec(struct ftrace_ops *ops, struct dyn_ftrace *rec)

/* The function must be in the filter */
if (!ftrace_hash_empty(ops->func_hash->filter_hash) &&
- !__ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
+ !__ftrace_lookup_ip(ops->func_hash->filter_hash, ip))
return false;

/* If in notrace hash, we ignore it too */
- if (ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip))
+ if (ftrace_lookup_ip(ops->func_hash->notrace_hash, ip))
return false;

return true;
}

+/*
+ * Check if the current ops references the record.
+ *
+ * If the ops traces all functions, then it was already accounted for.
+ * If the ops does not trace the current record function, skip it.
+ * If the ops ignores the function via notrace filter, skip it.
+ */
+static bool
+ops_references_rec(struct ftrace_ops *ops, struct dyn_ftrace *rec)
+{
+ return ops_references_ip(ops, rec->ip);
+}
+
static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)
{
bool init_nop = ftrace_need_init_nop();
@@ -5545,8 +5591,7 @@ int modify_ftrace_direct(unsigned long ip,
}
EXPORT_SYMBOL_GPL(modify_ftrace_direct);

-#define MULTI_FLAGS (FTRACE_OPS_FL_IPMODIFY | FTRACE_OPS_FL_DIRECT | \
- FTRACE_OPS_FL_SAVE_REGS)
+#define MULTI_FLAGS (FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS)

static int check_direct_multi(struct ftrace_ops *ops)
{
@@ -8004,6 +8049,80 @@ int ftrace_is_dead(void)
return ftrace_disabled;
}

+/*
+ * When registering ftrace_ops with IPMODIFY, it is necessary to make sure
+ * it doesn't conflict with any direct ftrace_ops. If there is existing
+ * direct ftrace_ops on a kernel function being patched, call
+ * FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER on it to enable sharing.
+ *
+ * @ops: ftrace_ops being registered.
+ *
+ * Returns:
+ * 0 - @ops does not have IPMODIFY or @ops itself is DIRECT, no
+ * change needed;
+ * 1 - @ops has IPMODIFY, hold direct_mutex;
+ * -EBUSY - currently registered DIRECT ftrace_ops cannot share the
+ * same function with IPMODIFY, abort the register.
+ * -EAGAIN - cannot make changes to currently registered DIRECT
+ * ftrace_ops due to rare race conditions. Should retry
+ * later. This is needed to avoid potential deadlocks
+ * on the DIRECT ftrace_ops side.
+ */
+static int prepare_direct_functions_for_ipmodify(struct ftrace_ops *ops)
+ __acquires(&direct_mutex)
+{
+ struct ftrace_func_entry *entry;
+ struct ftrace_hash *hash;
+ struct ftrace_ops *op;
+ int size, i, ret;
+
+ if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
+ return 0;
+
+ mutex_lock(&direct_mutex);
+
+ hash = ops->func_hash->filter_hash;
+ size = 1 << hash->size_bits;
+ for (i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+ unsigned long ip = entry->ip;
+ bool found_op = false;
+
+ mutex_lock(&ftrace_lock);
+ do_for_each_ftrace_op(op, ftrace_ops_list) {
+ if (!(op->flags & FTRACE_OPS_FL_DIRECT))
+ continue;
+ if (ops_references_ip(op, ip)) {
+ found_op = true;
+ break;
+ }
+ } while_for_each_ftrace_op(op);
+ mutex_unlock(&ftrace_lock);
+
+ if (found_op) {
+ if (!op->ops_func) {
+ ret = -EBUSY;
+ goto err_out;
+ }
+ ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
+ if (ret)
+ goto err_out;
+ }
+ }
+ }
+
+ /*
+ * Didn't find any overlap with direct ftrace_ops, or the direct
+ * function can share with ipmodify. Hold direct_mutex to make sure
+ * this doesn't change until we are done.
+ */
+ return 1;
+
+err_out:
+ mutex_unlock(&direct_mutex);
+ return ret;
+}
+
/**
* register_ftrace_function - register a function for profiling
* @ops: ops structure that holds the function for profiling.
@@ -8016,21 +8135,74 @@ int ftrace_is_dead(void)
* recursive loop.
*/
int register_ftrace_function(struct ftrace_ops *ops)
+ __releases(&direct_mutex)
{
+ bool direct_mutex_locked = false;
int ret;

ftrace_ops_init(ops);

+ ret = prepare_direct_functions_for_ipmodify(ops);
+ if (ret < 0)
+ return ret;
+ else if (ret == 1)
+ direct_mutex_locked = true;
+
mutex_lock(&ftrace_lock);

ret = ftrace_startup(ops, 0);

mutex_unlock(&ftrace_lock);

+ if (direct_mutex_locked)
+ mutex_unlock(&direct_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(register_ftrace_function);

+/*
+ * Similar to prepare_direct_functions_for_ipmodify, clean up after ops
+ * with IPMODIFY is unregistered. The cleanup is optional for most DIRECT
+ * ops.
+ */
+static void cleanup_direct_functions_after_ipmodify(struct ftrace_ops *ops)
+{
+ struct ftrace_func_entry *entry;
+ struct ftrace_hash *hash;
+ struct ftrace_ops *op;
+ int size, i;
+
+ if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
+ return;
+
+ mutex_lock(&direct_mutex);
+
+ hash = ops->func_hash->filter_hash;
+ size = 1 << hash->size_bits;
+ for (i = 0; i < size; i++) {
+ hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+ unsigned long ip = entry->ip;
+ bool found_op = false;
+
+ mutex_lock(&ftrace_lock);
+ do_for_each_ftrace_op(op, ftrace_ops_list) {
+ if (!(op->flags & FTRACE_OPS_FL_DIRECT))
+ continue;
+ if (ops_references_ip(op, ip)) {
+ found_op = true;
+ break;
+ }
+ } while_for_each_ftrace_op(op);
+ mutex_unlock(&ftrace_lock);
+
+ /* The cleanup is optional, ignore any errors */
+ if (found_op && op->ops_func)
+ op->ops_func(op, FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER);
+ }
+ }
+ mutex_unlock(&direct_mutex);
+}
+
/**
* unregister_ftrace_function - unregister a function for profiling.
* @ops: ops structure that holds the function to unregister
@@ -8045,6 +8217,7 @@ int unregister_ftrace_function(struct ftrace_ops *ops)
ret = ftrace_shutdown(ops, 0);
mutex_unlock(&ftrace_lock);

+ cleanup_direct_functions_after_ipmodify(ops);
return ret;
}
EXPORT_SYMBOL_GPL(unregister_ftrace_function);
--
2.30.2

2022-07-18 00:27:40

by Song Liu

[permalink] [raw]
Subject: [PATCH v3 bpf-next 3/4] bpf, x64: Allow to use caller address from stack

From: Jiri Olsa <[email protected]>

Currently we call the original function by using the absolute address
given at the JIT generation. That's not usable when having trampoline
attached to multiple functions, or the target address changes dynamically
(in case of live patch). In such cases we need to take the return address
from the stack.

Adding support to retrieve the original function address from the stack
by adding new BPF_TRAMP_F_ORIG_STACK flag for arch_prepare_bpf_trampoline
function.

Basically we take the return address of the 'fentry' call:

function + 0: call fentry # stores 'function + 5' address on stack
function + 5: ...

The 'function + 5' address will be used as the address for the
original function to call.

Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Song Liu <[email protected]>
---
arch/x86/net/bpf_jit_comp.c | 13 +++++++++----
include/linux/bpf.h | 5 +++++
2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 54c7f46c453f..e1b0c5ed0b7c 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2119,10 +2119,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
if (flags & BPF_TRAMP_F_CALL_ORIG) {
restore_regs(m, &prog, nr_args, regs_off);

- /* call original function */
- if (emit_call(&prog, orig_call, prog)) {
- ret = -EINVAL;
- goto cleanup;
+ if (flags & BPF_TRAMP_F_ORIG_STACK) {
+ emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
+ EMIT2(0xff, 0xd0); /* call *rax */
+ } else {
+ /* call original function */
+ if (emit_call(&prog, orig_call, prog)) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
}
/* remember return value in a stack for bpf prog to access */
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a5bf00649995..7496842a4671 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -751,6 +751,11 @@ struct btf_func_model {
/* Return the return value of fentry prog. Only used by bpf_struct_ops. */
#define BPF_TRAMP_F_RET_FENTRY_RET BIT(4)

+/* Get original function from stack instead of from provided direct address.
+ * Makes sense for trampolines with fexit or fmod_ret programs.
+ */
+#define BPF_TRAMP_F_ORIG_STACK BIT(5)
+
/* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
* bytes on x86.
*/
--
2.30.2

2022-07-18 00:34:01

by Song Liu

[permalink] [raw]
Subject: [PATCH v3 bpf-next 4/4] bpf: support bpf_trampoline on functions with IPMODIFY (e.g. livepatch)

When tracing a function with IPMODIFY ftrace_ops (livepatch), the bpf
trampoline must follow the instruction pointer saved on stack. This needs
extra handling for bpf trampolines with BPF_TRAMP_F_CALL_ORIG flag.

Implement bpf_tramp_ftrace_ops_func and use it for the ftrace_ops used
by BPF trampoline. This enables tracing functions with livepatch.

This also requires moving bpf trampoline to the *_ftrace_direct_multi APIs.

Link: https://lore.kernel.org/all/[email protected]/
Signed-off-by: Song Liu <[email protected]>
---
include/linux/bpf.h | 8 ++
kernel/bpf/trampoline.c | 158 +++++++++++++++++++++++++++++++++++-----
2 files changed, 149 insertions(+), 17 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 7496842a4671..f35c59e0b742 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -47,6 +47,7 @@ struct kobject;
struct mem_cgroup;
struct module;
struct bpf_func_state;
+struct ftrace_ops;

extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
@@ -756,6 +757,11 @@ struct btf_func_model {
*/
#define BPF_TRAMP_F_ORIG_STACK BIT(5)

+/* This trampoline is on a function with another ftrace_ops with IPMODIFY,
+ * e.g., a live patch. This flag is set and cleared by ftrace callbacks.
+ */
+#define BPF_TRAMP_F_SHARE_IPMODIFY BIT(6)
+
/* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
* bytes on x86.
*/
@@ -838,9 +844,11 @@ struct bpf_tramp_image {
struct bpf_trampoline {
/* hlist for trampoline_table */
struct hlist_node hlist;
+ struct ftrace_ops *fops;
/* serializes access to fields of this trampoline */
struct mutex mutex;
refcount_t refcnt;
+ u32 flags;
u64 key;
struct {
struct btf_func_model model;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index fd69812412ca..fa901aef7930 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -13,6 +13,7 @@
#include <linux/static_call.h>
#include <linux/bpf_verifier.h>
#include <linux/bpf_lsm.h>
+#include <linux/delay.h>

/* dummy _ops. The verifier will operate on target program's ops. */
const struct bpf_verifier_ops bpf_extension_verifier_ops = {
@@ -29,6 +30,81 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
/* serializes access to trampoline_table */
static DEFINE_MUTEX(trampoline_mutex);

+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
+
+static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, enum ftrace_ops_cmd cmd)
+{
+ struct bpf_trampoline *tr = ops->private;
+ int ret = 0;
+
+ if (cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF) {
+ /* This is called inside register_ftrace_direct_multi(), so
+ * tr->mutex is already locked.
+ */
+ WARN_ON_ONCE(!mutex_is_locked(&tr->mutex));
+
+ /* Instead of updating the trampoline here, we propagate
+ * -EAGAIN to register_ftrace_direct_multi(). Then we can
+ * retry register_ftrace_direct_multi() after updating the
+ * trampoline.
+ */
+ if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
+ !(tr->flags & BPF_TRAMP_F_ORIG_STACK)) {
+ if (WARN_ON_ONCE(tr->flags & BPF_TRAMP_F_SHARE_IPMODIFY))
+ return -EBUSY;
+
+ tr->flags |= BPF_TRAMP_F_SHARE_IPMODIFY;
+ return -EAGAIN;
+ }
+
+ return 0;
+ }
+
+ /* The normal locking order is
+ * tr->mutex => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
+ *
+ * The following two commands are called from
+ *
+ * prepare_direct_functions_for_ipmodify
+ * cleanup_direct_functions_after_ipmodify
+ *
+ * In both cases, direct_mutex is already locked. Use
+ * mutex_trylock(&tr->mutex) to avoid deadlock in race condition
+ * (something else is making changes to this same trampoline).
+ */
+ if (!mutex_trylock(&tr->mutex)) {
+ /* sleep 1 ms to make sure whatever is holding tr->mutex makes
+ * some progress.
+ */
+ msleep(1);
+ return -EAGAIN;
+ }
+
+ switch (cmd) {
+ case FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER:
+ tr->flags |= BPF_TRAMP_F_SHARE_IPMODIFY;
+
+ if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
+ !(tr->flags & BPF_TRAMP_F_ORIG_STACK))
+ ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
+ break;
+ case FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER:
+ tr->flags &= ~BPF_TRAMP_F_SHARE_IPMODIFY;
+
+ if (tr->flags & BPF_TRAMP_F_ORIG_STACK)
+ ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ };
+
+ mutex_unlock(&tr->mutex);
+ return ret;
+}
+#endif
+
bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
{
enum bpf_attach_type eatype = prog->expected_attach_type;
@@ -89,6 +165,16 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
tr = kzalloc(sizeof(*tr), GFP_KERNEL);
if (!tr)
goto out;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ tr->fops = kzalloc(sizeof(struct ftrace_ops), GFP_KERNEL);
+ if (!tr->fops) {
+ kfree(tr);
+ tr = NULL;
+ goto out;
+ }
+ tr->fops->private = tr;
+ tr->fops->ops_func = bpf_tramp_ftrace_ops_func;
+#endif

tr->key = key;
INIT_HLIST_NODE(&tr->hlist);
@@ -128,7 +214,7 @@ static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
int ret;

if (tr->func.ftrace_managed)
- ret = unregister_ftrace_direct((long)ip, (long)old_addr);
+ ret = unregister_ftrace_direct_multi(tr->fops, (long)old_addr);
else
ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL);

@@ -137,15 +223,20 @@ static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
return ret;
}

-static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_addr)
+static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_addr,
+ bool lock_direct_mutex)
{
void *ip = tr->func.addr;
int ret;

- if (tr->func.ftrace_managed)
- ret = modify_ftrace_direct((long)ip, (long)old_addr, (long)new_addr);
- else
+ if (tr->func.ftrace_managed) {
+ if (lock_direct_mutex)
+ ret = modify_ftrace_direct_multi(tr->fops, (long)new_addr);
+ else
+ ret = modify_ftrace_direct_multi_nolock(tr->fops, (long)new_addr);
+ } else {
ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, new_addr);
+ }
return ret;
}

@@ -163,10 +254,12 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
if (bpf_trampoline_module_get(tr))
return -ENOENT;

- if (tr->func.ftrace_managed)
- ret = register_ftrace_direct((long)ip, (long)new_addr);
- else
+ if (tr->func.ftrace_managed) {
+ ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 0);
+ ret = register_ftrace_direct_multi(tr->fops, (long)new_addr);
+ } else {
ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr);
+ }

if (ret)
bpf_trampoline_module_put(tr);
@@ -332,11 +425,11 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
return ERR_PTR(err);
}

-static int bpf_trampoline_update(struct bpf_trampoline *tr)
+static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex)
{
struct bpf_tramp_image *im;
struct bpf_tramp_links *tlinks;
- u32 flags = BPF_TRAMP_F_RESTORE_REGS;
+ u32 orig_flags = tr->flags;
bool ip_arg = false;
int err, total;

@@ -358,18 +451,31 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
goto out;
}

+ /* clear all bits except SHARE_IPMODIFY */
+ tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
+
if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
- tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links)
+ tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
/* NOTE: BPF_TRAMP_F_RESTORE_REGS and BPF_TRAMP_F_SKIP_FRAME
* should not be set together.
*/
- flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
+ tr->flags |= BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
+ } else {
+ tr->flags |= BPF_TRAMP_F_RESTORE_REGS;
+ }

if (ip_arg)
- flags |= BPF_TRAMP_F_IP_ARG;
+ tr->flags |= BPF_TRAMP_F_IP_ARG;
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+again:
+ if ((tr->flags & BPF_TRAMP_F_SHARE_IPMODIFY) &&
+ (tr->flags & BPF_TRAMP_F_CALL_ORIG))
+ tr->flags |= BPF_TRAMP_F_ORIG_STACK;
+#endif

err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
- &tr->func.model, flags, tlinks,
+ &tr->func.model, tr->flags, tlinks,
tr->func.addr);
if (err < 0)
goto out;
@@ -378,17 +484,34 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
WARN_ON(!tr->cur_image && tr->selector);
if (tr->cur_image)
/* progs already running at this address */
- err = modify_fentry(tr, tr->cur_image->image, im->image);
+ err = modify_fentry(tr, tr->cur_image->image, im->image, lock_direct_mutex);
else
/* first time registering */
err = register_fentry(tr, im->image);
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ if (err == -EAGAIN) {
+ /* -EAGAIN from bpf_tramp_ftrace_ops_func. Now
+ * BPF_TRAMP_F_SHARE_IPMODIFY is set, we can generate the
+ * trampoline again, and retry register.
+ */
+ /* reset fops->func and fops->trampoline for re-register */
+ tr->fops->func = NULL;
+ tr->fops->trampoline = 0;
+ goto again;
+ }
+#endif
if (err)
goto out;
+
if (tr->cur_image)
bpf_tramp_image_put(tr->cur_image);
tr->cur_image = im;
tr->selector++;
out:
+ /* If any error happens, restore previous flags */
+ if (err)
+ tr->flags = orig_flags;
kfree(tlinks);
return err;
}
@@ -454,7 +577,7 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr

hlist_add_head(&link->tramp_hlist, &tr->progs_hlist[kind]);
tr->progs_cnt[kind]++;
- err = bpf_trampoline_update(tr);
+ err = bpf_trampoline_update(tr, true /* lock_direct_mutex */);
if (err) {
hlist_del_init(&link->tramp_hlist);
tr->progs_cnt[kind]--;
@@ -487,7 +610,7 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_
}
hlist_del_init(&link->tramp_hlist);
tr->progs_cnt[kind]--;
- return bpf_trampoline_update(tr);
+ return bpf_trampoline_update(tr, true /* lock_direct_mutex */);
}

/* bpf_trampoline_unlink_prog() should never fail. */
@@ -715,6 +838,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
* multiple rcu callbacks.
*/
hlist_del(&tr->hlist);
+ kfree(tr->fops);
kfree(tr);
out:
mutex_unlock(&trampoline_mutex);
--
2.30.2

2022-07-18 03:18:34

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 2/4] ftrace: allow IPMODIFY and DIRECT ops on the same function

Hi Song,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url: https://github.com/intel-lab-lkp/linux/commits/Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: x86_64-randconfig-a006 (https://download.01.org/0day-ci/archive/20220718/[email protected]/config)
compiler: gcc-11 (Debian 11.3.0-3) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/1535f287d288f9b7540ec50f56da1fe437ac6512
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
git checkout 1535f287d288f9b7540ec50f56da1fe437ac6512
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

In file included from include/linux/rhashtable-types.h:14,
from include/linux/ipc.h:7,
from include/uapi/linux/sem.h:5,
from include/linux/sem.h:5,
from include/linux/sched.h:15,
from include/linux/ratelimit.h:6,
from include/linux/dev_printk.h:16,
from include/linux/device.h:15,
from include/linux/node.h:18,
from include/linux/cpu.h:17,
from include/linux/stop_machine.h:5,
from kernel/trace/ftrace.c:17:
kernel/trace/ftrace.c: In function 'prepare_direct_functions_for_ipmodify':
>> kernel/trace/ftrace.c:8082:21: error: 'direct_mutex' undeclared (first use in this function); did you mean 'event_mutex'?
8082 | mutex_lock(&direct_mutex);
| ^~~~~~~~~~~~
include/linux/mutex.h:187:44: note: in definition of macro 'mutex_lock'
187 | #define mutex_lock(lock) mutex_lock_nested(lock, 0)
| ^~~~
kernel/trace/ftrace.c:8082:21: note: each undeclared identifier is reported only once for each function it appears in
8082 | mutex_lock(&direct_mutex);
| ^~~~~~~~~~~~
include/linux/mutex.h:187:44: note: in definition of macro 'mutex_lock'
187 | #define mutex_lock(lock) mutex_lock_nested(lock, 0)
| ^~~~
>> kernel/trace/ftrace.c:8084:19: error: 'struct ftrace_ops' has no member named 'func_hash'
8084 | hash = ops->func_hash->filter_hash;
| ^~
>> kernel/trace/ftrace.c:8095:37: error: implicit declaration of function 'ops_references_ip' [-Werror=implicit-function-declaration]
8095 | if (ops_references_ip(op, ip)) {
| ^~~~~~~~~~~~~~~~~
>> kernel/trace/ftrace.c:8103:40: error: 'struct ftrace_ops' has no member named 'ops_func'
8103 | if (!op->ops_func) {
| ^~
kernel/trace/ftrace.c:8107:41: error: 'struct ftrace_ops' has no member named 'ops_func'
8107 | ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
| ^~
kernel/trace/ftrace.c: In function 'register_ftrace_function':
kernel/trace/ftrace.c:8158:31: error: 'direct_mutex' undeclared (first use in this function); did you mean 'event_mutex'?
8158 | mutex_unlock(&direct_mutex);
| ^~~~~~~~~~~~
| event_mutex
In file included from include/linux/rhashtable-types.h:14,
from include/linux/ipc.h:7,
from include/uapi/linux/sem.h:5,
from include/linux/sem.h:5,
from include/linux/sched.h:15,
from include/linux/ratelimit.h:6,
from include/linux/dev_printk.h:16,
from include/linux/device.h:15,
from include/linux/node.h:18,
from include/linux/cpu.h:17,
from include/linux/stop_machine.h:5,
from kernel/trace/ftrace.c:17:
kernel/trace/ftrace.c: In function 'cleanup_direct_functions_after_ipmodify':
kernel/trace/ftrace.c:8178:21: error: 'direct_mutex' undeclared (first use in this function); did you mean 'event_mutex'?
8178 | mutex_lock(&direct_mutex);
| ^~~~~~~~~~~~
include/linux/mutex.h:187:44: note: in definition of macro 'mutex_lock'
187 | #define mutex_lock(lock) mutex_lock_nested(lock, 0)
| ^~~~
kernel/trace/ftrace.c:8180:19: error: 'struct ftrace_ops' has no member named 'func_hash'
8180 | hash = ops->func_hash->filter_hash;
| ^~
kernel/trace/ftrace.c:8199:43: error: 'struct ftrace_ops' has no member named 'ops_func'
8199 | if (found_op && op->ops_func)
| ^~
kernel/trace/ftrace.c:8200:35: error: 'struct ftrace_ops' has no member named 'ops_func'
8200 | op->ops_func(op, FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER);
| ^~
cc1: some warnings being treated as errors


vim +8082 kernel/trace/ftrace.c

8051
8052 /*
8053 * When registering ftrace_ops with IPMODIFY, it is necessary to make sure
8054 * it doesn't conflict with any direct ftrace_ops. If there is existing
8055 * direct ftrace_ops on a kernel function being patched, call
8056 * FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER on it to enable sharing.
8057 *
8058 * @ops: ftrace_ops being registered.
8059 *
8060 * Returns:
8061 * 0 - @ops does not have IPMODIFY or @ops itself is DIRECT, no
8062 * change needed;
8063 * 1 - @ops has IPMODIFY, hold direct_mutex;
8064 * -EBUSY - currently registered DIRECT ftrace_ops cannot share the
8065 * same function with IPMODIFY, abort the register.
8066 * -EAGAIN - cannot make changes to currently registered DIRECT
8067 * ftrace_ops due to rare race conditions. Should retry
8068 * later. This is needed to avoid potential deadlocks
8069 * on the DIRECT ftrace_ops side.
8070 */
8071 static int prepare_direct_functions_for_ipmodify(struct ftrace_ops *ops)
8072 __acquires(&direct_mutex)
8073 {
8074 struct ftrace_func_entry *entry;
8075 struct ftrace_hash *hash;
8076 struct ftrace_ops *op;
8077 int size, i, ret;
8078
8079 if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
8080 return 0;
8081
> 8082 mutex_lock(&direct_mutex);
8083
> 8084 hash = ops->func_hash->filter_hash;
8085 size = 1 << hash->size_bits;
8086 for (i = 0; i < size; i++) {
8087 hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
8088 unsigned long ip = entry->ip;
8089 bool found_op = false;
8090
8091 mutex_lock(&ftrace_lock);
8092 do_for_each_ftrace_op(op, ftrace_ops_list) {
8093 if (!(op->flags & FTRACE_OPS_FL_DIRECT))
8094 continue;
> 8095 if (ops_references_ip(op, ip)) {
8096 found_op = true;
8097 break;
8098 }
8099 } while_for_each_ftrace_op(op);
8100 mutex_unlock(&ftrace_lock);
8101
8102 if (found_op) {
> 8103 if (!op->ops_func) {
8104 ret = -EBUSY;
8105 goto err_out;
8106 }
8107 ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
8108 if (ret)
8109 goto err_out;
8110 }
8111 }
8112 }
8113
8114 /*
8115 * Didn't find any overlap with direct ftrace_ops, or the direct
8116 * function can share with ipmodify. Hold direct_mutex to make sure
8117 * this doesn't change until we are done.
8118 */
8119 return 1;
8120
8121 err_out:
8122 mutex_unlock(&direct_mutex);
8123 return ret;
8124 }
8125

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-07-18 03:31:35

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 2/4] ftrace: allow IPMODIFY and DIRECT ops on the same function

Hi Song,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url: https://github.com/intel-lab-lkp/linux/commits/Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: powerpc-randconfig-r023-20220717 (https://download.01.org/0day-ci/archive/20220718/[email protected]/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project d74b88c69dc2644bd0dc5d64e2d7413a0d4040e5)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install powerpc cross compiling tool for clang build
# apt-get install binutils-powerpc-linux-gnu
# https://github.com/intel-lab-lkp/linux/commit/1535f287d288f9b7540ec50f56da1fe437ac6512
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
git checkout 1535f287d288f9b7540ec50f56da1fe437ac6512
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=powerpc SHELL=/bin/bash kernel/trace/

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

kernel/trace/ftrace.c:299:5: warning: no previous prototype for function '__register_ftrace_function' [-Wmissing-prototypes]
int __register_ftrace_function(struct ftrace_ops *ops)
^
kernel/trace/ftrace.c:299:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int __register_ftrace_function(struct ftrace_ops *ops)
^
static
kernel/trace/ftrace.c:342:5: warning: no previous prototype for function '__unregister_ftrace_function' [-Wmissing-prototypes]
int __unregister_ftrace_function(struct ftrace_ops *ops)
^
kernel/trace/ftrace.c:342:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int __unregister_ftrace_function(struct ftrace_ops *ops)
^
static
kernel/trace/ftrace.c:4030:15: warning: no previous prototype for function 'arch_ftrace_match_adjust' [-Wmissing-prototypes]
char * __weak arch_ftrace_match_adjust(char *str, const char *search)
^
kernel/trace/ftrace.c:4030:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
char * __weak arch_ftrace_match_adjust(char *str, const char *search)
^
static
>> kernel/trace/ftrace.c:8082:14: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_lock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
include/linux/mutex.h:187:44: note: expanded from macro 'mutex_lock'
#define mutex_lock(lock) mutex_lock_nested(lock, 0)
^
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8122:16: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_unlock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8158:17: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_unlock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8178:14: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_lock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
include/linux/mutex.h:187:44: note: expanded from macro 'mutex_lock'
#define mutex_lock(lock) mutex_lock_nested(lock, 0)
^
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8203:16: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_unlock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
3 warnings and 5 errors generated.


vim +8082 kernel/trace/ftrace.c

8051
8052 /*
8053 * When registering ftrace_ops with IPMODIFY, it is necessary to make sure
8054 * it doesn't conflict with any direct ftrace_ops. If there is existing
8055 * direct ftrace_ops on a kernel function being patched, call
8056 * FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER on it to enable sharing.
8057 *
8058 * @ops: ftrace_ops being registered.
8059 *
8060 * Returns:
8061 * 0 - @ops does not have IPMODIFY or @ops itself is DIRECT, no
8062 * change needed;
8063 * 1 - @ops has IPMODIFY, hold direct_mutex;
8064 * -EBUSY - currently registered DIRECT ftrace_ops cannot share the
8065 * same function with IPMODIFY, abort the register.
8066 * -EAGAIN - cannot make changes to currently registered DIRECT
8067 * ftrace_ops due to rare race conditions. Should retry
8068 * later. This is needed to avoid potential deadlocks
8069 * on the DIRECT ftrace_ops side.
8070 */
8071 static int prepare_direct_functions_for_ipmodify(struct ftrace_ops *ops)
8072 __acquires(&direct_mutex)
8073 {
8074 struct ftrace_func_entry *entry;
8075 struct ftrace_hash *hash;
8076 struct ftrace_ops *op;
8077 int size, i, ret;
8078
8079 if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
8080 return 0;
8081
> 8082 mutex_lock(&direct_mutex);
8083
8084 hash = ops->func_hash->filter_hash;
8085 size = 1 << hash->size_bits;
8086 for (i = 0; i < size; i++) {
8087 hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
8088 unsigned long ip = entry->ip;
8089 bool found_op = false;
8090
8091 mutex_lock(&ftrace_lock);
8092 do_for_each_ftrace_op(op, ftrace_ops_list) {
8093 if (!(op->flags & FTRACE_OPS_FL_DIRECT))
8094 continue;
8095 if (ops_references_ip(op, ip)) {
8096 found_op = true;
8097 break;
8098 }
8099 } while_for_each_ftrace_op(op);
8100 mutex_unlock(&ftrace_lock);
8101
8102 if (found_op) {
8103 if (!op->ops_func) {
8104 ret = -EBUSY;
8105 goto err_out;
8106 }
8107 ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
8108 if (ret)
8109 goto err_out;
8110 }
8111 }
8112 }
8113
8114 /*
8115 * Didn't find any overlap with direct ftrace_ops, or the direct
8116 * function can share with ipmodify. Hold direct_mutex to make sure
8117 * this doesn't change until we are done.
8118 */
8119 return 1;
8120
8121 err_out:
8122 mutex_unlock(&direct_mutex);
8123 return ret;
8124 }
8125

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-07-18 03:52:29

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 2/4] ftrace: allow IPMODIFY and DIRECT ops on the same function

Hi Song,

I love your patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url: https://github.com/intel-lab-lkp/linux/commits/Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
config: x86_64-randconfig-a001 (https://download.01.org/0day-ci/archive/20220718/[email protected]/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project d74b88c69dc2644bd0dc5d64e2d7413a0d4040e5)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/1535f287d288f9b7540ec50f56da1fe437ac6512
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
git checkout 1535f287d288f9b7540ec50f56da1fe437ac6512
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <[email protected]>

All errors (new ones prefixed by >>):

>> kernel/trace/ftrace.c:8082:14: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_lock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
include/linux/mutex.h:187:44: note: expanded from macro 'mutex_lock'
#define mutex_lock(lock) mutex_lock_nested(lock, 0)
^
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
>> kernel/trace/ftrace.c:8084:14: error: no member named 'func_hash' in 'struct ftrace_ops'
hash = ops->func_hash->filter_hash;
~~~ ^
>> kernel/trace/ftrace.c:8095:9: error: call to undeclared function 'ops_references_ip'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
if (ops_references_ip(op, ip)) {
^
>> kernel/trace/ftrace.c:8103:14: error: no member named 'ops_func' in 'struct ftrace_ops'
if (!op->ops_func) {
~~ ^
kernel/trace/ftrace.c:8107:15: error: no member named 'ops_func' in 'struct ftrace_ops'
ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
~~ ^
kernel/trace/ftrace.c:8122:16: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_unlock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8158:17: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_unlock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8178:14: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_lock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
include/linux/mutex.h:187:44: note: expanded from macro 'mutex_lock'
#define mutex_lock(lock) mutex_lock_nested(lock, 0)
^
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
kernel/trace/ftrace.c:8180:14: error: no member named 'func_hash' in 'struct ftrace_ops'
hash = ops->func_hash->filter_hash;
~~~ ^
kernel/trace/ftrace.c:8191:9: error: call to undeclared function 'ops_references_ip'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
if (ops_references_ip(op, ip)) {
^
kernel/trace/ftrace.c:8199:24: error: no member named 'ops_func' in 'struct ftrace_ops'
if (found_op && op->ops_func)
~~ ^
kernel/trace/ftrace.c:8200:9: error: no member named 'ops_func' in 'struct ftrace_ops'
op->ops_func(op, FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER);
~~ ^
kernel/trace/ftrace.c:8203:16: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
mutex_unlock(&direct_mutex);
^~~~~~~~~~~~
event_mutex
kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
extern struct mutex event_mutex;
^
13 errors generated.


vim +8082 kernel/trace/ftrace.c

8051
8052 /*
8053 * When registering ftrace_ops with IPMODIFY, it is necessary to make sure
8054 * it doesn't conflict with any direct ftrace_ops. If there is existing
8055 * direct ftrace_ops on a kernel function being patched, call
8056 * FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER on it to enable sharing.
8057 *
8058 * @ops: ftrace_ops being registered.
8059 *
8060 * Returns:
8061 * 0 - @ops does not have IPMODIFY or @ops itself is DIRECT, no
8062 * change needed;
8063 * 1 - @ops has IPMODIFY, hold direct_mutex;
8064 * -EBUSY - currently registered DIRECT ftrace_ops cannot share the
8065 * same function with IPMODIFY, abort the register.
8066 * -EAGAIN - cannot make changes to currently registered DIRECT
8067 * ftrace_ops due to rare race conditions. Should retry
8068 * later. This is needed to avoid potential deadlocks
8069 * on the DIRECT ftrace_ops side.
8070 */
8071 static int prepare_direct_functions_for_ipmodify(struct ftrace_ops *ops)
8072 __acquires(&direct_mutex)
8073 {
8074 struct ftrace_func_entry *entry;
8075 struct ftrace_hash *hash;
8076 struct ftrace_ops *op;
8077 int size, i, ret;
8078
8079 if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
8080 return 0;
8081
> 8082 mutex_lock(&direct_mutex);
8083
> 8084 hash = ops->func_hash->filter_hash;
8085 size = 1 << hash->size_bits;
8086 for (i = 0; i < size; i++) {
8087 hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
8088 unsigned long ip = entry->ip;
8089 bool found_op = false;
8090
8091 mutex_lock(&ftrace_lock);
8092 do_for_each_ftrace_op(op, ftrace_ops_list) {
8093 if (!(op->flags & FTRACE_OPS_FL_DIRECT))
8094 continue;
> 8095 if (ops_references_ip(op, ip)) {
8096 found_op = true;
8097 break;
8098 }
8099 } while_for_each_ftrace_op(op);
8100 mutex_unlock(&ftrace_lock);
8101
8102 if (found_op) {
> 8103 if (!op->ops_func) {
8104 ret = -EBUSY;
8105 goto err_out;
8106 }
8107 ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
8108 if (ret)
8109 goto err_out;
8110 }
8111 }
8112 }
8113
8114 /*
8115 * Didn't find any overlap with direct ftrace_ops, or the direct
8116 * function can share with ipmodify. Hold direct_mutex to make sure
8117 * this doesn't change until we are done.
8118 */
8119 return 1;
8120
8121 err_out:
8122 mutex_unlock(&direct_mutex);
8123 return ret;
8124 }
8125

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-07-18 05:54:34

by Song Liu

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 2/4] ftrace: allow IPMODIFY and DIRECT ops on the same function



> On Jul 17, 2022, at 8:36 PM, kernel test robot <[email protected]> wrote:
>
> Hi Song,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on bpf-next/master]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
> base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
> config: x86_64-randconfig-a001 (https://download.01.org/0day-ci/archive/20220718/[email protected]/config )
> compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project d74b88c69dc2644bd0dc5d64e2d7413a0d4040e5)
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # https://github.com/intel-lab-lkp/linux/commit/1535f287d288f9b7540ec50f56da1fe437ac6512
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review Song-Liu/ftrace-host-klp-and-bpf-trampoline-together/20220718-081533
> git checkout 1535f287d288f9b7540ec50f56da1fe437ac6512
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash
>
> If you fix the issue, kindly add following tag where applicable
> Reported-by: kernel test robot <[email protected]>
>
> All errors (new ones prefixed by >>):
>
>>> kernel/trace/ftrace.c:8082:14: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
> mutex_lock(&direct_mutex);
> ^~~~~~~~~~~~
> event_mutex

Folded the fix in. I guess this should also fail on v2, but
the kernel test robot didn't seem to find it.

Song

> include/linux/mutex.h:187:44: note: expanded from macro 'mutex_lock'
> #define mutex_lock(lock) mutex_lock_nested(lock, 0)
> ^
> kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
> extern struct mutex event_mutex;
> ^
>>> kernel/trace/ftrace.c:8084:14: error: no member named 'func_hash' in 'struct ftrace_ops'
> hash = ops->func_hash->filter_hash;
> ~~~ ^
>>> kernel/trace/ftrace.c:8095:9: error: call to undeclared function 'ops_references_ip'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
> if (ops_references_ip(op, ip)) {
> ^
>>> kernel/trace/ftrace.c:8103:14: error: no member named 'ops_func' in 'struct ftrace_ops'
> if (!op->ops_func) {
> ~~ ^
> kernel/trace/ftrace.c:8107:15: error: no member named 'ops_func' in 'struct ftrace_ops'
> ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
> ~~ ^
> kernel/trace/ftrace.c:8122:16: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
> mutex_unlock(&direct_mutex);
> ^~~~~~~~~~~~
> event_mutex
> kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
> extern struct mutex event_mutex;
> ^
> kernel/trace/ftrace.c:8158:17: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
> mutex_unlock(&direct_mutex);
> ^~~~~~~~~~~~
> event_mutex
> kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
> extern struct mutex event_mutex;
> ^
> kernel/trace/ftrace.c:8178:14: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
> mutex_lock(&direct_mutex);
> ^~~~~~~~~~~~
> event_mutex
> include/linux/mutex.h:187:44: note: expanded from macro 'mutex_lock'
> #define mutex_lock(lock) mutex_lock_nested(lock, 0)
> ^
> kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
> extern struct mutex event_mutex;
> ^
> kernel/trace/ftrace.c:8180:14: error: no member named 'func_hash' in 'struct ftrace_ops'
> hash = ops->func_hash->filter_hash;
> ~~~ ^
> kernel/trace/ftrace.c:8191:9: error: call to undeclared function 'ops_references_ip'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
> if (ops_references_ip(op, ip)) {
> ^
> kernel/trace/ftrace.c:8199:24: error: no member named 'ops_func' in 'struct ftrace_ops'
> if (found_op && op->ops_func)
> ~~ ^
> kernel/trace/ftrace.c:8200:9: error: no member named 'ops_func' in 'struct ftrace_ops'
> op->ops_func(op, FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER);
> ~~ ^
> kernel/trace/ftrace.c:8203:16: error: use of undeclared identifier 'direct_mutex'; did you mean 'event_mutex'?
> mutex_unlock(&direct_mutex);
> ^~~~~~~~~~~~
> event_mutex
> kernel/trace/trace.h:1523:21: note: 'event_mutex' declared here
> extern struct mutex event_mutex;
> ^
> 13 errors generated.
>
>
> vim +8082 kernel/trace/ftrace.c
>
> 8051
> 8052 /*
> 8053 * When registering ftrace_ops with IPMODIFY, it is necessary to make sure
> 8054 * it doesn't conflict with any direct ftrace_ops. If there is existing
> 8055 * direct ftrace_ops on a kernel function being patched, call
> 8056 * FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER on it to enable sharing.
> 8057 *
> 8058 * @ops: ftrace_ops being registered.
> 8059 *
> 8060 * Returns:
> 8061 * 0 - @ops does not have IPMODIFY or @ops itself is DIRECT, no
> 8062 * change needed;
> 8063 * 1 - @ops has IPMODIFY, hold direct_mutex;
> 8064 * -EBUSY - currently registered DIRECT ftrace_ops cannot share the
> 8065 * same function with IPMODIFY, abort the register.
> 8066 * -EAGAIN - cannot make changes to currently registered DIRECT
> 8067 * ftrace_ops due to rare race conditions. Should retry
> 8068 * later. This is needed to avoid potential deadlocks
> 8069 * on the DIRECT ftrace_ops side.
> 8070 */
> 8071 static int prepare_direct_functions_for_ipmodify(struct ftrace_ops *ops)
> 8072 __acquires(&direct_mutex)
> 8073 {
> 8074 struct ftrace_func_entry *entry;
> 8075 struct ftrace_hash *hash;
> 8076 struct ftrace_ops *op;
> 8077 int size, i, ret;
> 8078
> 8079 if (!(ops->flags & FTRACE_OPS_FL_IPMODIFY))
> 8080 return 0;
> 8081
>> 8082 mutex_lock(&direct_mutex);
> 8083
>> 8084 hash = ops->func_hash->filter_hash;
> 8085 size = 1 << hash->size_bits;
> 8086 for (i = 0; i < size; i++) {
> 8087 hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
> 8088 unsigned long ip = entry->ip;
> 8089 bool found_op = false;
> 8090
> 8091 mutex_lock(&ftrace_lock);
> 8092 do_for_each_ftrace_op(op, ftrace_ops_list) {
> 8093 if (!(op->flags & FTRACE_OPS_FL_DIRECT))
> 8094 continue;
>> 8095 if (ops_references_ip(op, ip)) {
> 8096 found_op = true;
> 8097 break;
> 8098 }
> 8099 } while_for_each_ftrace_op(op);
> 8100 mutex_unlock(&ftrace_lock);
> 8101
> 8102 if (found_op) {
>> 8103 if (!op->ops_func) {
> 8104 ret = -EBUSY;
> 8105 goto err_out;
> 8106 }
> 8107 ret = op->ops_func(op, FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER);
> 8108 if (ret)
> 8109 goto err_out;
> 8110 }
> 8111 }
> 8112 }
> 8113
> 8114 /*
> 8115 * Didn't find any overlap with direct ftrace_ops, or the direct
> 8116 * function can share with ipmodify. Hold direct_mutex to make sure
> 8117 * this doesn't change until we are done.
> 8118 */
> 8119 return 1;
> 8120
> 8121 err_out:
> 8122 mutex_unlock(&direct_mutex);
> 8123 return ret;
> 8124 }
> 8125
>
> --
> 0-DAY CI Kernel Test Service
> https://01.org/lkp

2022-07-18 13:02:44

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 1/4] ftrace: add modify_ftrace_direct_multi_nolock

On Sun 2022-07-17 17:14:02, Song Liu wrote:
> This is similar to modify_ftrace_direct_multi, but does not acquire
> direct_mutex. This is useful when direct_mutex is already locked by the
> user.
>
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -5691,22 +5691,8 @@ int unregister_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
> @@ -5717,12 +5703,8 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
> int i, size;
> int err;
>
> - if (check_direct_multi(ops))
> + if (WARN_ON_ONCE(!mutex_is_locked(&direct_mutex)))
> return -EINVAL;

IMHO, it is better to use:

lockdep_assert_held_once(&direct_mutex);

It will always catch the problem when called without the lock and
lockdep is enabled.

> - if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
> - return -EINVAL;
> -
> - mutex_lock(&direct_mutex);
>
> /* Enable the tmp_ops to have the same functions as the direct ops */
> ftrace_ops_init(&tmp_ops);
> @@ -5730,7 +5712,7 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
>
> err = register_ftrace_function(&tmp_ops);
> if (err)
> - goto out_direct;
> + return err;
>
> /*
> * Now the ftrace_ops_list_func() is called to do the direct callers.
> @@ -5754,7 +5736,64 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
> /* Removing the tmp_ops will add the updated direct callers to the functions */
> unregister_ftrace_function(&tmp_ops);
>
> - out_direct:
> + return err;
> +}
> +
> +/**
> + * modify_ftrace_direct_multi_nolock - Modify an existing direct 'multi' call
> + * to call something else
> + * @ops: The address of the struct ftrace_ops object
> + * @addr: The address of the new trampoline to call at @ops functions
> + *
> + * This is used to unregister currently registered direct caller and
> + * register new one @addr on functions registered in @ops object.
> + *
> + * Note there's window between ftrace_shutdown and ftrace_startup calls
> + * where there will be no callbacks called.
> + *
> + * Caller should already have direct_mutex locked, so we don't lock
> + * direct_mutex here.
> + *
> + * Returns: zero on success. Non zero on error, which includes:
> + * -EINVAL - The @ops object was not properly registered.
> + */
> +int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
> +{
> + if (check_direct_multi(ops))
> + return -EINVAL;
> + if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
> + return -EINVAL;
> +
> + return __modify_ftrace_direct_multi(ops, addr);
> +}
> +EXPORT_SYMBOL_GPL(modify_ftrace_direct_multi_nolock);
> +
> +/**
> + * modify_ftrace_direct_multi - Modify an existing direct 'multi' call
> + * to call something else
> + * @ops: The address of the struct ftrace_ops object
> + * @addr: The address of the new trampoline to call at @ops functions
> + *
> + * This is used to unregister currently registered direct caller and
> + * register new one @addr on functions registered in @ops object.
> + *
> + * Note there's window between ftrace_shutdown and ftrace_startup calls
> + * where there will be no callbacks called.
> + *
> + * Returns: zero on success. Non zero on error, which includes:
> + * -EINVAL - The @ops object was not properly registered.
> + */
> +int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
> +{
> + int err;
> +
> + if (check_direct_multi(ops))
> + return -EINVAL;
> + if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
> + return -EINVAL;
> +
> + mutex_lock(&direct_mutex);
> + err = __modify_ftrace_direct_multi(ops, addr);
> mutex_unlock(&direct_mutex);
> return err;
> }

I would personally do:

int do_modify_ftrace_direct_multi(struct ftrace_ops *ops,
				  unsigned long addr, bool lock)
{
	int err;

	if (check_direct_multi(ops))
		return -EINVAL;
	if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
		return -EINVAL;

	if (lock)
		mutex_lock(&direct_mutex);

	err = __modify_ftrace_direct_multi(ops, addr);

	if (lock)
		mutex_unlock(&direct_mutex);

	return err;
}

int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
{
	return do_modify_ftrace_direct_multi(ops, addr, true);
}

int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
{
	return do_modify_ftrace_direct_multi(ops, addr, false);
}

To avoid duplication of the checks. But it is a matter of taste.

Best Regards,
Petr

2022-07-18 13:08:59

by Petr Mladek

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 4/4] bpf: support bpf_trampoline on functions with IPMODIFY (e.g. livepatch)

On Sun 2022-07-17 17:14:05, Song Liu wrote:
> When tracing a function with IPMODIFY ftrace_ops (livepatch), the bpf
> trampoline must follow the instruction pointer saved on stack. This needs
> extra handling for bpf trampolines with BPF_TRAMP_F_CALL_ORIG flag.
>
> Implement bpf_tramp_ftrace_ops_func and use it for the ftrace_ops used
> by BPF trampoline. This enables tracing functions with livepatch.
>
> This also requires moving bpf trampoline to *_ftrace_direct_mult APIs.
>
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -13,6 +13,7 @@
> #include <linux/static_call.h>
> #include <linux/bpf_verifier.h>
> #include <linux/bpf_lsm.h>
> +#include <linux/delay.h>
>
> /* dummy _ops. The verifier will operate on target program's ops. */
> const struct bpf_verifier_ops bpf_extension_verifier_ops = {
> @@ -29,6 +30,81 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
> /* serializes access to trampoline_table */
> static DEFINE_MUTEX(trampoline_mutex);
>
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
> +static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
> +
> +static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, enum ftrace_ops_cmd cmd)
> +{
> + struct bpf_trampoline *tr = ops->private;
> + int ret = 0;
> +
> + if (cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF) {
> + /* This is called inside register_ftrace_direct_multi(), so
> + * tr->mutex is already locked.
> + */
> + WARN_ON_ONCE(!mutex_is_locked(&tr->mutex));

Again, better is:

lockdep_assert_held_once(&tr->mutex);

> +
> + /* Instead of updating the trampoline here, we propagate
> + * -EAGAIN to register_ftrace_direct_multi(). Then we can
> + * retry register_ftrace_direct_multi() after updating the
> + * trampoline.
> + */
> + if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
> + !(tr->flags & BPF_TRAMP_F_ORIG_STACK)) {
> + if (WARN_ON_ONCE(tr->flags & BPF_TRAMP_F_SHARE_IPMODIFY))
> + return -EBUSY;
> +
> + tr->flags |= BPF_TRAMP_F_SHARE_IPMODIFY;
> + return -EAGAIN;
> + }
> +
> + return 0;
> + }
> +
> + /* The normal locking order is
> + * tr->mutex => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
> + *
> + * The following two commands are called from
> + *
> + * prepare_direct_functions_for_ipmodify
> + * cleanup_direct_functions_after_ipmodify
> + *
> + * In both cases, direct_mutex is already locked. Use
> + * mutex_trylock(&tr->mutex) to avoid deadlock in race condition
> + * (something else is making changes to this same trampoline).
> + */
> + if (!mutex_trylock(&tr->mutex)) {
> + /* sleep 1 ms to make sure whatever holding tr->mutex makes
> + * some progress.
> + */
> + msleep(1);
> + return -EAGAIN;
> + }

Huh, this looks horrible. And I do not get it. The earlier block warns
when the mutex is not taken. Why is it already taken when
cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF, and why does it have
to be explicitly taken otherwise?

Would it be possible to call prepare_direct_functions_for_ipmodify()
and cleanup_direct_functions_after_ipmodify() with tr->mutex already
taken, so that the ordering is correct even in this case?

That said, this is the first version where I am in Cc, so I am not sure
whether this has already been discussed.


> + switch (cmd) {
> + case FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER:
> + tr->flags |= BPF_TRAMP_F_SHARE_IPMODIFY;
> +
> + if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
> + !(tr->flags & BPF_TRAMP_F_ORIG_STACK))
> + ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
> + break;
> + case FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER:
> + tr->flags &= ~BPF_TRAMP_F_SHARE_IPMODIFY;
> +
> + if (tr->flags & BPF_TRAMP_F_ORIG_STACK)
> + ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
> + break;
> + default:
> + ret = -EINVAL;
> + break;
> + };
> +
> + mutex_unlock(&tr->mutex);
> + return ret;
> +}
> +#endif
> +
> bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
> {
> enum bpf_attach_type eatype = prog->expected_attach_type;

Note that I did not do a proper review. I am not that familiar with the
ftrace code. I just wanted to check how much this patchset affects
livepatching, and noticed the things commented on above.

Best Regards,
Petr

2022-07-18 16:40:58

by Song Liu

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 1/4] ftrace: add modify_ftrace_direct_multi_nolock

Hi Petr,

Thanks for your quick review!

> On Jul 18, 2022, at 5:50 AM, Petr Mladek <[email protected]> wrote:
>
> On Sun 2022-07-17 17:14:02, Song Liu wrote:
>> This is similar to modify_ftrace_direct_multi, but does not acquire
>> direct_mutex. This is useful when direct_mutex is already locked by the
>> user.
>>
>> --- a/kernel/trace/ftrace.c
>> +++ b/kernel/trace/ftrace.c
>> @@ -5691,22 +5691,8 @@ int unregister_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
>> @@ -5717,12 +5703,8 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
>> int i, size;
>> int err;
>>
>> - if (check_direct_multi(ops))
>> + if (WARN_ON_ONCE(!mutex_is_locked(&direct_mutex)))
>> return -EINVAL;
>
> IMHO, it is better to use:
>
> lockdep_assert_held_once(&direct_mutex);
>
> It will always catch the problem when called without the lock and
> lockdep is enabled.

Will fix.

>
>> - if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
>> - return -EINVAL;
>> -
>> - mutex_lock(&direct_mutex);
>>
>> /* Enable the tmp_ops to have the same functions as the direct ops */
>> ftrace_ops_init(&tmp_ops);
>> @@ -5730,7 +5712,7 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
>>
>> err = register_ftrace_function(&tmp_ops);
>> if (err)
>> - goto out_direct;
>> + return err;
>>
>> /*
>> * Now the ftrace_ops_list_func() is called to do the direct callers.
>> @@ -5754,7 +5736,64 @@ int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
>> /* Removing the tmp_ops will add the updated direct callers to the functions */
>> unregister_ftrace_function(&tmp_ops);
>>
>> - out_direct:
>> + return err;
>> +}
>> +
>> +/**
>> + * modify_ftrace_direct_multi_nolock - Modify an existing direct 'multi' call
>> + * to call something else
>> + * @ops: The address of the struct ftrace_ops object
>> + * @addr: The address of the new trampoline to call at @ops functions
>> + *
>> + * This is used to unregister currently registered direct caller and
>> + * register new one @addr on functions registered in @ops object.
>> + *
>> + * Note there's window between ftrace_shutdown and ftrace_startup calls
>> + * where there will be no callbacks called.
>> + *
>> + * Caller should already have direct_mutex locked, so we don't lock
>> + * direct_mutex here.
>> + *
>> + * Returns: zero on success. Non zero on error, which includes:
>> + * -EINVAL - The @ops object was not properly registered.
>> + */
>> +int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
>> +{
>> + if (check_direct_multi(ops))
>> + return -EINVAL;
>> + if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
>> + return -EINVAL;
>> +
>> + return __modify_ftrace_direct_multi(ops, addr);
>> +}
>> +EXPORT_SYMBOL_GPL(modify_ftrace_direct_multi_nolock);
>> +
>> +/**
>> + * modify_ftrace_direct_multi - Modify an existing direct 'multi' call
>> + * to call something else
>> + * @ops: The address of the struct ftrace_ops object
>> + * @addr: The address of the new trampoline to call at @ops functions
>> + *
>> + * This is used to unregister currently registered direct caller and
>> + * register new one @addr on functions registered in @ops object.
>> + *
>> + * Note there's window between ftrace_shutdown and ftrace_startup calls
>> + * where there will be no callbacks called.
>> + *
>> + * Returns: zero on success. Non zero on error, which includes:
>> + * -EINVAL - The @ops object was not properly registered.
>> + */
>> +int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
>> +{
>> + int err;
>> +
>> + if (check_direct_multi(ops))
>> + return -EINVAL;
>> + if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
>> + return -EINVAL;
>> +
>> + mutex_lock(&direct_mutex);
>> + err = __modify_ftrace_direct_multi(ops, addr);
>> mutex_unlock(&direct_mutex);
>> return err;
>> }
>
> I would personally do:
>
> int __modify_ftrace_direct_multi(struct ftrace_ops *ops,
> unsigned long addr, bool lock)
> {
> int err;
>
> if (check_direct_multi(ops))
> return -EINVAL;
> if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
> return -EINVAL;
>
> if (lock)
> mutex_lock(&direct_mutex);
>
> err = __modify_ftrace_direct_multi(ops, addr);
>
> if (lock)
> mutex_unlock(&direct_mutex);

The "if (lock) lock" pattern bothers me a little. But I agree this is
a matter of taste. If other folks prefer this way, I will make the
change.

Thanks,
Song

>
> return err;
> }
>
> int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
> {
> __modify_ftrace_direct_multi(ops, addr, true);
> }
>
> int modify_ftrace_direct_multi_nolock(struct ftrace_ops *ops, unsigned long addr)
> {
> __modify_ftrace_direct_multi(ops, addr, false);
> }
>
> To avoid duplication of the checks. But it is a matter of taste.
>
> Best Regards,
> Petr

2022-07-18 16:59:10

by Song Liu

[permalink] [raw]
Subject: Re: [PATCH v3 bpf-next 4/4] bpf: support bpf_trampoline on functions with IPMODIFY (e.g. livepatch)



> On Jul 18, 2022, at 6:07 AM, Petr Mladek <[email protected]> wrote:
>
> On Sun 2022-07-17 17:14:05, Song Liu wrote:
>> When tracing a function with IPMODIFY ftrace_ops (livepatch), the bpf
>> trampoline must follow the instruction pointer saved on stack. This needs
>> extra handling for bpf trampolines with BPF_TRAMP_F_CALL_ORIG flag.
>>
>> Implement bpf_tramp_ftrace_ops_func and use it for the ftrace_ops used
>> by BPF trampoline. This enables tracing functions with livepatch.
>>
>> This also requires moving bpf trampoline to *_ftrace_direct_mult APIs.
>>
>> --- a/kernel/bpf/trampoline.c
>> +++ b/kernel/bpf/trampoline.c
>> @@ -13,6 +13,7 @@
>> #include <linux/static_call.h>
>> #include <linux/bpf_verifier.h>
>> #include <linux/bpf_lsm.h>
>> +#include <linux/delay.h>
>>
>> /* dummy _ops. The verifier will operate on target program's ops. */
>> const struct bpf_verifier_ops bpf_extension_verifier_ops = {
>> @@ -29,6 +30,81 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
>> /* serializes access to trampoline_table */
>> static DEFINE_MUTEX(trampoline_mutex);
>>
>> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
>> +static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mutex);
>> +
>> +static int bpf_tramp_ftrace_ops_func(struct ftrace_ops *ops, enum ftrace_ops_cmd cmd)
>> +{
>> + struct bpf_trampoline *tr = ops->private;
>> + int ret = 0;
>> +
>> + if (cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF) {
>> + /* This is called inside register_ftrace_direct_multi(), so
>> + * tr->mutex is already locked.
>> + */
>> + WARN_ON_ONCE(!mutex_is_locked(&tr->mutex));
>
> Again, better is:
>
> lockdep_assert_held_once(&tr->mutex);

Will fix.

>
>> +
>> + /* Instead of updating the trampoline here, we propagate
>> + * -EAGAIN to register_ftrace_direct_multi(). Then we can
>> + * retry register_ftrace_direct_multi() after updating the
>> + * trampoline.
>> + */
>> + if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
>> + !(tr->flags & BPF_TRAMP_F_ORIG_STACK)) {
>> + if (WARN_ON_ONCE(tr->flags & BPF_TRAMP_F_SHARE_IPMODIFY))
>> + return -EBUSY;
>> +
>> + tr->flags |= BPF_TRAMP_F_SHARE_IPMODIFY;
>> + return -EAGAIN;
>> + }
>> +
>> + return 0;
>> + }
>> +
>> + /* The normal locking order is
>> + * tr->mutex => direct_mutex (ftrace.c) => ftrace_lock (ftrace.c)
>> + *
>> + * The following two commands are called from
>> + *
>> + * prepare_direct_functions_for_ipmodify
>> + * cleanup_direct_functions_after_ipmodify
>> + *
>> + * In both cases, direct_mutex is already locked. Use
>> + * mutex_trylock(&tr->mutex) to avoid deadlock in race condition
>> + * (something else is making changes to this same trampoline).
>> + */
>> + if (!mutex_trylock(&tr->mutex)) {
>> + /* sleep 1 ms to make sure whatever holding tr->mutex makes
>> + * some progress.
>> + */
>> + msleep(1);
>> + return -EAGAIN;
>> + }
>
> Huh, this looks horrible. And I do not get it. The above block prints
> a warning when the mutex is not taken. Why it is already taken
> when cmd == FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF
> and why it has to be explicitly taken otherwise?

There are two different scenarios:

1. livepatch applied first, then bpf trampoline is registered.

In this case, we call ops_func(FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_SELF).
_SELF means we are making a change to the DIRECT ops (bpf trampoline)
itself, and tr->mutex is already taken.

2. bpf_trampoline registered first, then livepatch is applied.

In this case, ftrace cannot take tr->mutex first. Instead, ftrace has to
lock direct_mutex, find any conflicting DIRECT ops, and call
ops_func(FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER). _PEER means this is
called on behalf of a peer ftrace_ops (livepatch).

>
> Would it be possible to call prepare_direct_functions_for_ipmodify(),
> cleanup_direct_functions_after_ipmodify() with rt->mutex already taken
> so that the ordering is correct even in this case.
>
> That said, this is the first version when I am in Cc. I am not sure
> if it has already been discussed.
>
>
>> + switch (cmd) {
>> + case FTRACE_OPS_CMD_ENABLE_SHARE_IPMODIFY_PEER:
>> + tr->flags |= BPF_TRAMP_F_SHARE_IPMODIFY;
>> +
>> + if ((tr->flags & BPF_TRAMP_F_CALL_ORIG) &&
>> + !(tr->flags & BPF_TRAMP_F_ORIG_STACK))
>> + ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
>> + break;
>> + case FTRACE_OPS_CMD_DISABLE_SHARE_IPMODIFY_PEER:
>> + tr->flags &= ~BPF_TRAMP_F_SHARE_IPMODIFY;
>> +
>> + if (tr->flags & BPF_TRAMP_F_ORIG_STACK)
>> + ret = bpf_trampoline_update(tr, false /* lock_direct_mutex */);
>> + break;
>> + default:
>> + ret = -EINVAL;
>> + break;
>> + };
>> +
>> + mutex_unlock(&tr->mutex);
>> + return ret;
>> +}
>> +#endif
>> +
>> bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
>> {
>> enum bpf_attach_type eatype = prog->expected_attach_type;
>
> Note that I did not do proper review. I not much familiar with the
> ftrace code. I just wanted to check how much this patchset affects
> livepatching and noticed the commented things.

Before this set, livepatch and bpf trampoline cannot work on the same
function; whichever is applied later will fail. After this set, the two
work almost perfectly together. By "almost" I mean the race condition
protected by the mutex_trylock:

if (!mutex_trylock(&tr->mutex)) {
/* sleep 1 ms to make sure whatever holding tr->mutex makes
* some progress.
*/
msleep(1);
return -EAGAIN;
}

If livepatch is applied while something else is changing the bpf
trampoline at the same time, the livepatch code will get -EAGAIN from
register_ftrace_function(). Then we need to retry from the livepatch
side. AFAICT, the kpatch user space already does the retry, so it is
going to work as-is. I am not sure about other user space tooling,
though.

The msleep(1) is suggested by Steven to avoid deadlock in RT case [1].

Does this make sense?

Thanks,
Song

[1] https://lore.kernel.org/bpf/[email protected]/

