2022-09-13 17:50:19

by Xu Kuohai

Subject: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

This series adds ftrace direct call support for arm64, which is required to
attach a bpf trampoline to fentry.

Although there is no agreement yet on how to support ftrace direct calls on
arm64, no patch has been posted other than the one I posted in [1], so this
series continues the work of [1] with the addition of long jump support. Ftrace
direct calls now work regardless of the distance between the callsite and the
custom trampoline.

[1] https://lore.kernel.org/bpf/[email protected]/

v2:
- Fix compile and runtime errors caused by ftrace_rec_arch_init

v1: https://lore.kernel.org/bpf/[email protected]/

Xu Kuohai (4):
ftrace: Allow users to disable ftrace direct call
arm64: ftrace: Support long jump for ftrace direct call
arm64: ftrace: Add ftrace direct call support
ftrace: Fix dead loop caused by direct call in ftrace selftest

arch/arm64/Kconfig | 2 +
arch/arm64/Makefile | 4 +
arch/arm64/include/asm/ftrace.h | 35 ++++--
arch/arm64/include/asm/patching.h | 2 +
arch/arm64/include/asm/ptrace.h | 6 +-
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel/entry-ftrace.S | 39 ++++--
arch/arm64/kernel/ftrace.c | 198 ++++++++++++++++++++++++++++--
arch/arm64/kernel/patching.c | 14 +++
arch/arm64/net/bpf_jit_comp.c | 4 +
include/linux/ftrace.h | 2 +
kernel/trace/Kconfig | 7 +-
kernel/trace/ftrace.c | 9 +-
kernel/trace/trace_selftest.c | 2 +
14 files changed, 296 insertions(+), 29 deletions(-)

--
2.30.2


2022-09-13 17:53:10

by Xu Kuohai

Subject: [PATCH bpf-next v2 2/4] arm64: ftrace: Support long jump for ftrace direct call

From: Xu Kuohai <[email protected]>

Add long jump support to fentry, so that dynamically allocated trampolines
such as bpf trampolines can be called from fentry directly, as these
trampoline addresses may be out of the range that a single BL instruction
can reach.

The scheme used here is basically the same as commit b2ad54e1533e
("bpf, arm64: Implement bpf_arch_text_poke() for arm64").

1. At compile time, we use -fpatchable-function-entry=7,5 to insert 5
NOPs before function entry and 2 NOPs after function entry:

	NOP
	NOP
	NOP
	NOP
	NOP
func:
	BTI C			// if BTI
	NOP
	NOP

The reason for inserting 5 NOPs before the function entry is that 2 NOPs
are patched to LDR and BR instructions, 2 NOPs are used to store the
destination jump address, and 1 NOP is used to adjust alignment so that
the destination jump address is stored in 8-byte aligned memory, which is
required for atomic stores and loads (see the sketch after this list).

2. When there is no trampoline attached, the callsite is patched to:

	NOP			// extra NOP if func is 8-byte aligned
literal:
	.quad ftrace_dummy_tramp
	NOP			// extra NOP if func is NOT 8-byte aligned
literal_call:
	LDR X16, literal
	BR X16
func:
	BTI C			// if BTI
	MOV X9, LR
	NOP

3. When long jump trampoline is attached, the callsite is patched to:

	NOP			// extra NOP if func is 8-byte aligned
literal:
	.quad <long-jump-trampoline>
	NOP			// extra NOP if func is NOT 8-byte aligned
literal_call:
	LDR X16, literal
	BR X16
func:
	BTI C			// if BTI
	MOV X9, LR
	BL literal_call

4. When short jump trampoline is attached, the callsite is patched to:

	NOP			// extra NOP if func is 8-byte aligned
literal:
	.quad ftrace_dummy_tramp
	NOP			// extra NOP if func is NOT 8-byte aligned
literal_call:
	LDR X16, literal
	BR X16
func:
	BTI C			// if BTI
	MOV X9, LR
	BL <short-jump-trampoline>

Note that the literal always holds a valid jump address, either a custom
trampoline address or the dummy trampoline address, which ensures that we
never jump from the callsite to an unknown place.

Also note that only the callsite patching is guaranteed to be atomic and
safe. Whether the custom trampoline can be freed must be checked by the
trampoline user. For example, bpf uses a refcnt and task-based RCU to
ensure the bpf trampoline can be freed safely.

In my environment, before this patch 2 NOPs are inserted at each function
entry and the generated vmlinux size is 463,649,280 bytes, while after this
patch the vmlinux size is 465,069,368 bytes, an increase of 1,420,088 bytes,
about 0.3%. In vmlinux there are 14,376 8-byte aligned functions and 41,847
unaligned functions. For each aligned function, one of the five NOPs before
the function entry is unnecessary, wasting 57,504 bytes (14,376 functions x
4 bytes per NOP).

Signed-off-by: Xu Kuohai <[email protected]>
---
arch/arm64/Makefile | 4 +
arch/arm64/include/asm/ftrace.h | 27 ++--
arch/arm64/include/asm/patching.h | 2 +
arch/arm64/kernel/entry-ftrace.S | 21 +++-
arch/arm64/kernel/ftrace.c | 198 ++++++++++++++++++++++++++++--
arch/arm64/kernel/patching.c | 14 +++
arch/arm64/net/bpf_jit_comp.c | 4 +
include/linux/ftrace.h | 2 +
kernel/trace/ftrace.c | 9 +-
9 files changed, 253 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 6d9d4a58b898..e540b50db5b8 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -130,7 +130,11 @@ CHECKFLAGS += -D__aarch64__

ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
+ ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS),y)
+ CC_FLAGS_FTRACE := -fpatchable-function-entry=7,5
+ else
CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+ endif
endif

# Default value
diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index dbc45a4157fa..40e63435965b 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -56,27 +56,16 @@ extern void _mcount(unsigned long);
extern void *return_address(unsigned int);

struct dyn_arch_ftrace {
- /* No extra data needed for arm64 */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ unsigned long func; /* start address of function */
+#endif
};

extern unsigned long ftrace_graph_call;

extern void return_to_handler(void);

-static inline unsigned long ftrace_call_adjust(unsigned long addr)
-{
- /*
- * Adjust addr to point at the BL in the callsite.
- * See ftrace_init_nop() for the callsite sequence.
- */
- if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))
- return addr + AARCH64_INSN_SIZE;
- /*
- * addr is the address of the mcount call instruction.
- * recordmcount does the necessary offset calculation.
- */
- return addr;
-}
+unsigned long ftrace_call_adjust(unsigned long addr);

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
struct dyn_ftrace;
@@ -121,6 +110,14 @@ static inline bool arch_syscall_match_sym_name(const char *sym,
*/
return !strcmp(sym + 8, name);
}
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+
+#define ftrace_dummy_tramp ftrace_dummy_tramp
+extern void ftrace_dummy_tramp(void);
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
#endif /* ifndef __ASSEMBLY__ */

#endif /* __ASM_FTRACE_H */
diff --git a/arch/arm64/include/asm/patching.h b/arch/arm64/include/asm/patching.h
index 6bf5adc56295..b9077205e6b2 100644
--- a/arch/arm64/include/asm/patching.h
+++ b/arch/arm64/include/asm/patching.h
@@ -10,4 +10,6 @@ int aarch64_insn_write(void *addr, u32 insn);
int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);

+void aarch64_literal64_write(void *addr, u64 data);
+
#endif /* __ASM_PATCHING_H */
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index bd5df50e4643..0bebe3ffdb58 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -14,14 +14,16 @@

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
/*
- * Due to -fpatchable-function-entry=2, the compiler has placed two NOPs before
- * the regular function prologue. For an enabled callsite, ftrace_init_nop() and
- * ftrace_make_call() have patched those NOPs to:
+ * Due to -fpatchable-function-entry=2 or -fpatchable-function-entry=7,5, the
+ * compiler has placed two NOPs before the regular function prologue. For an
+ * enabled callsite, ftrace_init_nop() and ftrace_make_call() have patched those
+ * NOPs to:
*
* MOV X9, LR
* BL <entry>
*
- * ... where <entry> is either ftrace_caller or ftrace_regs_caller.
+ * ... where <entry> is ftrace_caller or ftrace_regs_caller or custom
+ * trampoline.
*
* Each instrumented function follows the AAPCS, so here x0-x8 and x18-x30 are
* live (x18 holds the Shadow Call Stack pointer), and x9-x17 are safe to
@@ -327,3 +329,14 @@ SYM_CODE_START(return_to_handler)
ret
SYM_CODE_END(return_to_handler)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+SYM_FUNC_START(ftrace_dummy_tramp)
+#if IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)
+ bti j /* ftrace_dummy_tramp is called via "br x10" */
+#endif
+ mov x10, x30
+ mov x30, x9
+ ret x10
+SYM_FUNC_END(ftrace_dummy_tramp)
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index ea5dc7c90f46..a311c19bf06a 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -77,6 +77,123 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
return NULL;
}

+enum ftrace_callsite_action {
+ FC_INIT,
+ FC_REMOVE_CALL,
+ FC_ADD_CALL,
+ FC_REPLACE_CALL,
+};
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+
+/*
+ * When func is 8-byte aligned, literal_call is located at func - 8 and literal
+ * is located at func - 16:
+ *
+ * NOP
+ * literal:
+ * .quad ftrace_dummy_tramp
+ * literal_call:
+ * LDR X16, literal
+ * BR X16
+ * func:
+ * BTI C // if BTI
+ * MOV X9, LR
+ * NOP
+ *
+ * When func is not 8-byte aligned, literal_call is located at func - 8 and
+ * literal is located at func - 20:
+ *
+ * literal:
+ * .quad ftrace_dummy_tramp
+ * NOP
+ * literal_call:
+ * LDR X16, literal
+ * BR X16
+ * func:
+ * BTI C // if BTI
+ * MOV X9, LR
+ * NOP
+ */
+
+static unsigned long ftrace_literal_call_addr(struct dyn_ftrace *rec)
+{
+ return rec->arch.func - 2 * AARCH64_INSN_SIZE;
+}
+
+static unsigned long ftrace_literal_addr(struct dyn_ftrace *rec)
+{
+ unsigned long addr = 0;
+
+ addr = ftrace_literal_call_addr(rec);
+ if (addr % sizeof(long))
+ addr -= 3 * AARCH64_INSN_SIZE;
+ else
+ addr -= 2 * AARCH64_INSN_SIZE;
+
+ return addr;
+}
+
+static void ftrace_update_literal(unsigned long literal_addr, unsigned long call_target,
+ int action)
+{
+ unsigned long dummy_tramp = (unsigned long)&ftrace_dummy_tramp;
+
+ if (action == FC_INIT || action == FC_REMOVE_CALL)
+ aarch64_literal64_write((void *)literal_addr, dummy_tramp);
+ else if (action == FC_ADD_CALL)
+ aarch64_literal64_write((void *)literal_addr, call_target);
+}
+
+static int ftrace_init_literal(struct module *mod, struct dyn_ftrace *rec)
+{
+ int ret;
+ u32 old, new;
+ unsigned long addr;
+ unsigned long pc = rec->ip - AARCH64_INSN_SIZE;
+
+ old = aarch64_insn_gen_nop();
+
+ addr = ftrace_literal_addr(rec);
+ ftrace_update_literal(addr, 0, FC_INIT);
+
+ pc = ftrace_literal_call_addr(rec);
+ new = aarch64_insn_gen_load_literal(pc, addr, AARCH64_INSN_REG_16,
+ true);
+ ret = ftrace_modify_code(pc, old, new, true);
+ if (ret)
+ return ret;
+
+ pc += AARCH64_INSN_SIZE;
+ new = aarch64_insn_gen_branch_reg(AARCH64_INSN_REG_16,
+ AARCH64_INSN_BRANCH_NOLINK);
+ return ftrace_modify_code(pc, old, new, true);
+}
+
+#else
+
+static unsigned long ftrace_literal_addr(struct dyn_ftrace *rec)
+{
+ return 0;
+}
+
+static unsigned long ftrace_literal_call_addr(struct dyn_ftrace *rec)
+{
+ return 0;
+}
+
+static void ftrace_update_literal(unsigned long literal_addr, unsigned long call_target,
+ int action)
+{
+}
+
+static int ftrace_init_literal(struct module *mod, struct dyn_ftrace *rec)
+{
+ return 0;
+}
+
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
+
/*
* Find the address the callsite must branch to in order to reach '*addr'.
*
@@ -88,7 +205,8 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr)
*/
static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
struct module *mod,
- unsigned long *addr)
+ unsigned long *addr,
+ int action)
{
unsigned long pc = rec->ip;
long offset = (long)*addr - (long)pc;
@@ -101,6 +219,15 @@ static bool ftrace_find_callable_addr(struct dyn_ftrace *rec,
if (offset >= -SZ_128M && offset < SZ_128M)
return true;

+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS)) {
+ unsigned long literal_addr;
+
+ literal_addr = ftrace_literal_addr(rec);
+ ftrace_update_literal(literal_addr, *addr, action);
+ *addr = ftrace_literal_call_addr(rec);
+ return true;
+ }
+
/*
* When the target is outside of the range of a 'BL' instruction, we
* must use a PLT to reach it. We can only place PLTs for modules, and
@@ -145,7 +272,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
unsigned long pc = rec->ip;
u32 old, new;

- if (!ftrace_find_callable_addr(rec, NULL, &addr))
+ if (!ftrace_find_callable_addr(rec, NULL, &addr, FC_ADD_CALL))
return -EINVAL;

old = aarch64_insn_gen_nop();
@@ -161,9 +288,9 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long pc = rec->ip;
u32 old, new;

- if (!ftrace_find_callable_addr(rec, NULL, &old_addr))
+ if (!ftrace_find_callable_addr(rec, NULL, &old_addr, FC_REPLACE_CALL))
return -EINVAL;
- if (!ftrace_find_callable_addr(rec, NULL, &addr))
+ if (!ftrace_find_callable_addr(rec, NULL, &addr, FC_ADD_CALL))
return -EINVAL;

old = aarch64_insn_gen_branch_imm(pc, old_addr,
@@ -188,18 +315,26 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
* | NOP | MOV X9, LR | MOV X9, LR |
* | NOP | NOP | BL <entry> |
*
- * The LR value will be recovered by ftrace_regs_entry, and restored into LR
- * before returning to the regular function prologue. When a function is not
- * being traced, the MOV is not harmful given x9 is not live per the AAPCS.
+ * The LR value will be recovered by ftrace_regs_entry or custom trampoline,
+ * and restored into LR before returning to the regular function prologue.
+ * When a function is not being traced, the MOV is not harmful given x9 is
+ * not live per the AAPCS.
*
* Note: ftrace_process_locs() has pre-adjusted rec->ip to be the address of
* the BL.
*/
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
{
+ int ret;
unsigned long pc = rec->ip - AARCH64_INSN_SIZE;
u32 old, new;

+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS)) {
+ ret = ftrace_init_literal(mod, rec);
+ if (ret)
+ return ret;
+ }
+
old = aarch64_insn_gen_nop();
new = aarch64_insn_gen_move_reg(AARCH64_INSN_REG_9,
AARCH64_INSN_REG_LR,
@@ -208,6 +343,45 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
}
#endif

+unsigned long ftrace_call_adjust(unsigned long addr)
+{
+ if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS)) {
+ u32 insn;
+ u32 nop = aarch64_insn_gen_nop();
+
+ /* Skip the first 5 NOPS */
+ addr += 5 * AARCH64_INSN_SIZE;
+
+ if (aarch64_insn_read((void *)addr, &insn))
+ return 0;
+
+ if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL)) {
+ if (insn != nop) {
+ addr += AARCH64_INSN_SIZE;
+ if (aarch64_insn_read((void *)addr, &insn))
+ return 0;
+ }
+ }
+
+ if (WARN_ON_ONCE(insn != nop))
+ return 0;
+
+ return addr + AARCH64_INSN_SIZE;
+ } else if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) {
+ /*
+ * Adjust addr to point at the BL in the callsite.
+ * See ftrace_init_nop() for the callsite sequence.
+ */
+ return addr + AARCH64_INSN_SIZE;
+ }
+
+ /*
+ * addr is the address of the mcount call instruction.
+ * recordmcount does the necessary offset calculation.
+ */
+ return addr;
+}
+
/*
* Turn off the call to ftrace_caller() in instrumented function
*/
@@ -217,7 +391,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
unsigned long pc = rec->ip;
u32 old = 0, new;

- if (!ftrace_find_callable_addr(rec, mod, &addr))
+ if (!ftrace_find_callable_addr(rec, mod, &addr, FC_REMOVE_CALL))
return -EINVAL;

old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
@@ -231,6 +405,14 @@ void arch_ftrace_update_code(int command)
command |= FTRACE_MAY_SLEEP;
ftrace_modify_all_code(command);
}
+
+void ftrace_rec_arch_init(struct dyn_ftrace *rec, unsigned long func)
+{
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ rec->arch.func = func + 5 * AARCH64_INSN_SIZE;
+#endif
+}
+
#endif /* CONFIG_DYNAMIC_FTRACE */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 33e0fabc0b79..3a4326c1ca80 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -83,6 +83,20 @@ static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
return ret;
}

+void __kprobes aarch64_literal64_write(void *addr, u64 data)
+{
+ u64 *waddr;
+ unsigned long flags = 0;
+
+ raw_spin_lock_irqsave(&patch_lock, flags);
+ waddr = patch_map(addr, FIX_TEXT_POKE0);
+
+ WRITE_ONCE(*waddr, data);
+
+ patch_unmap(FIX_TEXT_POKE0);
+ raw_spin_unlock_irqrestore(&patch_lock, flags);
+}
+
int __kprobes aarch64_insn_write(void *addr, u32 insn)
{
return __aarch64_insn_write(addr, cpu_to_le32(insn));
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 34d78ca16beb..e42955b78174 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -625,6 +625,9 @@ static int emit_ll_sc_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
return 0;
}

+#ifdef ftrace_dummy_tramp
+#define dummy_tramp ftrace_dummy_tramp
+#else
void dummy_tramp(void);

asm (
@@ -641,6 +644,7 @@ asm (
" .size dummy_tramp, .-dummy_tramp\n"
" .popsection\n"
);
+#endif

/* build a plt initialized like this:
*
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 0b61371e287b..d5a385453b17 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -566,6 +566,8 @@ struct dyn_ftrace {
struct dyn_arch_ftrace arch;
};

+void ftrace_rec_arch_init(struct dyn_ftrace *rec, unsigned long addr);
+
int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
int remove, int reset);
int ftrace_set_filter_ips(struct ftrace_ops *ops, unsigned long *ips,
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index bc921a3f7ea8..4e5b5aa9812b 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6664,6 +6664,10 @@ static void test_is_sorted(unsigned long *start, unsigned long count)
}
#endif

+void __weak ftrace_rec_arch_init(struct dyn_ftrace *rec, unsigned long addr)
+{
+}
+
static int ftrace_process_locs(struct module *mod,
unsigned long *start,
unsigned long *end)
@@ -6726,7 +6730,9 @@ static int ftrace_process_locs(struct module *mod,
pg = start_pg;
while (p < end) {
unsigned long end_offset;
- addr = ftrace_call_adjust(*p++);
+ unsigned long nop_addr = *p++;
+
+ addr = ftrace_call_adjust(nop_addr);
/*
* Some architecture linkers will pad between
* the different mcount_loc sections of different
@@ -6746,6 +6752,7 @@ static int ftrace_process_locs(struct module *mod,

rec = &pg->records[pg->index++];
rec->ip = addr;
+ ftrace_rec_arch_init(rec, nop_addr);
}

/* We should have used all pages */
--
2.30.2

2022-09-13 17:57:19

by Xu Kuohai

Subject: [PATCH bpf-next v2 1/4] ftrace: Allow users to disable ftrace direct call

From: Xu Kuohai <[email protected]>

To support ftrace direct calls on arm64, multiple NOP instructions need
to be added to the ftrace fentry, which makes the kernel image larger.
Users who don't need direct calls should not have to pay this unnecessary
price, so allow them to disable this option.

Signed-off-by: Xu Kuohai <[email protected]>
---
kernel/trace/Kconfig | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 1052126bdca2..fc8a22f1a6a0 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -240,9 +240,14 @@ config DYNAMIC_FTRACE_WITH_REGS
depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
- def_bool y
+ bool "Support for calling custom trampoline from fentry directly"
+ default y
depends on DYNAMIC_FTRACE_WITH_REGS
depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ help
+ This option enables calling custom trampoline from ftrace fentry
+ directly, instead of using ftrace regs caller. This may reserve more
+ space in the fentry, making the kernel image larger.

config DYNAMIC_FTRACE_WITH_ARGS
def_bool y
--
2.30.2

2022-09-22 18:16:20

by Daniel Borkmann

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On 9/13/22 6:27 PM, Xu Kuohai wrote:
> This series adds ftrace direct call for arm64, which is required to attach
> bpf trampoline to fentry.
>
> Although there is no agreement on how to support ftrace direct call on arm64,
> no patch has been posted except the one I posted in [1], so this series
> continues the work of [1] with the addition of long jump support. Now ftrace
> direct call works regardless of the distance between the callsite and custom
> trampoline.
>
> [1] https://lore.kernel.org/bpf/[email protected]/
>
> v2:
> - Fix compile and runtime errors caused by ftrace_rec_arch_init
>
> v1: https://lore.kernel.org/bpf/[email protected]/
>
> Xu Kuohai (4):
> ftrace: Allow users to disable ftrace direct call
> arm64: ftrace: Support long jump for ftrace direct call
> arm64: ftrace: Add ftrace direct call support
> ftrace: Fix dead loop caused by direct call in ftrace selftest

Given there's just a tiny fraction touching BPF JIT and most are around core arm64,
it probably makes sense that this series goes via Catalin/Will through the arm64 tree
instead of bpf-next if it looks good to them. Catalin/Will, thoughts (Ack + bpf-next
could work too, but I'd presume this just results in merge conflicts)?

> arch/arm64/Kconfig | 2 +
> arch/arm64/Makefile | 4 +
> arch/arm64/include/asm/ftrace.h | 35 ++++--
> arch/arm64/include/asm/patching.h | 2 +
> arch/arm64/include/asm/ptrace.h | 6 +-
> arch/arm64/kernel/asm-offsets.c | 1 +
> arch/arm64/kernel/entry-ftrace.S | 39 ++++--
> arch/arm64/kernel/ftrace.c | 198 ++++++++++++++++++++++++++++--
> arch/arm64/kernel/patching.c | 14 +++
> arch/arm64/net/bpf_jit_comp.c | 4 +
> include/linux/ftrace.h | 2 +
> kernel/trace/Kconfig | 7 +-
> kernel/trace/ftrace.c | 9 +-
> kernel/trace/trace_selftest.c | 2 +
> 14 files changed, 296 insertions(+), 29 deletions(-)

Thanks,
Daniel

2022-09-26 17:15:17

by Catalin Marinas

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Thu, Sep 22, 2022 at 08:01:16PM +0200, Daniel Borkmann wrote:
> On 9/13/22 6:27 PM, Xu Kuohai wrote:
> > This series adds ftrace direct call for arm64, which is required to attach
> > bpf trampoline to fentry.
> >
> > Although there is no agreement on how to support ftrace direct call on arm64,
> > no patch has been posted except the one I posted in [1], so this series
> > continues the work of [1] with the addition of long jump support. Now ftrace
> > direct call works regardless of the distance between the callsite and custom
> > trampoline.
> >
> > [1] https://lore.kernel.org/bpf/[email protected]/
> >
> > v2:
> > - Fix compile and runtime errors caused by ftrace_rec_arch_init
> >
> > v1: https://lore.kernel.org/bpf/[email protected]/
> >
> > Xu Kuohai (4):
> > ftrace: Allow users to disable ftrace direct call
> > arm64: ftrace: Support long jump for ftrace direct call
> > arm64: ftrace: Add ftrace direct call support
> > ftrace: Fix dead loop caused by direct call in ftrace selftest
>
> Given there's just a tiny fraction touching BPF JIT and most are around core arm64,
> it probably makes sense that this series goes via Catalin/Will through arm64 tree
> instead of bpf-next if it looks good to them. Catalin/Will, thoughts (Ack + bpf-next
> could work too, but I'd presume this just results in merge conflicts)?

I think it makes sense for the series to go via the arm64 tree but I'd
like Mark to have a look at the ftrace changes first.

Thanks.

--
Catalin

2022-09-26 19:04:29

by Mark Rutland

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Mon, Sep 26, 2022 at 03:40:20PM +0100, Catalin Marinas wrote:
> On Thu, Sep 22, 2022 at 08:01:16PM +0200, Daniel Borkmann wrote:
> > On 9/13/22 6:27 PM, Xu Kuohai wrote:
> > > This series adds ftrace direct call for arm64, which is required to attach
> > > bpf trampoline to fentry.
> > >
> > > Although there is no agreement on how to support ftrace direct call on arm64,
> > > no patch has been posted except the one I posted in [1], so this series
> > > continues the work of [1] with the addition of long jump support. Now ftrace
> > > direct call works regardless of the distance between the callsite and custom
> > > trampoline.
> > >
> > > [1] https://lore.kernel.org/bpf/[email protected]/
> > >
> > > v2:
> > > - Fix compile and runtime errors caused by ftrace_rec_arch_init
> > >
> > > v1: https://lore.kernel.org/bpf/[email protected]/
> > >
> > > Xu Kuohai (4):
> > > ftrace: Allow users to disable ftrace direct call
> > > arm64: ftrace: Support long jump for ftrace direct call
> > > arm64: ftrace: Add ftrace direct call support
> > > ftrace: Fix dead loop caused by direct call in ftrace selftest
> >
> > Given there's just a tiny fraction touching BPF JIT and most are around core arm64,
> > it probably makes sense that this series goes via Catalin/Will through arm64 tree
> > instead of bpf-next if it looks good to them. Catalin/Will, thoughts (Ack + bpf-next
> > could work too, but I'd presume this just results in merge conflicts)?
>
> I think it makes sense for the series to go via the arm64 tree but I'd
> like Mark to have a look at the ftrace changes first.

From a quick scan, I still don't think this is quite right, and as it stands I
believe this will break backtracing (as the instructions before the function
entry point will not be symbolized correctly, getting in the way of
RELIABLE_STACKTRACE). I think I was insufficiently clear with my earlier
feedback there, as I have a mechanism in mind that was a little simpler.

I'll try to reply with some more detail tomorrow, but I don't think this is the
right approach, and as mentioned previously (and e.g. at LPC) I'd strongly
prefer to *not* implement direct calls, so that we can have more consistent
entry/exit handling.

Thanks,
Mark.

2022-09-28 17:29:22

by Mark Rutland

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Tue, Sep 27, 2022 at 12:49:58PM +0800, Xu Kuohai wrote:
> On 9/27/2022 1:43 AM, Mark Rutland wrote:
> > On Mon, Sep 26, 2022 at 03:40:20PM +0100, Catalin Marinas wrote:
> > > On Thu, Sep 22, 2022 at 08:01:16PM +0200, Daniel Borkmann wrote:
> > > > On 9/13/22 6:27 PM, Xu Kuohai wrote:
> > > > > This series adds ftrace direct call for arm64, which is required to attach
> > > > > bpf trampoline to fentry.
> > > > >
> > > > > Although there is no agreement on how to support ftrace direct call on arm64,
> > > > > no patch has been posted except the one I posted in [1], so this series
> > > > > continues the work of [1] with the addition of long jump support. Now ftrace
> > > > > direct call works regardless of the distance between the callsite and custom
> > > > > trampoline.
> > > > >
> > > > > [1] https://lore.kernel.org/bpf/[email protected]/
> > > > >
> > > > > v2:
> > > > > - Fix compile and runtime errors caused by ftrace_rec_arch_init
> > > > >
> > > > > v1: https://lore.kernel.org/bpf/[email protected]/
> > > > >
> > > > > Xu Kuohai (4):
> > > > > ftrace: Allow users to disable ftrace direct call
> > > > > arm64: ftrace: Support long jump for ftrace direct call
> > > > > arm64: ftrace: Add ftrace direct call support
> > > > > ftrace: Fix dead loop caused by direct call in ftrace selftest
> > > >
> > > > Given there's just a tiny fraction touching BPF JIT and most are around core arm64,
> > > > it probably makes sense that this series goes via Catalin/Will through arm64 tree
> > > > instead of bpf-next if it looks good to them. Catalin/Will, thoughts (Ack + bpf-next
> > > > could work too, but I'd presume this just results in merge conflicts)?
> > >
> > > I think it makes sense for the series to go via the arm64 tree but I'd
> > > like Mark to have a look at the ftrace changes first.
> >
> > > From a quick scan, I still don't think this is quite right, and as it stands I
> > believe this will break backtracing (as the instructions before the function
> > entry point will not be symbolized correctly, getting in the way of
> > RELIABLE_STACKTRACE). I think I was insufficiently clear with my earlier
> > feedback there, as I have a mechanism in mind that was a little simpler.
>
> Thanks for the review. I have some thoughts about reliable stacktrace.
>
> If PC is not in the range of literal_call, stacktrace works as before without
> changes.
>
> If PC is in the range of literal_call, for example, interrupted by an
> irq, I think there are 2 problems:
>
> 1. Caller LR is not pushed to the stack yet, so caller's address and name
> will be missing from the backtrace.
>
> 2. Since PC is not in func's address range, no symbol name will be found, so
> func name is also missing.
>
> Problem 1 is not introduced by this patchset, but the occurring probability
> may be increased by this patchset. I think this problem should be addressed by
> a reliable stacktrace scheme, such as ORC on x86.

I agree problem 1 is not introduced by this patch set; I have plans for how to
address that for reliable stacktrace based on identifying the ftrace
trampoline. This is one of the reasons I do not want direct calls, as
identifying all direct call trampolines is going to be very painful and slow,
whereas identifying a statically allocated ftrace trampoline is far simpler.

> Problem 2 is indeed introduced by this patchset. I think there are at least 3
> ways to deal with it:

What I would like to do here, as mentioned previously in other threads, is to
avoid direct calls, and implement "FTRACE_WITH_OPS", where we can associate
each patch-site with a specific set of ops, and invoke that directly from the
regular ftrace trampoline.

With that, the patch site would look like:

pre_func_literal:
	NOP			// Patched to a pointer to
	NOP			// ftrace_ops
func:
	< optional BTI here >
	NOP			// Patched to MOV X9, LR
	NOP			// Patched to a BL to the ftrace trampoline

... then in the ftrace trampoline we can recover the ops pointer from a
negative offset based on the LR, and invoke the ops from there (passing a
struct ftrace_regs with the saved regs).

That way the patch-site is less significantly affected, and there's no impact
to backtracing. That gets most of the benefit of direct calls (avoiding the
ftrace ops list traversal) without having to do anything special at all. That
should be much easier to maintain, too.
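
A rough sketch of that lookup, assuming the ops pointer occupies the two NOPs
at pre_func_literal, that there is no BTI landing pad, and that the LR seen in
the trampoline points at the instruction after the patched BL
(arch_ftrace_get_ops() is just an illustrative name):

/*
 * Illustrative only: recover the ftrace_ops pointer stored in the two
 * NOPs before the function entry, starting from the LR produced by the
 * patched BL. The exact offsets depend on whether a BTI landing pad is
 * present; AARCH64_INSN_SIZE is 4 bytes.
 */
static struct ftrace_ops *arch_ftrace_get_ops(unsigned long lr)
{
	/* lr points just past the BL; func is two instructions earlier */
	unsigned long func = lr - 2 * AARCH64_INSN_SIZE;

	/* the 8-byte ops pointer sits immediately before func */
	return *(struct ftrace_ops **)(func - sizeof(struct ftrace_ops *));
}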

I started implementing that before LPC (and you can find some branches on my
kernel.org repo), but I haven't yet had the time to rebase those and sort out
the remaining issues:

https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace/per-callsite-ops

Note that as a prerequisite for that I also want to reduce the set of registers
we save/restore down to the set required by our calling convention, as the
existing pt_regs is both large and generally unsound (since we can not and do
not fill in many of the fields we only acquire at an exception boundary).
That'll further reduce the ftrace overhead generally, and remove the need for
the two trampolines we currently have. I have a WIP at:

https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace/minimal-regs

I intend to get back to both of those shortly (along with some related bits for
kretprobes and stacktracing); I just haven't had much time recently due to
other work and illness.

> 1. Add a symbol name for literal_call.

That'll require a number of invasive changes to make RELIABLE_STACKTRACE work,
so I don't think we want to do that.

> 2. Hack the backtrace routine, if no symbol name found for a PC during backtrace,
> we can check if the PC is in literal_call, then adjust PC and try again.

The problem is that the existing symbolization code doesn't know the length of
the prior symbol, so it will find *some* symbol associated with the previous
function rather than finding no symbol.

To bodge around this we'd need to special-case each patchable-function-entry
site in symbolization, which is going to be painful and slow down unwinding
unless we try to fix this up at boot-time or compile time.

> 3. Move literal_call to the func's address range, for example:
>
> a. Compile with -fpatchable-function-entry=7
> func:
> BTI C
> NOP
> NOP
> NOP
> NOP
> NOP
> NOP
> NOP

This is a non-starter. We are not going to add 7 NOPs at the start of every
function.

Thanks,
Mark.

2022-10-04 16:57:24

by Florent Revest

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Fri, Sep 30, 2022 at 6:07 AM Xu Kuohai <[email protected]> wrote:
>
> On 9/29/2022 12:42 AM, Mark Rutland wrote:
> > On Tue, Sep 27, 2022 at 12:49:58PM +0800, Xu Kuohai wrote:
> >> On 9/27/2022 1:43 AM, Mark Rutland wrote:
> >>> On Mon, Sep 26, 2022 at 03:40:20PM +0100, Catalin Marinas wrote:
> >>>> On Thu, Sep 22, 2022 at 08:01:16PM +0200, Daniel Borkmann wrote:
> >>>>> On 9/13/22 6:27 PM, Xu Kuohai wrote:
> >>>>>> This series adds ftrace direct call for arm64, which is required to attach
> >>>>>> bpf trampoline to fentry.
> >>>>>>
> >>>>>> Although there is no agreement on how to support ftrace direct call on arm64,
> >>>>>> no patch has been posted except the one I posted in [1], so this series

Hey Xu :) Sorry I wasn't more pro-active about communicating what I
was experimenting with! A lot of conversations happened off-list
at LPC and LSS, so I was playing on the side with the ideas that got
suggested to me. I'm starting to have a little something to share.
Hopefully, if we work closer together now, we can get quicker results.

> >>>>>> continues the work of [1] with the addition of long jump support. Now ftrace
> >>>>>> direct call works regardless of the distance between the callsite and custom
> >>>>>> trampoline.
> >>>>>>
> >>>>>> [1] https://lore.kernel.org/bpf/[email protected]/
> >>>>>>
> >>>>>> v2:
> >>>>>> - Fix compile and runtime errors caused by ftrace_rec_arch_init
> >>>>>>
> >>>>>> v1: https://lore.kernel.org/bpf/[email protected]/
> >>>>>>
> >>>>>> Xu Kuohai (4):
> >>>>>> ftrace: Allow users to disable ftrace direct call
> >>>>>> arm64: ftrace: Support long jump for ftrace direct call
> >>>>>> arm64: ftrace: Add ftrace direct call support
> >>>>>> ftrace: Fix dead loop caused by direct call in ftrace selftest
> >>>>>
> >>>>> Given there's just a tiny fraction touching BPF JIT and most are around core arm64,
> >>>>> it probably makes sense that this series goes via Catalin/Will through arm64 tree
> >>>>> instead of bpf-next if it looks good to them. Catalin/Will, thoughts (Ack + bpf-next
> >>>>> could work too, but I'd presume this just results in merge conflicts)?
> >>>>
> >>>> I think it makes sense for the series to go via the arm64 tree but I'd
> >>>> like Mark to have a look at the ftrace changes first.
> >>>
> >>>> From a quick scan, I still don't think this is quite right, and as it stands I
> >>> believe this will break backtracing (as the instructions before the function
> >>> entry point will not be symbolized correctly, getting in the way of
> >>> RELIABLE_STACKTRACE). I think I was insufficiently clear with my earlier
> >>> feedback there, as I have a mechanism in mind that was a little simpler.
> >>
> >> Thanks for the review. I have some thoughts about reliable stacktrace.
> >>
> >> If PC is not in the range of literal_call, stacktrace works as before without
> >> changes.
> >>
> >> If PC is in the range of literal_call, for example, interrupted by an
> >> irq, I think there are 2 problems:
> >>
> >> 1. Caller LR is not pushed to the stack yet, so caller's address and name
> >> will be missing from the backtrace.
> >>
> >> 2. Since PC is not in func's address range, no symbol name will be found, so
> >> func name is also missing.
> >>
> >> Problem 1 is not introduced by this patchset, but the occurring probability
> >> may be increased by this patchset. I think this problem should be addressed by
> >> a reliable stacktrace scheme, such as ORC on x86.
> >
> > I agree problem 1 is not introduced by this patch set; I have plans for how to
> > address that for reliable stacktrace based on identifying the ftrace
> > trampoline. This is one of the reasons I do not want direct calls, as
> > identifying all direct call trampolines is going to be very painful and slow,
> > whereas identifying a statically allocated ftrace trampoline is far simpler.
> >
> >> Problem 2 is indeed introduced by this patchset. I think there are at least 3
> >> ways to deal with it:
> >
> > What I would like to do here, as mentioned previously in other threads, is to
> > avoid direct calls, and implement "FTRACE_WITH_OPS", where we can associate
> > each patch-site with a specific set of ops, and invoke that directly from the
> > regular ftrace trampoline.
> >
> > With that, the patch site would look like:
> >
> > pre_func_literal:
> > NOP // Patched to a pointer to
> > NOP // ftrace_ops
> > func:
> > < optional BTI here >
> > NOP // Patched to MOV X9, LR
> > NOP // Patched to a BL to the ftrace trampoline
> >
> > ... then in the ftrace trampoline we can recover the ops pointer at a negative
> > offset from the LR based on the LR, and invoke the ops from there (passing a
> > struct ftrace_regs with the saved regs).
> >
> > That way the patch-site is less significantly affected, and there's no impact
> > to backtracing. That gets most of the benefit of the direct calls avoiding the
> > ftrace ops list traversal, without having to do anything special at all. That
> > should be much easier to maintain, too.
> >
> > I started implementing that before LPC (and you can find some branches on my
> > kernel.org repo), but I haven't yet had the time to rebase those and sort out
> > the remaining issues:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace/per-callsite-ops
> >
>
> I've read this code before, but it doesn't run and since you haven't updated

I also tried to use this but indeed the "TODO: mess with protection to
set this" in 5437aa788d needs to be addressed before we can use it.

> it, I assumed you dropped it :(
>
> This approach seems appropriate for dynamic ftrace trampolines, but I think
> there are two more issues for bpf.
>
> 1. bpf trampoline was designed to be called directly from fentry (located in
> kernel function or bpf prog). So to make it work as ftrace_op, we may end
> up with two different bpf trampoline types on arm64, one for bpf prog and
> the other for ftrace.
>
> 2. Performance overhead, as we always jump to a static ftrace trampoline to
> construct execution environment for bpf trampoline, then jump to the bpf
> trampoline to construct execution environment for bpf prog, then jump to
> the bpf prog, so for some small bpf progs or hot functions, the calling
> overhead may be unacceptable.

From the conversations I've had at LPC, Steven, Mark, Jiri and Masami
(all in CC) would like to see an ftrace ops based solution (or rather,
something that doesn't require direct calls) for invoking BPF tracing
programs. I figured that the best way to move forward on the question
of whether the performance impact of that would be acceptable or not
is to just build it and measure it. I understand you're testing your
work on real hardware (I work on an emulator at the moment); would
you be able to compare the impact of my proof-of-concept branch with
your direct call based approach?

https://github.com/FlorentRevest/linux/commits/fprobe-min-args

I first tried to implement this as an ftrace op myself but realized I
was re-implementing a lot of the function graph tracer. So I then
tried to use the function graph tracer API but realized I was missing
some features which Steven had addressed in an RFC a few years back. So
I rebuilt on that until I realized Masami has been upstreaming the
fprobe and rethook APIs as spiritual successors of Steven's RFC... So
I've now rebuilt yet another proof of concept based on fprobe and
rethook.

That branch is still very much WIP and there are a few things I'd like
to address before sending even an RFC (when kretprobe is built on
rethook for example, I construct pt_regs on the stack in which I copy
the content of ftrace_regs... or program linking/unlinking is racy
right now) but I think it's good enough for performance measurements
already. (fentry_fexit and lsm tests pass)

> > Note that as a prerequisite for that I also want to reduce the set of registers
> > we save/restore down to the set required by our calling convention, as the
> > existing pt_regs is both large and generally unsound (since we can not and do
> > not fill in many of the fields we only acquire at an exception boundary).
> > That'll further reduce the ftrace overhead generally, and remove the need for
> > the two trampolines we currently have. I have a WIP at:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace/minimal-regs

Note that I integrated this work into my branch too. I extended it to
also have fprobe and rethook save and pass ftrace_regs structures to
their callbacks. Most performance improvements would come from your
arm64/ftrace/per-callsite-ops branch but we'd need to fix the above
TODO for it to work.

> > I intend to get back to both of those shortly (along with some related bits for
> > kretprobes and stacktracing); I just haven't had much time recently due to
> > other work and illness.
> >
>
> Sorry for that, hope you're getting better soon.

Oh, that sucks. Get better Mark!

2022-10-05 15:15:35

by Florent Revest

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Wed, Oct 5, 2022 at 5:07 PM Steven Rostedt <[email protected]> wrote:
>
> On Wed, 5 Oct 2022 22:54:15 +0800
> Xu Kuohai <[email protected]> wrote:
>
> > 1.3 attach bpf prog with direct call, bpftrace -e 'kfunc:vfs_write {}'
> >
> > # dd if=/dev/zero of=/dev/null count=1000000
> > 1000000+0 records in
> > 1000000+0 records out
> > 512000000 bytes (512 MB, 488 MiB) copied, 1.72973 s, 296 MB/s
> >
> >
> > 1.4 attach bpf prog with indirect call, bpftrace -e 'kfunc:vfs_write {}'
> >
> > # dd if=/dev/zero of=/dev/null count=1000000
> > 1000000+0 records in
> > 1000000+0 records out
> > 512000000 bytes (512 MB, 488 MiB) copied, 1.99179 s, 257 MB/s

Thanks for the measurements Xu!

> Can you show the implementation of the indirect call you used?

Xu used my development branch here
https://github.com/FlorentRevest/linux/commits/fprobe-min-args

As it stands, the performance impact of the fprobe based
implementation would be too high for us. I wonder how much Mark's idea
here https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace/per-callsite-ops
would help but it doesn't work right now.

2022-10-05 15:32:12

by Steven Rostedt

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Wed, 5 Oct 2022 22:54:15 +0800
Xu Kuohai <[email protected]> wrote:

> 1.3 attach bpf prog with direct call, bpftrace -e 'kfunc:vfs_write {}'
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 1.72973 s, 296 MB/s
>
>
> 1.4 attach bpf prog with indirect call, bpftrace -e 'kfunc:vfs_write {}'
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 1.99179 s, 257 MB/s

Can you show the implementation of the indirect call you used?

Thanks,

-- Steve

2022-10-05 15:37:27

by Steven Rostedt

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Wed, 5 Oct 2022 17:10:33 +0200
Florent Revest <[email protected]> wrote:

> On Wed, Oct 5, 2022 at 5:07 PM Steven Rostedt <[email protected]> wrote:
> >
> > On Wed, 5 Oct 2022 22:54:15 +0800
> > Xu Kuohai <[email protected]> wrote:
> >
> > > 1.3 attach bpf prog with direct call, bpftrace -e 'kfunc:vfs_write {}'
> > >
> > > # dd if=/dev/zero of=/dev/null count=1000000
> > > 1000000+0 records in
> > > 1000000+0 records out
> > > 512000000 bytes (512 MB, 488 MiB) copied, 1.72973 s, 296 MB/s
> > >
> > >
> > > 1.4 attach bpf prog with indirect call, bpftrace -e 'kfunc:vfs_write {}'
> > >
> > > # dd if=/dev/zero of=/dev/null count=1000000
> > > 1000000+0 records in
> > > 1000000+0 records out
> > > 512000000 bytes (512 MB, 488 MiB) copied, 1.99179 s, 257 MB/s
>
> Thanks for the measurements Xu!
>
> > Can you show the implementation of the indirect call you used?
>
> Xu used my development branch here
> https://github.com/FlorentRevest/linux/commits/fprobe-min-args

That looks like it could be optimized quite a bit too.

Specifically this part:

static bool bpf_fprobe_entry(struct fprobe *fp, unsigned long ip, struct ftrace_regs *regs, void *private)
{
	struct bpf_fprobe_call_context *call_ctx = private;
	struct bpf_fprobe_context *fprobe_ctx = fp->ops.private;
	struct bpf_tramp_links *links = fprobe_ctx->links;
	struct bpf_tramp_links *fentry = &links[BPF_TRAMP_FENTRY];
	struct bpf_tramp_links *fmod_ret = &links[BPF_TRAMP_MODIFY_RETURN];
	struct bpf_tramp_links *fexit = &links[BPF_TRAMP_FEXIT];
	int i, ret;

	memset(&call_ctx->ctx, 0, sizeof(call_ctx->ctx));
	call_ctx->ip = ip;
	for (i = 0; i < fprobe_ctx->nr_args; i++)
		call_ctx->args[i] = ftrace_regs_get_argument(regs, i);

	for (i = 0; i < fentry->nr_links; i++)
		call_bpf_prog(fentry->links[i], &call_ctx->ctx, call_ctx->args);

	call_ctx->args[fprobe_ctx->nr_args] = 0;
	for (i = 0; i < fmod_ret->nr_links; i++) {
		ret = call_bpf_prog(fmod_ret->links[i], &call_ctx->ctx,
				    call_ctx->args);

		if (ret) {
			ftrace_regs_set_return_value(regs, ret);
			ftrace_override_function_with_return(regs);

			bpf_fprobe_exit(fp, ip, regs, private);
			return false;
		}
	}

	return fexit->nr_links;
}

There's a lot of low-hanging fruit to speed up there. I wouldn't be too
quick to throw out this solution when it hasn't yet had the care that
direct calls have had to speed them up.

For example, trampolines currently only allow attaching to functions with 6
parameters or fewer (3 on x86_32). You could make 7 specific callbacks, with
zero to 6 parameters, and unroll the argument loop, as in the sketch below.
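
For instance, a hedged sketch of that unrolling against the bpf_fprobe_entry()
above (bpf_fprobe_entry_args2() and __bpf_fprobe_run_progs() are hypothetical
names; the other helpers are the ones from the branch being discussed):

/*
 * Hypothetical sketch of the unrolling suggested above: one entry
 * callback per argument count, so the per-call argument loop and the
 * memset disappear for the common cases.
 */
static bool bpf_fprobe_entry_args2(struct fprobe *fp, unsigned long ip,
				   struct ftrace_regs *regs, void *private)
{
	struct bpf_fprobe_call_context *call_ctx = private;

	call_ctx->ip = ip;
	call_ctx->args[0] = ftrace_regs_get_argument(regs, 0);
	call_ctx->args[1] = ftrace_regs_get_argument(regs, 1);
	call_ctx->args[2] = 0;	/* slot used by fmod_ret for the return value */

	/* hypothetical helper holding the fentry/fmod_ret/fexit loops */
	return __bpf_fprobe_run_progs(fp, ip, regs, call_ctx);
}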

It would also be interesting to run perf to see where the overhead is. There
may be other locations to work on to make it almost as fast as direct
calls without the other baggage.

-- Steve

>
> As it stands, the performance impact of the fprobe based
> implementation would be too high for us. I wonder how much Mark's idea
> here https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/ftrace/per-callsite-ops
> would help but it doesn't work right now.

2022-10-05 22:15:26

by Jiri Olsa

Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Wed, Oct 05, 2022 at 11:30:19AM -0400, Steven Rostedt wrote:
> On Wed, 5 Oct 2022 17:10:33 +0200
> Florent Revest <[email protected]> wrote:
>
> > On Wed, Oct 5, 2022 at 5:07 PM Steven Rostedt <[email protected]> wrote:
> > >
> > > On Wed, 5 Oct 2022 22:54:15 +0800
> > > Xu Kuohai <[email protected]> wrote:
> > >
> > > > 1.3 attach bpf prog with direct call, bpftrace -e 'kfunc:vfs_write {}'
> > > >
> > > > # dd if=/dev/zero of=/dev/null count=1000000
> > > > 1000000+0 records in
> > > > 1000000+0 records out
> > > > 512000000 bytes (512 MB, 488 MiB) copied, 1.72973 s, 296 MB/s
> > > >
> > > >
> > > > 1.4 attach bpf prog with indirect call, bpftrace -e 'kfunc:vfs_write {}'
> > > >
> > > > # dd if=/dev/zero of=/dev/null count=1000000
> > > > 1000000+0 records in
> > > > 1000000+0 records out
> > > > 512000000 bytes (512 MB, 488 MiB) copied, 1.99179 s, 257 MB/s
> >
> > Thanks for the measurements Xu!
> >
> > > Can you show the implementation of the indirect call you used?
> >
> > Xu used my development branch here
> > https://github.com/FlorentRevest/linux/commits/fprobe-min-args

nice :) I guess you did not try to run it on x86; I had to add some small
changes and disable HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS to compile it.

>
> That looks like it could be optimized quite a bit too.
>
> Specifically this part:
>
> static bool bpf_fprobe_entry(struct fprobe *fp, unsigned long ip, struct ftrace_regs *regs, void *private)
> {
> struct bpf_fprobe_call_context *call_ctx = private;
> struct bpf_fprobe_context *fprobe_ctx = fp->ops.private;
> struct bpf_tramp_links *links = fprobe_ctx->links;
> struct bpf_tramp_links *fentry = &links[BPF_TRAMP_FENTRY];
> struct bpf_tramp_links *fmod_ret = &links[BPF_TRAMP_MODIFY_RETURN];
> struct bpf_tramp_links *fexit = &links[BPF_TRAMP_FEXIT];
> int i, ret;
>
> memset(&call_ctx->ctx, 0, sizeof(call_ctx->ctx));
> call_ctx->ip = ip;
> for (i = 0; i < fprobe_ctx->nr_args; i++)
> call_ctx->args[i] = ftrace_regs_get_argument(regs, i);
>
> for (i = 0; i < fentry->nr_links; i++)
> call_bpf_prog(fentry->links[i], &call_ctx->ctx, call_ctx->args);
>
> call_ctx->args[fprobe_ctx->nr_args] = 0;
> for (i = 0; i < fmod_ret->nr_links; i++) {
> ret = call_bpf_prog(fmod_ret->links[i], &call_ctx->ctx,
> call_ctx->args);
>
> if (ret) {
> ftrace_regs_set_return_value(regs, ret);
> ftrace_override_function_with_return(regs);
>
> bpf_fprobe_exit(fp, ip, regs, private);
> return false;
> }
> }
>
> return fexit->nr_links;
> }
>
> There's a lot of low hanging fruit to speed up there. I wouldn't be too
> fast to throw out this solution if it hasn't had the care that direct calls
> have had to speed that up.
>
> For example, trampolines currently only allow to attach to functions with 6
> parameters or less (3 on x86_32). You could make 7 specific callbacks, with
> zero to 6 parameters, and unroll the argument loop.
>
> Would also be interesting to run perf to see where the overhead is. There
> may be other locations to work on to make it almost as fast as direct
> callers without the other baggage.

I can boot the change and run tests in qemu, but for some reason it
won't boot on hw, so I have just the perf report from qemu so far.

The fprobe/rethook machinery shows up in the profile as expected.

jirka


---
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 23K of event 'cpu-clock:k'
# Event count (approx.): 5841250000
#
# Overhead Command Shared Object Symbol
# ........ ....... .............................................. ..................................................
#
18.65% bench [kernel.kallsyms] [k] syscall_enter_from_user_mode
|
---syscall_enter_from_user_mode
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

13.03% bench [kernel.kallsyms] [k] seqcount_lockdep_reader_access.constprop.0
|
---seqcount_lockdep_reader_access.constprop.0
ktime_get_coarse_real_ts64
syscall_trace_enter.constprop.0
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

9.49% bench [kernel.kallsyms] [k] rethook_try_get
|
---rethook_try_get
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

8.71% bench [kernel.kallsyms] [k] rethook_recycle
|
---rethook_recycle
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

4.31% bench [kernel.kallsyms] [k] rcu_is_watching
|
---rcu_is_watching
|
|--1.49%--rethook_try_get
| fprobe_handler
| ftrace_trampoline
| __x64_sys_getpgid
| do_syscall_64
| entry_SYSCALL_64_after_hwframe
| syscall
|
|--1.10%--do_getpgid
| __x64_sys_getpgid
| do_syscall_64
| entry_SYSCALL_64_after_hwframe
| syscall
|
|--1.02%--__bpf_prog_exit
| call_bpf_prog.isra.0
| bpf_fprobe_entry
| fprobe_handler
| ftrace_trampoline
| __x64_sys_getpgid
| do_syscall_64
| entry_SYSCALL_64_after_hwframe
| syscall
|
--0.70%--__bpf_prog_enter
call_bpf_prog.isra.0
bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.94% bench [kernel.kallsyms] [k] lock_release
|
---lock_release
|
|--1.51%--call_bpf_prog.isra.0
| bpf_fprobe_entry
| fprobe_handler
| ftrace_trampoline
| __x64_sys_getpgid
| do_syscall_64
| entry_SYSCALL_64_after_hwframe
| syscall
|
--1.43%--do_getpgid
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.91% bench bpf_prog_21856463590f61f1_bench_trigger_fentry [k] bpf_prog_21856463590f61f1_bench_trigger_fentry
|
---bpf_prog_21856463590f61f1_bench_trigger_fentry
|
--2.66%--call_bpf_prog.isra.0
bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.69% bench [kernel.kallsyms] [k] bpf_fprobe_entry
|
---bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.60% bench [kernel.kallsyms] [k] lock_acquire
|
---lock_acquire
|
|--1.34%--__bpf_prog_enter
| call_bpf_prog.isra.0
| bpf_fprobe_entry
| fprobe_handler
| ftrace_trampoline
| __x64_sys_getpgid
| do_syscall_64
| entry_SYSCALL_64_after_hwframe
| syscall
|
--1.24%--do_getpgid
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.42% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode_prepare
|
---syscall_exit_to_user_mode_prepare
syscall_exit_to_user_mode
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.37% bench [kernel.kallsyms] [k] __audit_syscall_entry
|
---__audit_syscall_entry
syscall_trace_enter.constprop.0
do_syscall_64
entry_SYSCALL_64_after_hwframe
|
--2.36%--syscall

2.35% bench [kernel.kallsyms] [k] syscall_trace_enter.constprop.0
|
---syscall_trace_enter.constprop.0
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.12% bench [kernel.kallsyms] [k] check_preemption_disabled
|
---check_preemption_disabled
|
--1.55%--rcu_is_watching
|
--0.59%--do_getpgid
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

2.00% bench [kernel.kallsyms] [k] fprobe_handler
|
---fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

1.94% bench [kernel.kallsyms] [k] local_irq_disable_exit_to_user
|
---local_irq_disable_exit_to_user
syscall_exit_to_user_mode
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

1.84% bench [kernel.kallsyms] [k] rcu_read_lock_sched_held
|
---rcu_read_lock_sched_held
|
|--0.93%--lock_acquire
|
--0.90%--lock_release

1.71% bench [kernel.kallsyms] [k] migrate_enable
|
---migrate_enable
__bpf_prog_exit
call_bpf_prog.isra.0
bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

1.66% bench [kernel.kallsyms] [k] call_bpf_prog.isra.0
|
---call_bpf_prog.isra.0
bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

1.53% bench [kernel.kallsyms] [k] __rcu_read_unlock
|
---__rcu_read_unlock
|
|--0.86%--__bpf_prog_exit
| call_bpf_prog.isra.0
| bpf_fprobe_entry
| fprobe_handler
| ftrace_trampoline
| __x64_sys_getpgid
| do_syscall_64
| entry_SYSCALL_64_after_hwframe
| syscall
|
--0.66%--do_getpgid
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

1.31% bench [kernel.kallsyms] [k] debug_smp_processor_id
|
---debug_smp_processor_id
|
--0.77%--rcu_is_watching

1.22% bench [kernel.kallsyms] [k] migrate_disable
|
---migrate_disable
__bpf_prog_enter
call_bpf_prog.isra.0
bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

1.19% bench [kernel.kallsyms] [k] __bpf_prog_enter
|
---__bpf_prog_enter
call_bpf_prog.isra.0
bpf_fprobe_entry
fprobe_handler
ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.84% bench [kernel.kallsyms] [k] __radix_tree_lookup
|
---__radix_tree_lookup
find_task_by_pid_ns
do_getpgid
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.82% bench [kernel.kallsyms] [k] do_getpgid
|
---do_getpgid
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.78% bench [kernel.kallsyms] [k] debug_lockdep_rcu_enabled
|
---debug_lockdep_rcu_enabled
|
--0.63%--rcu_read_lock_sched_held

0.74% bench ftrace_trampoline [k] ftrace_trampoline
|
---ftrace_trampoline
__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.72% bench [kernel.kallsyms] [k] preempt_count_add
|
---preempt_count_add

0.71% bench [kernel.kallsyms] [k] ktime_get_coarse_real_ts64
|
---ktime_get_coarse_real_ts64
syscall_trace_enter.constprop.0
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.69% bench [kernel.kallsyms] [k] do_syscall_64
|
---do_syscall_64
entry_SYSCALL_64_after_hwframe
|
--0.68%--syscall

0.60% bench [kernel.kallsyms] [k] preempt_count_sub
|
---preempt_count_sub

0.59% bench [kernel.kallsyms] [k] __rcu_read_lock
|
---__rcu_read_lock

0.59% bench [kernel.kallsyms] [k] __x64_sys_getpgid
|
---__x64_sys_getpgid
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.58% bench [kernel.kallsyms] [k] __audit_syscall_exit
|
---__audit_syscall_exit
syscall_exit_to_user_mode_prepare
syscall_exit_to_user_mode
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.53% bench [kernel.kallsyms] [k] audit_reset_context
|
---audit_reset_context
syscall_exit_to_user_mode_prepare
syscall_exit_to_user_mode
do_syscall_64
entry_SYSCALL_64_after_hwframe
syscall

0.45% bench [kernel.kallsyms] [k] rcu_read_lock_held
0.36% bench [kernel.kallsyms] [k] find_task_by_vpid
0.32% bench [kernel.kallsyms] [k] __bpf_prog_exit
0.26% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
0.20% bench [kernel.kallsyms] [k] idr_find
0.18% bench [kernel.kallsyms] [k] find_task_by_pid_ns
0.17% bench [kernel.kallsyms] [k] update_prog_stats
0.16% bench [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
0.14% bench [kernel.kallsyms] [k] pid_task
0.04% bench [kernel.kallsyms] [k] memchr_inv
0.04% bench [kernel.kallsyms] [k] smp_call_function_many_cond
0.03% bench [kernel.kallsyms] [k] do_user_addr_fault
0.03% bench [kernel.kallsyms] [k] kallsyms_expand_symbol.constprop.0
0.03% bench [kernel.kallsyms] [k] native_flush_tlb_global
0.03% bench [kernel.kallsyms] [k] __change_page_attr_set_clr
0.02% bench [kernel.kallsyms] [k] memcpy_erms
0.02% bench [kernel.kallsyms] [k] unwind_next_frame
0.02% bench [kernel.kallsyms] [k] copy_user_enhanced_fast_string
0.01% bench [kernel.kallsyms] [k] __orc_find
0.01% bench [kernel.kallsyms] [k] call_rcu
0.01% bench [kernel.kallsyms] [k] __alloc_pages
0.01% bench [kernel.kallsyms] [k] __purge_vmap_area_lazy
0.01% bench [kernel.kallsyms] [k] __softirqentry_text_start
0.01% bench [kernel.kallsyms] [k] __stack_depot_save
0.01% bench [kernel.kallsyms] [k] __up_read
0.01% bench [kernel.kallsyms] [k] __virt_addr_valid
0.01% bench [kernel.kallsyms] [k] clear_page_erms
0.01% bench [kernel.kallsyms] [k] deactivate_slab
0.01% bench [kernel.kallsyms] [k] do_check_common
0.01% bench [kernel.kallsyms] [k] finish_task_switch.isra.0
0.01% bench [kernel.kallsyms] [k] free_unref_page_list
0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
0.01% bench [kernel.kallsyms] [k] handle_mm_fault
0.01% bench [kernel.kallsyms] [k] orc_find.part.0
0.01% bench [kernel.kallsyms] [k] try_charge_memcg
0.00% bench [kernel.kallsyms] [k] ___slab_alloc
0.00% bench [kernel.kallsyms] [k] __fdget_pos
0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
0.00% bench [kernel.kallsyms] [k] __is_insn_slot_addr
0.00% bench [kernel.kallsyms] [k] __kmalloc
0.00% bench [kernel.kallsyms] [k] __mod_lruvec_page_state
0.00% bench [kernel.kallsyms] [k] __mod_node_page_state
0.00% bench [kernel.kallsyms] [k] __mutex_lock
0.00% bench [kernel.kallsyms] [k] __raw_spin_lock_init
0.00% bench [kernel.kallsyms] [k] alloc_vmap_area
0.00% bench [kernel.kallsyms] [k] allocate_slab
0.00% bench [kernel.kallsyms] [k] audit_get_tty
0.00% bench [kernel.kallsyms] [k] bpf_ksym_find
0.00% bench [kernel.kallsyms] [k] btf_check_all_metas
0.00% bench [kernel.kallsyms] [k] btf_put
0.00% bench [kernel.kallsyms] [k] cmpxchg_double_slab.constprop.0.isra.0
0.00% bench [kernel.kallsyms] [k] do_fault
0.00% bench [kernel.kallsyms] [k] do_raw_spin_trylock
0.00% bench [kernel.kallsyms] [k] find_vma
0.00% bench [kernel.kallsyms] [k] fs_reclaim_release
0.00% bench [kernel.kallsyms] [k] ftrace_check_record
0.00% bench [kernel.kallsyms] [k] ftrace_replace_code
0.00% bench [kernel.kallsyms] [k] get_mem_cgroup_from_mm
0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
0.00% bench [kernel.kallsyms] [k] in_gate_area_no_mm
0.00% bench [kernel.kallsyms] [k] in_task_stack
0.00% bench [kernel.kallsyms] [k] kernel_text_address
0.00% bench [kernel.kallsyms] [k] kernfs_fop_read_iter
0.00% bench [kernel.kallsyms] [k] kernfs_put_active
0.00% bench [kernel.kallsyms] [k] kfree
0.00% bench [kernel.kallsyms] [k] kmem_cache_alloc
0.00% bench [kernel.kallsyms] [k] ksys_read
0.00% bench [kernel.kallsyms] [k] lookup_address_in_pgd
0.00% bench [kernel.kallsyms] [k] mlock_page_drain_local
0.00% bench [kernel.kallsyms] [k] page_remove_rmap
0.00% bench [kernel.kallsyms] [k] post_alloc_hook
0.00% bench [kernel.kallsyms] [k] preempt_schedule_irq
0.00% bench [kernel.kallsyms] [k] queue_work_on
0.00% bench [kernel.kallsyms] [k] stack_trace_save
0.00% bench [kernel.kallsyms] [k] within_error_injection_list


#
# (Tip: To record callchains for each sample: perf record -g)
#

2022-10-06 16:39:16

by Florent Revest

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Wed, Oct 5, 2022 at 5:30 PM Steven Rostedt <[email protected]> wrote:
>
> On Wed, 5 Oct 2022 17:10:33 +0200
> Florent Revest <[email protected]> wrote:
>
> > On Wed, Oct 5, 2022 at 5:07 PM Steven Rostedt <[email protected]> wrote:
> > >
> > > Can you show the implementation of the indirect call you used?
> >
> > Xu used my development branch here
> > https://github.com/FlorentRevest/linux/commits/fprobe-min-args
>
> That looks like it could be optimized quite a bit too.
>
> Specifically this part:
>
> static bool bpf_fprobe_entry(struct fprobe *fp, unsigned long ip, struct ftrace_regs *regs, void *private)
> {
> struct bpf_fprobe_call_context *call_ctx = private;
> struct bpf_fprobe_context *fprobe_ctx = fp->ops.private;
> struct bpf_tramp_links *links = fprobe_ctx->links;
> struct bpf_tramp_links *fentry = &links[BPF_TRAMP_FENTRY];
> struct bpf_tramp_links *fmod_ret = &links[BPF_TRAMP_MODIFY_RETURN];
> struct bpf_tramp_links *fexit = &links[BPF_TRAMP_FEXIT];
> int i, ret;
>
> memset(&call_ctx->ctx, 0, sizeof(call_ctx->ctx));
> call_ctx->ip = ip;
> for (i = 0; i < fprobe_ctx->nr_args; i++)
> call_ctx->args[i] = ftrace_regs_get_argument(regs, i);
>
> for (i = 0; i < fentry->nr_links; i++)
> call_bpf_prog(fentry->links[i], &call_ctx->ctx, call_ctx->args);
>
> call_ctx->args[fprobe_ctx->nr_args] = 0;
> for (i = 0; i < fmod_ret->nr_links; i++) {
> ret = call_bpf_prog(fmod_ret->links[i], &call_ctx->ctx,
> call_ctx->args);
>
> if (ret) {
> ftrace_regs_set_return_value(regs, ret);
> ftrace_override_function_with_return(regs);
>
> bpf_fprobe_exit(fp, ip, regs, private);
> return false;
> }
> }
>
> return fexit->nr_links;
> }
>
> There's a lot of low hanging fruit to speed up there. I wouldn't be too
> fast to throw out this solution if it hasn't had the care that direct calls
> have had to speed that up.
>
> For example, trampolines currently only allow to attach to functions with 6
> parameters or less (3 on x86_32). You could make 7 specific callbacks, with
> zero to 6 parameters, and unroll the argument loop.

Sure, we can give this a try. I'll work on a macro that generates the
7 callbacks and we can check how much that helps. My belief right now
is that ftrace's iteration over all ops on arm64 is where we lose most
of the time, but now that we have numbers it's pretty easy to check
the hypothesis :)
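
For illustration, a rough sketch of that unrolling. The per-arity
callback and the bpf_fprobe_entry_finish() helper below are
hypothetical names; the helper just stands for the shared tail of the
bpf_fprobe_entry() code quoted above.

static bool bpf_fprobe_entry_2(struct fprobe *fp, unsigned long ip,
                               struct ftrace_regs *regs, void *private)
{
        struct bpf_fprobe_call_context *call_ctx = private;

        memset(&call_ctx->ctx, 0, sizeof(call_ctx->ctx));
        call_ctx->ip = ip;
        /* argument copies unrolled at compile time, no nr_args loop */
        call_ctx->args[0] = ftrace_regs_get_argument(regs, 0);
        call_ctx->args[1] = ftrace_regs_get_argument(regs, 1);

        /* shared tail: run the fentry/fmod_ret/fexit links as today */
        return bpf_fprobe_entry_finish(fp, ip, regs, private);
}

/* ... likewise bpf_fprobe_entry_0() through bpf_fprobe_entry_6(),
 * picked at attach time from the target function's arity ... */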

2022-10-06 16:57:52

by Florent Revest

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Thu, Oct 6, 2022 at 12:12 AM Jiri Olsa <[email protected]> wrote:
>
> On Wed, Oct 05, 2022 at 11:30:19AM -0400, Steven Rostedt wrote:
> > On Wed, 5 Oct 2022 17:10:33 +0200
> > Florent Revest <[email protected]> wrote:
> >
> > > On Wed, Oct 5, 2022 at 5:07 PM Steven Rostedt <[email protected]> wrote:
> > > >
> > > > On Wed, 5 Oct 2022 22:54:15 +0800
> > > > Xu Kuohai <[email protected]> wrote:
> > > >
> > > > > 1.3 attach bpf prog with with direct call, bpftrace -e 'kfunc:vfs_write {}'
> > > > >
> > > > > # dd if=/dev/zero of=/dev/null count=1000000
> > > > > 1000000+0 records in
> > > > > 1000000+0 records out
> > > > > 512000000 bytes (512 MB, 488 MiB) copied, 1.72973 s, 296 MB/s
> > > > >
> > > > >
> > > > > 1.4 attach bpf prog with with indirect call, bpftrace -e 'kfunc:vfs_write {}'
> > > > >
> > > > > # dd if=/dev/zero of=/dev/null count=1000000
> > > > > 1000000+0 records in
> > > > > 1000000+0 records out
> > > > > 512000000 bytes (512 MB, 488 MiB) copied, 1.99179 s, 257 MB/s
> > >
> > > Thanks for the measurements Xu!
> > >
> > > > Can you show the implementation of the indirect call you used?
> > >
> > > Xu used my development branch here
> > > https://github.com/FlorentRevest/linux/commits/fprobe-min-args
>
> nice :) I guess you did not try to run it on x86, I had to add some small
> changes and disable HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS to compile it

Indeed, I haven't tried building on x86 yet, I'll have a look at what
I broke, thanks. :)
That branch is just an outline of the idea at this point anyway. Just
enough for performance measurements, not particularly ready for
review.

> >
> > That looks like it could be optimized quite a bit too.
> >
> > Specifically this part:
> >
> > static bool bpf_fprobe_entry(struct fprobe *fp, unsigned long ip, struct ftrace_regs *regs, void *private)
> > {
> > struct bpf_fprobe_call_context *call_ctx = private;
> > struct bpf_fprobe_context *fprobe_ctx = fp->ops.private;
> > struct bpf_tramp_links *links = fprobe_ctx->links;
> > struct bpf_tramp_links *fentry = &links[BPF_TRAMP_FENTRY];
> > struct bpf_tramp_links *fmod_ret = &links[BPF_TRAMP_MODIFY_RETURN];
> > struct bpf_tramp_links *fexit = &links[BPF_TRAMP_FEXIT];
> > int i, ret;
> >
> > memset(&call_ctx->ctx, 0, sizeof(call_ctx->ctx));
> > call_ctx->ip = ip;
> > for (i = 0; i < fprobe_ctx->nr_args; i++)
> > call_ctx->args[i] = ftrace_regs_get_argument(regs, i);
> >
> > for (i = 0; i < fentry->nr_links; i++)
> > call_bpf_prog(fentry->links[i], &call_ctx->ctx, call_ctx->args);
> >
> > call_ctx->args[fprobe_ctx->nr_args] = 0;
> > for (i = 0; i < fmod_ret->nr_links; i++) {
> > ret = call_bpf_prog(fmod_ret->links[i], &call_ctx->ctx,
> > call_ctx->args);
> >
> > if (ret) {
> > ftrace_regs_set_return_value(regs, ret);
> > ftrace_override_function_with_return(regs);
> >
> > bpf_fprobe_exit(fp, ip, regs, private);
> > return false;
> > }
> > }
> >
> > return fexit->nr_links;
> > }
> >
> > There's a lot of low hanging fruit to speed up there. I wouldn't be too
> > fast to throw out this solution if it hasn't had the care that direct calls
> > have had to speed that up.
> >
> > For example, trampolines currently only allow to attach to functions with 6
> > parameters or less (3 on x86_32). You could make 7 specific callbacks, with
> > zero to 6 parameters, and unroll the argument loop.
> >
> > Would also be interesting to run perf to see where the overhead is. There
> > may be other locations to work on to make it almost as fast as direct
> > callers without the other baggage.
>
> I can boot the change and run tests in qemu but for some reason it
> won't boot on hw, so I have just perf report from qemu so far

Oh, ok, that's interesting. The changes look pretty benign (only
fprobe and arm64-specific code), so I'm curious how that would break
the boot uh :p

>
> there's fprobe/rethook machinery showing out as expected
>
> jirka
>
>
> ---
> # To display the perf.data header info, please use --header/--header-only options.
> #
> #
> # Total Lost Samples: 0
> #
> # Samples: 23K of event 'cpu-clock:k'
> # Event count (approx.): 5841250000
> #
> # Overhead Command Shared Object Symbol
> # ........ ....... .............................................. ..................................................
> #
> 18.65% bench [kernel.kallsyms] [k] syscall_enter_from_user_mode
> |
> ---syscall_enter_from_user_mode
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 13.03% bench [kernel.kallsyms] [k] seqcount_lockdep_reader_access.constprop.0
> |
> ---seqcount_lockdep_reader_access.constprop.0
> ktime_get_coarse_real_ts64
> syscall_trace_enter.constprop.0
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 9.49% bench [kernel.kallsyms] [k] rethook_try_get
> |
> ---rethook_try_get
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 8.71% bench [kernel.kallsyms] [k] rethook_recycle
> |
> ---rethook_recycle
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 4.31% bench [kernel.kallsyms] [k] rcu_is_watching
> |
> ---rcu_is_watching
> |
> |--1.49%--rethook_try_get
> | fprobe_handler
> | ftrace_trampoline
> | __x64_sys_getpgid
> | do_syscall_64
> | entry_SYSCALL_64_after_hwframe
> | syscall
> |
> |--1.10%--do_getpgid
> | __x64_sys_getpgid
> | do_syscall_64
> | entry_SYSCALL_64_after_hwframe
> | syscall
> |
> |--1.02%--__bpf_prog_exit
> | call_bpf_prog.isra.0
> | bpf_fprobe_entry
> | fprobe_handler
> | ftrace_trampoline
> | __x64_sys_getpgid
> | do_syscall_64
> | entry_SYSCALL_64_after_hwframe
> | syscall
> |
> --0.70%--__bpf_prog_enter
> call_bpf_prog.isra.0
> bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.94% bench [kernel.kallsyms] [k] lock_release
> |
> ---lock_release
> |
> |--1.51%--call_bpf_prog.isra.0
> | bpf_fprobe_entry
> | fprobe_handler
> | ftrace_trampoline
> | __x64_sys_getpgid
> | do_syscall_64
> | entry_SYSCALL_64_after_hwframe
> | syscall
> |
> --1.43%--do_getpgid
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.91% bench bpf_prog_21856463590f61f1_bench_trigger_fentry [k] bpf_prog_21856463590f61f1_bench_trigger_fentry
> |
> ---bpf_prog_21856463590f61f1_bench_trigger_fentry
> |
> --2.66%--call_bpf_prog.isra.0
> bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.69% bench [kernel.kallsyms] [k] bpf_fprobe_entry
> |
> ---bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.60% bench [kernel.kallsyms] [k] lock_acquire
> |
> ---lock_acquire
> |
> |--1.34%--__bpf_prog_enter
> | call_bpf_prog.isra.0
> | bpf_fprobe_entry
> | fprobe_handler
> | ftrace_trampoline
> | __x64_sys_getpgid
> | do_syscall_64
> | entry_SYSCALL_64_after_hwframe
> | syscall
> |
> --1.24%--do_getpgid
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.42% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode_prepare
> |
> ---syscall_exit_to_user_mode_prepare
> syscall_exit_to_user_mode
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.37% bench [kernel.kallsyms] [k] __audit_syscall_entry
> |
> ---__audit_syscall_entry
> syscall_trace_enter.constprop.0
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> |
> --2.36%--syscall
>
> 2.35% bench [kernel.kallsyms] [k] syscall_trace_enter.constprop.0
> |
> ---syscall_trace_enter.constprop.0
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.12% bench [kernel.kallsyms] [k] check_preemption_disabled
> |
> ---check_preemption_disabled
> |
> --1.55%--rcu_is_watching
> |
> --0.59%--do_getpgid
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 2.00% bench [kernel.kallsyms] [k] fprobe_handler
> |
> ---fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 1.94% bench [kernel.kallsyms] [k] local_irq_disable_exit_to_user
> |
> ---local_irq_disable_exit_to_user
> syscall_exit_to_user_mode
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 1.84% bench [kernel.kallsyms] [k] rcu_read_lock_sched_held
> |
> ---rcu_read_lock_sched_held
> |
> |--0.93%--lock_acquire
> |
> --0.90%--lock_release
>
> 1.71% bench [kernel.kallsyms] [k] migrate_enable
> |
> ---migrate_enable
> __bpf_prog_exit
> call_bpf_prog.isra.0
> bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 1.66% bench [kernel.kallsyms] [k] call_bpf_prog.isra.0
> |
> ---call_bpf_prog.isra.0
> bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 1.53% bench [kernel.kallsyms] [k] __rcu_read_unlock
> |
> ---__rcu_read_unlock
> |
> |--0.86%--__bpf_prog_exit
> | call_bpf_prog.isra.0
> | bpf_fprobe_entry
> | fprobe_handler
> | ftrace_trampoline
> | __x64_sys_getpgid
> | do_syscall_64
> | entry_SYSCALL_64_after_hwframe
> | syscall
> |
> --0.66%--do_getpgid
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 1.31% bench [kernel.kallsyms] [k] debug_smp_processor_id
> |
> ---debug_smp_processor_id
> |
> --0.77%--rcu_is_watching
>
> 1.22% bench [kernel.kallsyms] [k] migrate_disable
> |
> ---migrate_disable
> __bpf_prog_enter
> call_bpf_prog.isra.0
> bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 1.19% bench [kernel.kallsyms] [k] __bpf_prog_enter
> |
> ---__bpf_prog_enter
> call_bpf_prog.isra.0
> bpf_fprobe_entry
> fprobe_handler
> ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.84% bench [kernel.kallsyms] [k] __radix_tree_lookup
> |
> ---__radix_tree_lookup
> find_task_by_pid_ns
> do_getpgid
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.82% bench [kernel.kallsyms] [k] do_getpgid
> |
> ---do_getpgid
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.78% bench [kernel.kallsyms] [k] debug_lockdep_rcu_enabled
> |
> ---debug_lockdep_rcu_enabled
> |
> --0.63%--rcu_read_lock_sched_held
>
> 0.74% bench ftrace_trampoline [k] ftrace_trampoline
> |
> ---ftrace_trampoline
> __x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.72% bench [kernel.kallsyms] [k] preempt_count_add
> |
> ---preempt_count_add
>
> 0.71% bench [kernel.kallsyms] [k] ktime_get_coarse_real_ts64
> |
> ---ktime_get_coarse_real_ts64
> syscall_trace_enter.constprop.0
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.69% bench [kernel.kallsyms] [k] do_syscall_64
> |
> ---do_syscall_64
> entry_SYSCALL_64_after_hwframe
> |
> --0.68%--syscall
>
> 0.60% bench [kernel.kallsyms] [k] preempt_count_sub
> |
> ---preempt_count_sub
>
> 0.59% bench [kernel.kallsyms] [k] __rcu_read_lock
> |
> ---__rcu_read_lock
>
> 0.59% bench [kernel.kallsyms] [k] __x64_sys_getpgid
> |
> ---__x64_sys_getpgid
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.58% bench [kernel.kallsyms] [k] __audit_syscall_exit
> |
> ---__audit_syscall_exit
> syscall_exit_to_user_mode_prepare
> syscall_exit_to_user_mode
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.53% bench [kernel.kallsyms] [k] audit_reset_context
> |
> ---audit_reset_context
> syscall_exit_to_user_mode_prepare
> syscall_exit_to_user_mode
> do_syscall_64
> entry_SYSCALL_64_after_hwframe
> syscall
>
> 0.45% bench [kernel.kallsyms] [k] rcu_read_lock_held
> 0.36% bench [kernel.kallsyms] [k] find_task_by_vpid
> 0.32% bench [kernel.kallsyms] [k] __bpf_prog_exit
> 0.26% bench [kernel.kallsyms] [k] syscall_exit_to_user_mode
> 0.20% bench [kernel.kallsyms] [k] idr_find
> 0.18% bench [kernel.kallsyms] [k] find_task_by_pid_ns
> 0.17% bench [kernel.kallsyms] [k] update_prog_stats
> 0.16% bench [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 0.14% bench [kernel.kallsyms] [k] pid_task
> 0.04% bench [kernel.kallsyms] [k] memchr_inv
> 0.04% bench [kernel.kallsyms] [k] smp_call_function_many_cond
> 0.03% bench [kernel.kallsyms] [k] do_user_addr_fault
> 0.03% bench [kernel.kallsyms] [k] kallsyms_expand_symbol.constprop.0
> 0.03% bench [kernel.kallsyms] [k] native_flush_tlb_global
> 0.03% bench [kernel.kallsyms] [k] __change_page_attr_set_clr
> 0.02% bench [kernel.kallsyms] [k] memcpy_erms
> 0.02% bench [kernel.kallsyms] [k] unwind_next_frame
> 0.02% bench [kernel.kallsyms] [k] copy_user_enhanced_fast_string
> 0.01% bench [kernel.kallsyms] [k] __orc_find
> 0.01% bench [kernel.kallsyms] [k] call_rcu
> 0.01% bench [kernel.kallsyms] [k] __alloc_pages
> 0.01% bench [kernel.kallsyms] [k] __purge_vmap_area_lazy
> 0.01% bench [kernel.kallsyms] [k] __softirqentry_text_start
> 0.01% bench [kernel.kallsyms] [k] __stack_depot_save
> 0.01% bench [kernel.kallsyms] [k] __up_read
> 0.01% bench [kernel.kallsyms] [k] __virt_addr_valid
> 0.01% bench [kernel.kallsyms] [k] clear_page_erms
> 0.01% bench [kernel.kallsyms] [k] deactivate_slab
> 0.01% bench [kernel.kallsyms] [k] do_check_common
> 0.01% bench [kernel.kallsyms] [k] finish_task_switch.isra.0
> 0.01% bench [kernel.kallsyms] [k] free_unref_page_list
> 0.01% bench [kernel.kallsyms] [k] ftrace_rec_iter_next
> 0.01% bench [kernel.kallsyms] [k] handle_mm_fault
> 0.01% bench [kernel.kallsyms] [k] orc_find.part.0
> 0.01% bench [kernel.kallsyms] [k] try_charge_memcg
> 0.00% bench [kernel.kallsyms] [k] ___slab_alloc
> 0.00% bench [kernel.kallsyms] [k] __fdget_pos
> 0.00% bench [kernel.kallsyms] [k] __handle_mm_fault
> 0.00% bench [kernel.kallsyms] [k] __is_insn_slot_addr
> 0.00% bench [kernel.kallsyms] [k] __kmalloc
> 0.00% bench [kernel.kallsyms] [k] __mod_lruvec_page_state
> 0.00% bench [kernel.kallsyms] [k] __mod_node_page_state
> 0.00% bench [kernel.kallsyms] [k] __mutex_lock
> 0.00% bench [kernel.kallsyms] [k] __raw_spin_lock_init
> 0.00% bench [kernel.kallsyms] [k] alloc_vmap_area
> 0.00% bench [kernel.kallsyms] [k] allocate_slab
> 0.00% bench [kernel.kallsyms] [k] audit_get_tty
> 0.00% bench [kernel.kallsyms] [k] bpf_ksym_find
> 0.00% bench [kernel.kallsyms] [k] btf_check_all_metas
> 0.00% bench [kernel.kallsyms] [k] btf_put
> 0.00% bench [kernel.kallsyms] [k] cmpxchg_double_slab.constprop.0.isra.0
> 0.00% bench [kernel.kallsyms] [k] do_fault
> 0.00% bench [kernel.kallsyms] [k] do_raw_spin_trylock
> 0.00% bench [kernel.kallsyms] [k] find_vma
> 0.00% bench [kernel.kallsyms] [k] fs_reclaim_release
> 0.00% bench [kernel.kallsyms] [k] ftrace_check_record
> 0.00% bench [kernel.kallsyms] [k] ftrace_replace_code
> 0.00% bench [kernel.kallsyms] [k] get_mem_cgroup_from_mm
> 0.00% bench [kernel.kallsyms] [k] get_page_from_freelist
> 0.00% bench [kernel.kallsyms] [k] in_gate_area_no_mm
> 0.00% bench [kernel.kallsyms] [k] in_task_stack
> 0.00% bench [kernel.kallsyms] [k] kernel_text_address
> 0.00% bench [kernel.kallsyms] [k] kernfs_fop_read_iter
> 0.00% bench [kernel.kallsyms] [k] kernfs_put_active
> 0.00% bench [kernel.kallsyms] [k] kfree
> 0.00% bench [kernel.kallsyms] [k] kmem_cache_alloc
> 0.00% bench [kernel.kallsyms] [k] ksys_read
> 0.00% bench [kernel.kallsyms] [k] lookup_address_in_pgd
> 0.00% bench [kernel.kallsyms] [k] mlock_page_drain_local
> 0.00% bench [kernel.kallsyms] [k] page_remove_rmap
> 0.00% bench [kernel.kallsyms] [k] post_alloc_hook
> 0.00% bench [kernel.kallsyms] [k] preempt_schedule_irq
> 0.00% bench [kernel.kallsyms] [k] queue_work_on
> 0.00% bench [kernel.kallsyms] [k] stack_trace_save
> 0.00% bench [kernel.kallsyms] [k] within_error_injection_list
>
>
> #
> # (Tip: To record callchains for each sample: perf record -g)
> #
>

Thanks for the measurements Jiri! :) At this point, my hypothesis is
that the biggest part of the performance hit comes from arm64 specific
code in ftrace so I would rather wait to see what Xu finds out on his
pi4. Also, I found an arm64 board today so I should soon be able to
make measurements there too.

2022-10-06 17:13:06

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Thu, 6 Oct 2022 18:19:12 +0200
Florent Revest <[email protected]> wrote:

> Sure, we can give this a try, I'll work on a macro that generates the
> 7 callbacks and we can check how much that helps. My belief right now
> is that ftrace's iteration over all ops on arm64 is where we lose most
> time but now that we have numbers it's pretty easy to check hypothesis
> :)

Ah, I forgot that's what Mark's code is doing. But yes, that needs to be
fixed first. I forget that arm64 doesn't have the dedicated trampolines yet.

So, let's hold off until that is complete.

-- Steve

2022-10-17 18:03:07

by Florent Revest

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Thu, Oct 6, 2022 at 6:29 PM Steven Rostedt <[email protected]> wrote:
>
> On Thu, 6 Oct 2022 18:19:12 +0200
> Florent Revest <[email protected]> wrote:
>
> > Sure, we can give this a try, I'll work on a macro that generates the
> > 7 callbacks and we can check how much that helps. My belief right now
> > is that ftrace's iteration over all ops on arm64 is where we lose most
> > time but now that we have numbers it's pretty easy to check hypothesis
> > :)
>
> Ah, I forgot that's what Mark's code is doing. But yes, that needs to be
> fixed first. I forget that arm64 doesn't have the dedicated trampolines yet.
>
> So, let's hold off until that is complete.
>
> -- Steve

Mark finished an implementation of his per-callsite-ops and min-args
branches (meaning that we can now skip ftrace's expensive saving of
all registers and its iteration over all ops when only one is attached)
- https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
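
Roughly, the per-callsite-ops idea is that every patched callsite
carries a pointer to its own ftrace_ops, so the common trampoline can
dispatch straight to the single attached user instead of walking the
global ops list with a full register save. A conceptual C sketch (not
Mark's actual code; load_ops_for_callsite() is a made-up stand-in for
the load done in the ftrace_caller assembly):

static void ftrace_caller_concept(unsigned long ip, unsigned long parent_ip,
                                  struct ftrace_regs *fregs)
{
        /* hypothetical helper: read the ops pointer stored at the callsite */
        struct ftrace_ops *ops = load_ops_for_callsite(ip);

        /*
         * With one user attached this calls its handler directly; when
         * several users share the callsite, ops points to a shared "list"
         * ops whose handler iterates over all of them.
         */
        ops->func(ip, parent_ip, ops, fregs);
}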

And Masami wrote similar patches to what I had originally done to
fprobe in my branch:
- https://github.com/mhiramat/linux/commits/kprobes/fprobe-update

So I could rebase my previous "bpf on fprobe" branch on top of these:
(as before, it's just good enough for benchmarking and to give a
general sense of the idea, not for a thorough code review):
- https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3

And I could run the benchmarks against my rpi4. I have different
baseline numbers than Xu, so I ran everything again and tried to keep
the format the same. "indirect call" refers to the branch I just
linked and "direct call" refers to the series this is a reply to
(Xu's work).

1. test with dd

1.1 when no bpf prog attached to vfs_write

# dd if=/dev/zero of=/dev/null count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 3.94315 s, 130 MB/s


1.2 attach bpf prog with kprobe, bpftrace -e kprobe:vfs_write {}

# dd if=/dev/zero of=/dev/null count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 5.80493 s, 88.2 MB/s


1.3 attach bpf prog with with direct call, bpftrace -e kfunc:vfs_write {}

# dd if=/dev/zero of=/dev/null count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 4.18579 s, 122 MB/s


1.4 attach bpf prog with with indirect call, bpftrace -e kfunc:vfs_write {}

# dd if=/dev/zero of=/dev/null count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 4.92616 s, 104 MB/s


2. test with bpf/bench

2.1 bench trig-base
Iter 0 ( 86.518us): hits 0.700M/s ( 0.700M/prod), drops
0.000M/s, total operations 0.700M/s
Iter 1 (-26.352us): hits 0.701M/s ( 0.701M/prod), drops
0.000M/s, total operations 0.701M/s
Iter 2 ( 1.092us): hits 0.701M/s ( 0.701M/prod), drops
0.000M/s, total operations 0.701M/s
Iter 3 ( -1.890us): hits 0.701M/s ( 0.701M/prod), drops
0.000M/s, total operations 0.701M/s
Iter 4 ( -2.315us): hits 0.701M/s ( 0.701M/prod), drops
0.000M/s, total operations 0.701M/s
Iter 5 ( 4.184us): hits 0.701M/s ( 0.701M/prod), drops
0.000M/s, total operations 0.701M/s
Iter 6 ( -3.241us): hits 0.701M/s ( 0.701M/prod), drops
0.000M/s, total operations 0.701M/s
Summary: hits 0.701 ± 0.000M/s ( 0.701M/prod), drops 0.000 ±
0.000M/s, total operations 0.701 ± 0.000M/s

2.2 bench trig-kprobe
Iter 0 ( 96.833us): hits 0.290M/s ( 0.290M/prod), drops
0.000M/s, total operations 0.290M/s
Iter 1 (-20.834us): hits 0.291M/s ( 0.291M/prod), drops
0.000M/s, total operations 0.291M/s
Iter 2 ( -2.426us): hits 0.291M/s ( 0.291M/prod), drops
0.000M/s, total operations 0.291M/s
Iter 3 ( 22.332us): hits 0.292M/s ( 0.292M/prod), drops
0.000M/s, total operations 0.292M/s
Iter 4 (-18.204us): hits 0.292M/s ( 0.292M/prod), drops
0.000M/s, total operations 0.292M/s
Iter 5 ( 5.370us): hits 0.292M/s ( 0.292M/prod), drops
0.000M/s, total operations 0.292M/s
Iter 6 ( -7.853us): hits 0.290M/s ( 0.290M/prod), drops
0.000M/s, total operations 0.290M/s
Summary: hits 0.291 ± 0.001M/s ( 0.291M/prod), drops 0.000 ±
0.000M/s, total operations 0.291 ± 0.001M/s

2.3 bench trig-fentry, with direct call
Iter 0 ( 86.481us): hits 0.530M/s ( 0.530M/prod), drops
0.000M/s, total operations 0.530M/s
Iter 1 (-12.593us): hits 0.536M/s ( 0.536M/prod), drops
0.000M/s, total operations 0.536M/s
Iter 2 ( -5.760us): hits 0.532M/s ( 0.532M/prod), drops
0.000M/s, total operations 0.532M/s
Iter 3 ( 1.629us): hits 0.532M/s ( 0.532M/prod), drops
0.000M/s, total operations 0.532M/s
Iter 4 ( -1.945us): hits 0.533M/s ( 0.533M/prod), drops
0.000M/s, total operations 0.533M/s
Iter 5 ( -1.297us): hits 0.532M/s ( 0.532M/prod), drops
0.000M/s, total operations 0.532M/s
Iter 6 ( 0.444us): hits 0.535M/s ( 0.535M/prod), drops
0.000M/s, total operations 0.535M/s
Summary: hits 0.533 ± 0.002M/s ( 0.533M/prod), drops 0.000 ±
0.000M/s, total operations 0.533 ± 0.002M/s

2.3 bench trig-fentry, with indirect call
Iter 0 ( 84.463us): hits 0.404M/s ( 0.404M/prod), drops
0.000M/s, total operations 0.404M/s
Iter 1 (-16.260us): hits 0.405M/s ( 0.405M/prod), drops
0.000M/s, total operations 0.405M/s
Iter 2 ( -1.038us): hits 0.405M/s ( 0.405M/prod), drops
0.000M/s, total operations 0.405M/s
Iter 3 ( -3.797us): hits 0.405M/s ( 0.405M/prod), drops
0.000M/s, total operations 0.405M/s
Iter 4 ( -0.537us): hits 0.402M/s ( 0.402M/prod), drops
0.000M/s, total operations 0.402M/s
Iter 5 ( 3.536us): hits 0.403M/s ( 0.403M/prod), drops
0.000M/s, total operations 0.403M/s
Iter 6 ( 12.203us): hits 0.404M/s ( 0.404M/prod), drops
0.000M/s, total operations 0.404M/s
Summary: hits 0.404 ± 0.001M/s ( 0.404M/prod), drops 0.000 ±
0.000M/s, total operations 0.404 ± 0.001M/s


3. perf report of bench trig-fentry

3.1 with direct call

98.67% 0.27% bench bench
[.] trigger_producer
|
--98.40%--trigger_producer
|
|--96.63%--syscall
| |
| --71.90%--el0t_64_sync
| el0t_64_sync_handler
| el0_svc
| do_el0_svc
| |
| |--70.94%--el0_svc_common
| | |
| |
|--29.55%--invoke_syscall
| | | |
| | |
|--26.23%--__arm64_sys_getpgid
| | | |
|
| | | |
|--18.88%--bpf_trampoline_6442462665_0
| | | |
| |
| | | |
| |--6.85%--__bpf_prog_enter
| | | |
| | |
| | | |
| | --2.68%--migrate_disable
| | | |
| |
| | | |
| |--5.28%--__bpf_prog_exit
| | | |
| | |
| | | |
| | --1.29%--migrate_enable
| | | |
| |
| | | |
|
|--3.96%--bpf_prog_21856463590f61f1_bench_trigger_fentry
| | | |
| |
| | | |
| --0.61%--__rcu_read_lock
| | | |
|
| | | |
--4.42%--find_task_by_vpid
| | | |
|
| | | |
|--2.53%--radix_tree_lookup
| | | |
|
| | | |
--0.61%--idr_find
| | | |
| | |
--0.81%--pid_vnr
| | |
| |
--0.53%--__arm64_sys_getpgid
| |
| --0.95%--invoke_syscall
|
--0.99%--syscall@plt


3.2 with indirect call

98.68% 0.20% bench bench
[.] trigger_producer
|
--98.48%--trigger_producer
|
--97.47%--syscall
|
--76.11%--el0t_64_sync
el0t_64_sync_handler
el0_svc
do_el0_svc
|
|--75.52%--el0_svc_common
| |
|
|--46.35%--invoke_syscall
| | |
| |
--44.06%--__arm64_sys_getpgid
| |
|
| |
|--35.40%--ftrace_caller
| |
| |
| |
| --34.04%--fprobe_handler
| |
| |
| |
| |--15.61%--bpf_fprobe_entry
| |
| | |
| |
| | |--3.79%--__bpf_prog_enter
| |
| | | |
| |
| | |
--0.80%--migrate_disable
| |
| | |
| |
| | |--3.74%--__bpf_prog_exit
| |
| | | |
| |
| | |
--0.77%--migrate_enable
| |
| | |
| |
| |
--2.65%--bpf_prog_21856463590f61f1_bench_trigger_fentry
| |
| |
| |
| |--12.65%--rethook_trampoline_handler
| |
| |
| |
| |--1.70%--rethook_try_get
| |
| | |
| |
| | --1.48%--rcu_is_watching
| |
| |
| |
| |--1.46%--freelist_try_get
| |
| |
| |
| --0.65%--rethook_recycle
| |
|
| |
--6.36%--find_task_by_vpid
| |
|
| |
|--3.64%--radix_tree_lookup
| |
|
| |
--1.74%--idr_find
| |
| --1.05%--ftrace_caller
|
--0.59%--invoke_syscall

This looks slightly better than before but it is actually still a
pretty significant performance hit compared to direct calls.

Note that I can't really make sense of the perf report with indirect
calls. It always reports it spent 12% of the time in
rethook_trampoline_handler but I verified with both a WARN in that
function and a breakpoint with a debugger, this function does *not*
get called when running this "bench trig-fentry" benchmark. Also it
wouldn't make sense for fprobe_handler to call it so I'm quite
confused why perf would report this call and such a long time spent
there. Anyone know what I could be missing here?

2022-10-17 19:12:12

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Mon, 17 Oct 2022 19:55:06 +0200
Florent Revest <[email protected]> wrote:

> Note that I can't really make sense of the perf report with indirect
> calls. it always reports it spent 12% of the time in
> rethook_trampoline_handler but I verified with both a WARN in that
> function and a breakpoint with a debugger, this function does *not*
> get called when running this "bench trig-fentry" benchmark. Also it
> wouldn't make sense for fprobe_handler to call it so I'm quite
> confused why perf would report this call and such a long time spent
> there. Anyone know what I could be missing here ?

The trace shows __bpf_prog_exit, which I'm guessing is tracing the end of
the function. Right?

In which case I believe it must call rethook_trampoline_handler:

-> fprobe_handler() /* Which could use some "unlikely()" to move disabled
paths out of the hot path */

/* And also calls rethook_try_get () which does a cmpxchg! */

-> ret_hook()
-> arch_rethook_prepare()
Sets regs->lr = arch_rethook_trampoline

On return of the function, it jumps to arch_rethook_trampoline()

-> arch_rethook_trampoline()
-> arch_rethook_trampoline_callback()
-> rethook_trampoline_handler()

So I do not know how it wouldn't trigger the WARNING or breakpoint if you
added it there.

-- Steve
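
For reference, the return-address hijack in that chain boils down to
something like this on arm64 (an approximate sketch of
arch_rethook_prepare(); exact field names may differ from the rethook
patches):

void arch_rethook_prepare(struct rethook_node *rhn, struct pt_regs *regs,
                          bool mcount)
{
        /* remember where the traced function would have returned to */
        rhn->ret_addr = regs->regs[30];         /* x30 / LR */
        rhn->frame = regs->regs[29];            /* x29 / FP, for sanity checks */

        /* ... and make it return into the trampoline instead */
        regs->regs[30] = (u64)arch_rethook_trampoline;
}

rethook_trampoline_handler() can therefore only run if this hook was
actually armed for the probed function.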

2022-10-17 19:34:47

by Florent Revest

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

Uhuh, apologies for my perf report formatting! I'll try to figure it
out for next time, meanwhile you can find it better formatted here
https://paste.debian.net/1257405/

On Mon, Oct 17, 2022 at 8:49 PM Steven Rostedt <[email protected]> wrote:
>
> On Mon, 17 Oct 2022 19:55:06 +0200
> Florent Revest <[email protected]> wrote:
>
> > Note that I can't really make sense of the perf report with indirect
> > calls. it always reports it spent 12% of the time in
> > rethook_trampoline_handler but I verified with both a WARN in that
> > function and a breakpoint with a debugger, this function does *not*
> > get called when running this "bench trig-fentry" benchmark. Also it
> > wouldn't make sense for fprobe_handler to call it so I'm quite
> > confused why perf would report this call and such a long time spent
> > there. Anyone know what I could be missing here ?
>
> The trace shows __bpf_prog_exit, which I'm guessing is tracing the end of
> the function. Right?

Actually no, this function is called to end the context of a BPF
program execution. Here it is called at the end of the fentry program
(so still before the traced function). I hope the pastebin helps
clarify this!

> In which case I believe it must call rethook_trampoline_handler:
>
> -> fprobe_handler() /* Which could use some "unlikely()" to move disabled
> paths out of the hot path */
>
> /* And also calls rethook_try_get () which does a cmpxchg! */
>
> -> ret_hook()
> -> arch_rethook_prepare()
> Sets regs->lr = arch_rethook_trampoline
>
> On return of the function, it jumps to arch_rethook_trampoline()
>
> -> arch_rethook_trampoline()
> -> arch_rethook_trampoline_callback()
> -> rethook_trampoline_handler()

This is indeed what happens when an fexit program is also attached.
But when running "bench trig-fentry", only an fentry program is
attached so bpf_fprobe_entry returns a non-zero value and fprobe
doesn't call rethook_hook.

Also, in this situation arch_rethook_trampoline is called on the
traced function's return, but in the perf report, IIUC, it shows up as
being called from fprobe_handler, which should never happen. I wonder
if this is some sort of stack unwinding artifact during the perf
record?

> So I do not know how it wouldn't trigger the WARNING or breakpoint if you
> added it there.

By the way, the WARNING does trigger if I also attach an fexit program
(then rethook_hook is called). But I made sure we skip the whole
rethook logic if no fexit program is attached so bench trig-fentry
should not go through rethook_trampoline_handler.

2022-10-21 12:01:26

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

Hi Florent,

On Mon, 17 Oct 2022 19:55:06 +0200
Florent Revest <[email protected]> wrote:

> On Thu, Oct 6, 2022 at 6:29 PM Steven Rostedt <[email protected]> wrote:
> >
> > On Thu, 6 Oct 2022 18:19:12 +0200
> > Florent Revest <[email protected]> wrote:
> >
> > > Sure, we can give this a try, I'll work on a macro that generates the
> > > 7 callbacks and we can check how much that helps. My belief right now
> > > is that ftrace's iteration over all ops on arm64 is where we lose most
> > > time but now that we have numbers it's pretty easy to check hypothesis
> > > :)
> >
> > Ah, I forgot that's what Mark's code is doing. But yes, that needs to be
> > fixed first. I forget that arm64 doesn't have the dedicated trampolines yet.
> >
> > So, let's hold off until that is complete.
> >
> > -- Steve
>
> Mark finished an implementation of his per-callsite-ops and min-args
> branches (meaning that we can now skip the expensive ftrace's saving
> of all registers and iteration over all ops if only one is attached)
> - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
>
> And Masami wrote similar patches to what I had originally done to
> fprobe in my branch:
> - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
>
> So I could rebase my previous "bpf on fprobe" branch on top of these:
> (as before, it's just good enough for benchmarking and to give a
> general sense of the idea, not for a thorough code review):
> - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>
> And I could run the benchmarks against my rpi4. I have different
> baseline numbers as Xu so I ran everything again and tried to keep the
> format the same. "indirect call" refers to my branch I just linked and
> "direct call" refers to the series this is a reply to (Xu's work)

Thanks for sharing the measurement results. Yes, the fprobe/rethook
implementation is just a port of the kretprobes implementation, so
it may not be very optimized.

BTW, I remember Wuqiang's patch for kretprobes.

https://lore.kernel.org/all/[email protected]/T/#u

This is for fixing scalability, but it may possibly improve
the performance a bit. It is not hard to port to a recent kernel.
Can you try it too?

Anyway, eventually, I would like to remove the current kretprobe
based implementation and unify the fexit hook with the function-graph
tracer. That should give better performance.

Thank you,


>
> 1. test with dd
>
> 1.1 when no bpf prog attached to vfs_write
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 3.94315 s, 130 MB/s
>
>
> 1.2 attach bpf prog with kprobe, bpftrace -e kprobe:vfs_write {}
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 5.80493 s, 88.2 MB/s
>
>
> 1.3 attach bpf prog with with direct call, bpftrace -e kfunc:vfs_write {}
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 4.18579 s, 122 MB/s
>
>
> 1.4 attach bpf prog with with indirect call, bpftrace -e kfunc:vfs_write {}
>
> # dd if=/dev/zero of=/dev/null count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 512000000 bytes (512 MB, 488 MiB) copied, 4.92616 s, 104 MB/s
>
>
> 2. test with bpf/bench
>
> 2.1 bench trig-base
> Iter 0 ( 86.518us): hits 0.700M/s ( 0.700M/prod), drops
> 0.000M/s, total operations 0.700M/s
> Iter 1 (-26.352us): hits 0.701M/s ( 0.701M/prod), drops
> 0.000M/s, total operations 0.701M/s
> Iter 2 ( 1.092us): hits 0.701M/s ( 0.701M/prod), drops
> 0.000M/s, total operations 0.701M/s
> Iter 3 ( -1.890us): hits 0.701M/s ( 0.701M/prod), drops
> 0.000M/s, total operations 0.701M/s
> Iter 4 ( -2.315us): hits 0.701M/s ( 0.701M/prod), drops
> 0.000M/s, total operations 0.701M/s
> Iter 5 ( 4.184us): hits 0.701M/s ( 0.701M/prod), drops
> 0.000M/s, total operations 0.701M/s
> Iter 6 ( -3.241us): hits 0.701M/s ( 0.701M/prod), drops
> 0.000M/s, total operations 0.701M/s
> Summary: hits 0.701 ± 0.000M/s ( 0.701M/prod), drops 0.000 ±
> 0.000M/s, total operations 0.701 ± 0.000M/s
>
> 2.2 bench trig-kprobe
> Iter 0 ( 96.833us): hits 0.290M/s ( 0.290M/prod), drops
> 0.000M/s, total operations 0.290M/s
> Iter 1 (-20.834us): hits 0.291M/s ( 0.291M/prod), drops
> 0.000M/s, total operations 0.291M/s
> Iter 2 ( -2.426us): hits 0.291M/s ( 0.291M/prod), drops
> 0.000M/s, total operations 0.291M/s
> Iter 3 ( 22.332us): hits 0.292M/s ( 0.292M/prod), drops
> 0.000M/s, total operations 0.292M/s
> Iter 4 (-18.204us): hits 0.292M/s ( 0.292M/prod), drops
> 0.000M/s, total operations 0.292M/s
> Iter 5 ( 5.370us): hits 0.292M/s ( 0.292M/prod), drops
> 0.000M/s, total operations 0.292M/s
> Iter 6 ( -7.853us): hits 0.290M/s ( 0.290M/prod), drops
> 0.000M/s, total operations 0.290M/s
> Summary: hits 0.291 ± 0.001M/s ( 0.291M/prod), drops 0.000 ±
> 0.000M/s, total operations 0.291 ± 0.001M/s
>
> 2.3 bench trig-fentry, with direct call
> Iter 0 ( 86.481us): hits 0.530M/s ( 0.530M/prod), drops
> 0.000M/s, total operations 0.530M/s
> Iter 1 (-12.593us): hits 0.536M/s ( 0.536M/prod), drops
> 0.000M/s, total operations 0.536M/s
> Iter 2 ( -5.760us): hits 0.532M/s ( 0.532M/prod), drops
> 0.000M/s, total operations 0.532M/s
> Iter 3 ( 1.629us): hits 0.532M/s ( 0.532M/prod), drops
> 0.000M/s, total operations 0.532M/s
> Iter 4 ( -1.945us): hits 0.533M/s ( 0.533M/prod), drops
> 0.000M/s, total operations 0.533M/s
> Iter 5 ( -1.297us): hits 0.532M/s ( 0.532M/prod), drops
> 0.000M/s, total operations 0.532M/s
> Iter 6 ( 0.444us): hits 0.535M/s ( 0.535M/prod), drops
> 0.000M/s, total operations 0.535M/s
> Summary: hits 0.533 ± 0.002M/s ( 0.533M/prod), drops 0.000 ±
> 0.000M/s, total operations 0.533 ± 0.002M/s
>
> 2.3 bench trig-fentry, with indirect call
> Iter 0 ( 84.463us): hits 0.404M/s ( 0.404M/prod), drops
> 0.000M/s, total operations 0.404M/s
> Iter 1 (-16.260us): hits 0.405M/s ( 0.405M/prod), drops
> 0.000M/s, total operations 0.405M/s
> Iter 2 ( -1.038us): hits 0.405M/s ( 0.405M/prod), drops
> 0.000M/s, total operations 0.405M/s
> Iter 3 ( -3.797us): hits 0.405M/s ( 0.405M/prod), drops
> 0.000M/s, total operations 0.405M/s
> Iter 4 ( -0.537us): hits 0.402M/s ( 0.402M/prod), drops
> 0.000M/s, total operations 0.402M/s
> Iter 5 ( 3.536us): hits 0.403M/s ( 0.403M/prod), drops
> 0.000M/s, total operations 0.403M/s
> Iter 6 ( 12.203us): hits 0.404M/s ( 0.404M/prod), drops
> 0.000M/s, total operations 0.404M/s
> Summary: hits 0.404 ± 0.001M/s ( 0.404M/prod), drops 0.000 ±
> 0.000M/s, total operations 0.404 ± 0.001M/s
>
>
> 3. perf report of bench trig-fentry
>
> 3.1 with direct call
>
> 98.67% 0.27% bench bench
> [.] trigger_producer
> |
> --98.40%--trigger_producer
> |
> |--96.63%--syscall
> | |
> | --71.90%--el0t_64_sync
> | el0t_64_sync_handler
> | el0_svc
> | do_el0_svc
> | |
> | |--70.94%--el0_svc_common
> | | |
> | |
> |--29.55%--invoke_syscall
> | | | |
> | | |
> |--26.23%--__arm64_sys_getpgid
> | | | |
> |
> | | | |
> |--18.88%--bpf_trampoline_6442462665_0
> | | | |
> | |
> | | | |
> | |--6.85%--__bpf_prog_enter
> | | | |
> | | |
> | | | |
> | | --2.68%--migrate_disable
> | | | |
> | |
> | | | |
> | |--5.28%--__bpf_prog_exit
> | | | |
> | | |
> | | | |
> | | --1.29%--migrate_enable
> | | | |
> | |
> | | | |
> |
> |--3.96%--bpf_prog_21856463590f61f1_bench_trigger_fentry
> | | | |
> | |
> | | | |
> | --0.61%--__rcu_read_lock
> | | | |
> |
> | | | |
> --4.42%--find_task_by_vpid
> | | | |
> |
> | | | |
> |--2.53%--radix_tree_lookup
> | | | |
> |
> | | | |
> --0.61%--idr_find
> | | | |
> | | |
> --0.81%--pid_vnr
> | | |
> | |
> --0.53%--__arm64_sys_getpgid
> | |
> | --0.95%--invoke_syscall
> |
> --0.99%--syscall@plt
>
>
> 3.2 with indirect call
>
> 98.68% 0.20% bench bench
> [.] trigger_producer
> |
> --98.48%--trigger_producer
> |
> --97.47%--syscall
> |
> --76.11%--el0t_64_sync
> el0t_64_sync_handler
> el0_svc
> do_el0_svc
> |
> |--75.52%--el0_svc_common
> | |
> |
> |--46.35%--invoke_syscall
> | | |
> | |
> --44.06%--__arm64_sys_getpgid
> | |
> |
> | |
> |--35.40%--ftrace_caller
> | |
> | |
> | |
> | --34.04%--fprobe_handler
> | |
> | |
> | |
> | |--15.61%--bpf_fprobe_entry
> | |
> | | |
> | |
> | | |--3.79%--__bpf_prog_enter
> | |
> | | | |
> | |
> | | |
> --0.80%--migrate_disable
> | |
> | | |
> | |
> | | |--3.74%--__bpf_prog_exit
> | |
> | | | |
> | |
> | | |
> --0.77%--migrate_enable
> | |
> | | |
> | |
> | |
> --2.65%--bpf_prog_21856463590f61f1_bench_trigger_fentry
> | |
> | |
> | |
> | |--12.65%--rethook_trampoline_handler
> | |
> | |
> | |
> | |--1.70%--rethook_try_get
> | |
> | | |
> | |
> | | --1.48%--rcu_is_watching
> | |
> | |
> | |
> | |--1.46%--freelist_try_get
> | |
> | |
> | |
> | --0.65%--rethook_recycle
> | |
> |
> | |
> --6.36%--find_task_by_vpid
> | |
> |
> | |
> |--3.64%--radix_tree_lookup
> | |
> |
> | |
> --1.74%--idr_find
> | |
> | --1.05%--ftrace_caller
> |
> --0.59%--invoke_syscall
>
> This looks slightly better than before but it is actually still a
> pretty significant performance hit compared to direct calls.
>
> Note that I can't really make sense of the perf report with indirect
> calls. it always reports it spent 12% of the time in
> rethook_trampoline_handler but I verified with both a WARN in that
> function and a breakpoint with a debugger, this function does *not*
> get called when running this "bench trig-fentry" benchmark. Also it
> wouldn't make sense for fprobe_handler to call it so I'm quite
> confused why perf would report this call and such a long time spent
> there. Anyone know what I could be missing here ?


--
Masami Hiramatsu (Google) <[email protected]>

2022-10-21 17:20:17

by Florent Revest

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Fri, Oct 21, 2022 at 1:32 PM Masami Hiramatsu <[email protected]> wrote:
> On Mon, 17 Oct 2022 19:55:06 +0200
> Florent Revest <[email protected]> wrote:
> > Mark finished an implementation of his per-callsite-ops and min-args
> > branches (meaning that we can now skip the expensive ftrace's saving
> > of all registers and iteration over all ops if only one is attached)
> > - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
> >
> > And Masami wrote similar patches to what I had originally done to
> > fprobe in my branch:
> > - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
> >
> > So I could rebase my previous "bpf on fprobe" branch on top of these:
> > (as before, it's just good enough for benchmarking and to give a
> > general sense of the idea, not for a thorough code review):
> > - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
> >
> > And I could run the benchmarks against my rpi4. I have different
> > baseline numbers as Xu so I ran everything again and tried to keep the
> > format the same. "indirect call" refers to my branch I just linked and
> > "direct call" refers to the series this is a reply to (Xu's work)
>
> Thanks for sharing the measurement results. Yes, fprobes/rethook
> implementation is just porting the kretprobes implementation, thus
> it may not be so optimized.
>
> BTW, I remember Wuqiang's patch for kretprobes.
>
> https://lore.kernel.org/all/[email protected]/T/#u

Oh that's a great idea, thanks for pointing it out Masami!

> This is for the scalability fixing, but may possible to improve
> the performance a bit. It is not hard to port to the recent kernel.
> Can you try it too?

I rebased it on my branch
https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3

And I got measurements again. Unfortunately it looks like this does not help :/

New benchmark results: https://paste.debian.net/1257856/
New perf report: https://paste.debian.net/1257859/

The fprobe based approach is still significantly slower than the
direct call approach.

> Anyway, eventually, I would like to remove the current kretprobe
> based implementation and unify fexit hook with function-graph
> tracer. It should make more better perfromance on it.

That makes sense. :) How do you imagine the unified solution?
Would both the fgraph and fprobe APIs keep existing but, under the
hood, one would be implemented on top of the other? (or would one be
gone?) Would we replace the rethook freelist with the function graph's
per-task shadow stacks? (or the other way around?)

> > Note that I can't really make sense of the perf report with indirect
> > calls. it always reports it spent 12% of the time in
> > rethook_trampoline_handler but I verified with both a WARN in that
> > function and a breakpoint with a debugger, this function does *not*
> > get called when running this "bench trig-fentry" benchmark. Also it
> > wouldn't make sense for fprobe_handler to call it so I'm quite
> > confused why perf would report this call and such a long time spent
> > there. Anyone know what I could be missing here ?

I made slight progress on this. If I put the vmlinux file in the cwd
where I run perf report, the reports no longer contain references to
rethook_trampoline_handler. Instead, they have a few
0xffff800008xxxxxx addresses under fprobe_handler. (like in the
pastebin I just linked)

It's still pretty weird because that range is the vmalloc area on
arm64 and I don't understand why anything under fprobe_handler would
execute there. However, I'm also definitely sure that these 12% are
actually spent getting buffers from the rethook memory pool because if
I replace rethook_try_get and rethook_recycle calls with the usage of
a dummy static bss buffer (for the sake of benchmarking the
"theoretical best case scenario") these weird perf report traces are
gone and the 12% are saved. https://paste.debian.net/1257862/

This is why I would be interested in seeing rethook's memory pool
reimplemented on top of something like
https://lwn.net/Articles/788923/. If we get closer to the performance
of the theoretical best case scenario where getting a blob of
memory is ~free (and I think it could be the case with a per task
shadow stack like fgraph's), then a bpf on fprobe implementation would
start to approach the performances of a direct called trampoline on
arm64: https://paste.debian.net/1257863/
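
For completeness, the "dummy static bss buffer" hack mentioned above
amounts to something like this (benchmark-only, clearly not correct
for concurrent or nested use; helper names are made up):

/* benchmark hack: bypass the shared rethook object pool entirely */
static struct rethook_node dummy_node;          /* lives in .bss */

static struct rethook_node *dummy_try_get(struct rethook *rh)
{
        dummy_node.rethook = rh;
        return &dummy_node;                     /* no cmpxchg, no freelist */
}

static void dummy_recycle(struct rethook_node *node)
{
        /* nothing to hand back to a pool */
}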

2022-10-24 16:27:31

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On Fri, 21 Oct 2022 18:49:38 +0200
Florent Revest <[email protected]> wrote:

> On Fri, Oct 21, 2022 at 1:32 PM Masami Hiramatsu <[email protected]> wrote:
> > On Mon, 17 Oct 2022 19:55:06 +0200
> > Florent Revest <[email protected]> wrote:
> > > Mark finished an implementation of his per-callsite-ops and min-args
> > > branches (meaning that we can now skip the expensive ftrace's saving
> > > of all registers and iteration over all ops if only one is attached)
> > > - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
> > >
> > > And Masami wrote similar patches to what I had originally done to
> > > fprobe in my branch:
> > > - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
> > >
> > > So I could rebase my previous "bpf on fprobe" branch on top of these:
> > > (as before, it's just good enough for benchmarking and to give a
> > > general sense of the idea, not for a thorough code review):
> > > - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
> > >
> > > And I could run the benchmarks against my rpi4. I have different
> > > baseline numbers as Xu so I ran everything again and tried to keep the
> > > format the same. "indirect call" refers to my branch I just linked and
> > > "direct call" refers to the series this is a reply to (Xu's work)
> >
> > Thanks for sharing the measurement results. Yes, fprobes/rethook
> > implementation is just porting the kretprobes implementation, thus
> > it may not be so optimized.
> >
> > BTW, I remember Wuqiang's patch for kretprobes.
> >
> > https://lore.kernel.org/all/[email protected]/T/#u
>
> Oh that's a great idea, thanks for pointing it out Masami!
>
> > This is for the scalability fixing, but may possible to improve
> > the performance a bit. It is not hard to port to the recent kernel.
> > Can you try it too?
>
> I rebased it on my branch
> https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>
> And I got measurements again. Unfortunately it looks like this does not help :/
>
> New benchmark results: https://paste.debian.net/1257856/
> New perf report: https://paste.debian.net/1257859/

Hmm, OK. That is only for the scalability.

>
> The fprobe based approach is still significantly slower than the
> direct call approach.
>
> > Anyway, eventually, I would like to remove the current kretprobe
> > based implementation and unify fexit hook with function-graph
> > tracer. It should make more better perfromance on it.
>
> That makes sense. :) How do you imagine the unified solution ?
> Would both the fgraph and fprobe APIs keep existing but under the hood
> one would be implemented on the other ? (or would one be gone ?) Would
> we replace the rethook freelist with the function graph's per-task
> shadow stacks ? (or the other way around ?))

Yes, that's right. As long as we use a global object pool, there will
be a performance bottleneck in picking up an object and returning it
to the pool. A per-CPU pool may give better performance but is more
complicated because the pools need to be balanced. A per-task shadow
stack will solve this. So I plan to expand the fgraph API and use it
in fprobe instead of rethook. (I had planned to re-implement rethook,
but I realized that it has more issues than I thought.)
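
For reference, the function-graph API already provides that per-task
entry/return pairing via its ret_stack. A minimal sketch of what an
fprobe backend built on it would register; the callback bodies are
placeholders, and note that fgraph currently supports only a single
user, which is part of what the planned API expansion has to address:

static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace)
{
        /* entry handler; returning 0 asks fgraph not to hook the return */
        return 1;
}

static void fprobe_fgraph_return(struct ftrace_graph_ret *trace)
{
        /* return handler, reached via the per-task shadow (ret) stack */
}

static struct fgraph_ops fprobe_fgraph_ops = {
        .entryfunc      = fprobe_fgraph_entry,
        .retfunc        = fprobe_fgraph_return,
};

/* register_ftrace_graph(&fprobe_fgraph_ops); */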

> > > Note that I can't really make sense of the perf report with indirect
> > > calls. it always reports it spent 12% of the time in
> > > rethook_trampoline_handler but I verified with both a WARN in that
> > > function and a breakpoint with a debugger, this function does *not*
> > > get called when running this "bench trig-fentry" benchmark. Also it
> > > wouldn't make sense for fprobe_handler to call it so I'm quite
> > > confused why perf would report this call and such a long time spent
> > > there. Anyone know what I could be missing here ?
>
> I made slight progress on this. If I put the vmlinux file in the cwd
> where I run perf report, the reports no longer contain references to
> rethook_trampoline_handler. Instead, they have a few
> 0xffff800008xxxxxx addresses under fprobe_handler. (like in the
> pastebin I just linked)
>
> It's still pretty weird because that range is the vmalloc area on
> arm64 and I don't understand why anything under fprobe_handler would
> execute there. However, I'm also definitely sure that these 12% are
> actually spent getting buffers from the rethook memory pool because if
> I replace rethook_try_get and rethook_recycle calls with the usage of
> a dummy static bss buffer (for the sake of benchmarking the
> "theoretical best case scenario") these weird perf report traces are
> gone and the 12% are saved. https://paste.debian.net/1257862/

Yeah, I understand that. Rethook (and kretprobes) is not designed
for such a heavy workload.

> This is why I would be interested in seeing rethook's memory pool
> reimplemented on top of something like
> https://lwn.net/Articles/788923/ If we get closer to the performance
> of the the theoretical best case scenario where getting a blob of
> memory is ~free (and I think it could be the case with a per task
> shadow stack like fgraph's), then a bpf on fprobe implementation would
> start to approach the performances of a direct called trampoline on
> arm64: https://paste.debian.net/1257863/

OK, I think we are on the same page and heading in the same direction.

Thank you,

--
Masami Hiramatsu (Google) <[email protected]>

2022-11-10 05:09:21

by wuqiang.matt

[permalink] [raw]
Subject: Re: [PATCH bpf-next v2 0/4] Add ftrace direct call for arm64

On 2022/10/22 00:49, Florent Revest wrote:
> On Fri, Oct 21, 2022 at 1:32 PM Masami Hiramatsu <[email protected]> wrote:
>> On Mon, 17 Oct 2022 19:55:06 +0200
>> Florent Revest <[email protected]> wrote:
>>> Mark finished an implementation of his per-callsite-ops and min-args
>>> branches (meaning that we can now skip the expensive ftrace's saving
>>> of all registers and iteration over all ops if only one is attached)
>>> - https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64-ftrace-call-ops-20221017
>>>
>>> And Masami wrote similar patches to what I had originally done to
>>> fprobe in my branch:
>>> - https://github.com/mhiramat/linux/commits/kprobes/fprobe-update
>>>
>>> So I could rebase my previous "bpf on fprobe" branch on top of these:
>>> (as before, it's just good enough for benchmarking and to give a
>>> general sense of the idea, not for a thorough code review):
>>> - https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>>>
>>> And I could run the benchmarks against my rpi4. I have different
>>> baseline numbers from Xu's, so I ran everything again and tried to keep the
>>> format the same. "indirect call" refers to my branch I just linked and
>>> "direct call" refers to the series this is a reply to (Xu's work)
>>
>> Thanks for sharing the measurement results. Yes, the fprobe/rethook
>> implementation is just a port of the kretprobes implementation, so
>> it may not be well optimized.
>>
>> BTW, I remember Wuqiang's patch for kretprobes.
>>
>> https://lore.kernel.org/all/[email protected]/T/#u
>
> Oh that's a great idea, thanks for pointing it out Masami!
>
>> This is for fixing scalability, but it may also improve
>> performance a bit. It is not hard to port to a recent kernel.
>> Can you try it too?
>
> I rebased it on my branch
> https://github.com/FlorentRevest/linux/commits/fprobe-min-args-3
>
> And I got measurements again. Unfortunately it looks like this does not help :/
>
> New benchmark results: https://paste.debian.net/1257856/
> New perf report: https://paste.debian.net/1257859/
>
> The fprobe based approach is still significantly slower than the
> direct call approach.

FYI, a new version was released, based on a ring-array, which brings a 6.96%
increase in throughput for the 1-thread case on ARM64.

https://lore.kernel.org/all/[email protected]/

Could you share more details of the test? I'll give it a try.
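
As a rough illustration of the ring-array idea (this is not the actual
objpool code and all names below are invented), each CPU owns a
fixed-size ring of pre-allocated objects indexed by head/tail counters,
so the common case never touches a shared freelist:

#include <stdint.h>
#include <stddef.h>

#define RING_SIZE 64			/* power of two, per-CPU capacity */

struct obj_ring {
	uint32_t head;			/* next slot to allocate from */
	uint32_t tail;			/* next slot to free into */
	void *slots[RING_SIZE];		/* pre-allocated objects */
};

/* Pop an object from this CPU's ring; NULL if the ring is empty. */
static void *ring_pop(struct obj_ring *r)
{
	if (r->head == r->tail)
		return NULL;
	return r->slots[r->head++ & (RING_SIZE - 1)];
}

/* Push an object back; returns -1 if the ring is already full. */
static int ring_push(struct obj_ring *r, void *obj)
{
	if (r->tail - r->head == RING_SIZE)
		return -1;
	r->slots[r->tail++ & (RING_SIZE - 1)] = obj;
	return 0;
}

The real patch of course needs atomic head/tail updates for objects
freed from another CPU; the sketch only shows the single-CPU fast path.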

>> Anyway, eventually, I would like to remove the current kretprobe
>> based implementation and unify the fexit hook with the function-graph
>> tracer. That should give better performance.
>
> That makes sense. :) How do you imagine the unified solution?
> Would both the fgraph and fprobe APIs keep existing, but under the hood
> one would be implemented on top of the other? (or would one be gone?)
> Would we replace the rethook freelist with the function graph's per-task
> shadow stacks? (or the other way around?)

How about a private pool designated for the local CPU? If the fprobed
routine returns on the same CPU, object allocation and reclaim can take
a fast path, which should give about the same performance as a shadow
stack. Otherwise, returning the object takes a slow path (as slow as the
current freelist or objpool).
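
A minimal sketch of that fast/slow split (invented names, locking and
the remote handoff omitted; preemption is assumed disabled here, as it
would be in a probe handler):

#include <linux/percpu.h>
#include <linux/smp.h>

struct pool_obj {
	int owner_cpu;			/* CPU whose pool this object belongs to */
	struct pool_obj *next;
};

struct cpu_pool {
	struct pool_obj *free;		/* private free list of this CPU */
};
static DEFINE_PER_CPU(struct cpu_pool, cpu_pools);

/* Hypothetical slow-path handoff to the owning CPU's pool (not shown). */
static void return_to_remote_pool(struct pool_obj *obj);

static void pool_free(struct pool_obj *obj)
{
	if (obj->owner_cpu == smp_processor_id()) {
		/* Fast path: still on the owning CPU, push straight onto
		 * the local free list, no atomics needed. */
		struct cpu_pool *p = this_cpu_ptr(&cpu_pools);

		obj->next = p->free;
		p->free = obj;
	} else {
		/* Slow path: the object migrated, so it must be handed
		 * back via a shared (freelist- or objpool-like) structure. */
		return_to_remote_pool(obj);
	}
}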

>>> Note that I can't really make sense of the perf report with indirect
>>> calls. It always reports that 12% of the time is spent in
>>> rethook_trampoline_handler, but I verified with both a WARN in that
>>> function and a breakpoint in a debugger that this function does *not*
>>> get called when running this "bench trig-fentry" benchmark. Also, it
>>> wouldn't make sense for fprobe_handler to call it, so I'm quite
>>> confused why perf would report this call and such a long time spent
>>> there. Does anyone know what I could be missing here?
>
> I made slight progress on this. If I put the vmlinux file in the cwd
> where I run perf report, the reports no longer contain references to
> rethook_trampoline_handler. Instead, they have a few
> 0xffff800008xxxxxx addresses under fprobe_handler. (like in the
> pastebin I just linked)
>
> It's still pretty weird because that range is the vmalloc area on
> arm64 and I don't understand why anything under fprobe_handler would
> execute there. However, I'm also definitely sure that these 12% are
> actually spent getting buffers from the rethook memory pool because if
> I replace rethook_try_get and rethook_recycle calls with the usage of
> a dummy static bss buffer (for the sake of benchmarking the
> "theoretical best case scenario") these weird perf report traces are
> gone and the 12% are saved. https://paste.debian.net/1257862/
>
> This is why I would be interested in seeing rethook's memory pool
> reimplemented on top of something like
> https://lwn.net/Articles/788923/. If we get closer to the performance
> of the theoretical best case scenario where getting a blob of
> memory is ~free (and I think it could be the case with a per-task
> shadow stack like fgraph's), then a bpf-on-fprobe implementation would
> start to approach the performance of a direct-called trampoline on
> arm64: https://paste.debian.net/1257863/