2023-06-13 03:02:10

by Menglong Dong

Subject: [PATCH bpf-next v5 0/3] bpf, x86: allow function arguments up to 12 for TRACING

From: Menglong Dong <[email protected]>

For now, a BPF program of type BPF_PROG_TYPE_TRACING can only be used
on kernel functions whose argument count is no more than 6. This is not
friendly at all, as plenty of kernel functions take more than 6 arguments.
Based on the current kernel version, here are the statistics of the
function argument counts:

argument count | function count
7 | 704
8 | 270
9 | 84
10 | 47
11 | 47
12 | 27
13 | 22
14 | 5
15 | 0
16 | 1

Therefore, let's enhance it by increasing the number of function
arguments allowed in arch_prepare_bpf_trampoline(), for now only on
x86_64.

In the 1st patch, we clean the garbage values in the upper bytes of the
stack slots when the trampoline stores the arguments from regs onto the
stack.

In the 2nd patch, we teach arch_prepare_bpf_trampoline() to copy
function arguments on the stack for the x86 arch. Therefore, the maximum
number of arguments can be up to MAX_BPF_FUNC_ARGS for FENTRY and FEXIT.
Meanwhile, we clean the potential garbage values when we copy the
arguments on-stack.

And the 3rd patch adds the testcases for this series.

Changes since v4:
- consider the case where a struct argument can't be held in regs
- add comment for some code
- add testcases for MODIFY_RETURN
- rebase to the latest

Changes since v3:
- try to make the stack pointer 16-byte aligned. Not sure if I'm right :)
- introduce clean_garbage() to clean the garbage when the argument count is 7
- use different data type in bpf_testmod_fentry_test{7,12}
- add testcase for garbage values in ctx

Changes since v2:
- keep MAX_BPF_FUNC_ARGS still
- clean garbage value in upper bytes in the 2nd patch
- move bpf_fentry_test{7,12} to bpf_testmod.c and rename them to
bpf_testmod_fentry_test{7,12} in the 3rd patch

Changes since v1:
- change the maximum function arguments to 14 from 12
- add testcases (Jiri Olsa)
- replace EMIT4 with EMIT3_off32 for "lea" to prevent overflow

Menglong Dong (3):
bpf, x86: clean garbage values when store args from regs into stack
bpf, x86: allow function arguments up to 12 for TRACING
selftests/bpf: add testcase for TRACING with 6+ arguments

arch/x86/net/bpf_jit_comp.c | 232 ++++++++++++++----
net/bpf/test_run.c | 23 +-
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 45 +++-
.../selftests/bpf/prog_tests/fentry_fexit.c | 4 +-
.../selftests/bpf/prog_tests/fentry_test.c | 2 +
.../selftests/bpf/prog_tests/fexit_test.c | 2 +
.../selftests/bpf/prog_tests/modify_return.c | 20 +-
.../testing/selftests/bpf/progs/fentry_test.c | 52 ++++
.../testing/selftests/bpf/progs/fexit_test.c | 54 ++++
.../selftests/bpf/progs/modify_return.c | 40 +++
10 files changed, 424 insertions(+), 50 deletions(-)

--
2.40.1



2023-06-13 03:02:35

by Menglong Dong

Subject: [PATCH bpf-next v5 3/3] selftests/bpf: add testcase for TRACING with 6+ arguments

From: Menglong Dong <[email protected]>

Add test9/test10 in fexit_test.c and fentry_test.c to test fentry
and fexit on target functions that have 7/11 arguments.

Correspondingly, add bpf_testmod_fentry_test7() and
bpf_testmod_fentry_test11() to bpf_testmod.c.

Meanwhile, add bpf_modify_return_test2() to test_run.c to test
MODIFY_RETURN with 7 arguments.

And all the testcases passed:

./test_progs -t fexit
Summary: 5/12 PASSED, 0 SKIPPED, 0 FAILED

./test_progs -t fentry
Summary: 3/0 PASSED, 0 SKIPPED, 0 FAILED

./test_progs -t modify_return
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Menglong Dong <[email protected]>
---
v5:
- add testcases for MODIFY_RETURN
v4:
- use different type for args in bpf_testmod_fentry_test{7,12}
- add testcase for garbage values in ctx
v3:
- move bpf_fentry_test{7,12} to bpf_testmod.c and rename them to
bpf_testmod_fentry_test{7,12} meanwhile
- get return value by bpf_get_func_ret() in
"fexit/bpf_testmod_fentry_test12", as we don't change ___bpf_ctx_cast()
in this version
---
net/bpf/test_run.c | 23 ++++++--
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 45 +++++++++++++++-
.../selftests/bpf/prog_tests/fentry_fexit.c | 4 +-
.../selftests/bpf/prog_tests/fentry_test.c | 2 +
.../selftests/bpf/prog_tests/fexit_test.c | 2 +
.../selftests/bpf/prog_tests/modify_return.c | 20 ++++++-
.../testing/selftests/bpf/progs/fentry_test.c | 52 ++++++++++++++++++
.../testing/selftests/bpf/progs/fexit_test.c | 54 +++++++++++++++++++
.../selftests/bpf/progs/modify_return.c | 40 ++++++++++++++
9 files changed, 235 insertions(+), 7 deletions(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 2321bd2f9964..df58e8bf5e07 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -561,6 +561,13 @@ __bpf_kfunc int bpf_modify_return_test(int a, int *b)
return a + *b;
}

+__bpf_kfunc int bpf_modify_return_test2(int a, int *b, short c, int d,
+ void *e, char f, int g)
+{
+ *b += 1;
+ return a + *b + c + d + (long)e + f + g;
+}
+
int noinline bpf_fentry_shadow_test(int a)
{
return a + 1;
@@ -596,9 +603,13 @@ __diag_pop();

BTF_SET8_START(bpf_test_modify_return_ids)
BTF_ID_FLAGS(func, bpf_modify_return_test)
+BTF_ID_FLAGS(func, bpf_modify_return_test2)
BTF_ID_FLAGS(func, bpf_fentry_test1, KF_SLEEPABLE)
BTF_SET8_END(bpf_test_modify_return_ids)

+BTF_ID_LIST(bpf_modify_return_test_id)
+BTF_ID(func, bpf_modify_return_test)
+
static const struct btf_kfunc_id_set bpf_test_modify_return_set = {
.owner = THIS_MODULE,
.set = &bpf_test_modify_return_ids,
@@ -661,9 +672,15 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
goto out;
break;
case BPF_MODIFY_RETURN:
- ret = bpf_modify_return_test(1, &b);
- if (b != 2)
- side_effect = 1;
+ if (prog->aux->attach_btf_id == *bpf_modify_return_test_id) {
+ ret = bpf_modify_return_test(1, &b);
+ if (b != 2)
+ side_effect = 1;
+ } else {
+ ret = bpf_modify_return_test2(1, &b, 3, 4, (void *)5, 6, 7);
+ if (b != 2)
+ side_effect = 1;
+ }
break;
default:
goto out;
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index aaf6ef1201c7..737ef0a5dca4 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -34,6 +34,11 @@ struct bpf_testmod_struct_arg_3 {
int b[];
};

+struct bpf_testmod_struct_arg_4 {
+ u64 a;
+ int b;
+};
+
__diag_push();
__diag_ignore_all("-Wmissing-prototypes",
"Global functions as their definitions will be in bpf_testmod.ko BTF");
@@ -191,6 +196,35 @@ noinline int bpf_testmod_fentry_test3(char a, int b, u64 c)
return a + b + c;
}

+noinline int bpf_testmod_fentry_test7(u64 a, void *b, short c, int d,
+ void *e, char f, int g)
+{
+ return a + (long)b + c + d + (long)e + f + g;
+}
+
+noinline int bpf_testmod_fentry_test11(u64 a, void *b, short c, int d,
+ void *e, char f, int g,
+ unsigned int h, long i, __u64 j,
+ unsigned long k)
+{
+ return a + (long)b + c + d + (long)e + f + g + h + i + j + k;
+}
+
+noinline int bpf_testmod_fentry_test_struct1(u64 a, void *b, short c,
+ int d, void *e,
+ struct bpf_testmod_struct_arg_4 f)
+{
+ return a + (long)b + c + d + (long)e + f.a + f.b;
+}
+
+noinline int bpf_testmod_fentry_test_struct2(u64 a, void *b, short c,
+ int d, void *e,
+ struct bpf_testmod_struct_arg_4 f,
+ int g)
+{
+ return a + (long)b + c + d + (long)e + f.a + f.b + g;
+}
+
int bpf_testmod_fentry_ok;

noinline ssize_t
@@ -206,6 +240,7 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
struct bpf_testmod_struct_arg_1 struct_arg1 = {10};
struct bpf_testmod_struct_arg_2 struct_arg2 = {2, 3};
struct bpf_testmod_struct_arg_3 *struct_arg3;
+ struct bpf_testmod_struct_arg_4 struct_arg4 = {21, 22};
int i = 1;

while (bpf_testmod_return_ptr(i))
@@ -243,7 +278,15 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,

if (bpf_testmod_fentry_test1(1) != 2 ||
bpf_testmod_fentry_test2(2, 3) != 5 ||
- bpf_testmod_fentry_test3(4, 5, 6) != 15)
+ bpf_testmod_fentry_test3(4, 5, 6) != 15 ||
+ bpf_testmod_fentry_test7(16, (void *)17, 18, 19, (void *)20,
+ 21, 22) != 133 ||
+ bpf_testmod_fentry_test11(16, (void *)17, 18, 19, (void *)20,
+ 21, 22, 23, 24, 25, 26) != 231 ||
+ bpf_testmod_fentry_test_struct1(16, (void *)17, 18, 19,
+ (void *)20, struct_arg4) != 133 ||
+ bpf_testmod_fentry_test_struct2(16, (void *)17, 18, 19,
+ (void *)20, struct_arg4, 23) != 156)
goto out;

bpf_testmod_fentry_ok = 1;
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
index 130f5b82d2e6..7decf57e7591 100644
--- a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
@@ -31,10 +31,12 @@ void test_fentry_fexit(void)
ASSERT_OK(err, "ipv6 test_run");
ASSERT_OK(topts.retval, "ipv6 test retval");

+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+
fentry_res = (__u64 *)fentry_skel->bss;
fexit_res = (__u64 *)fexit_skel->bss;
printf("%lld\n", fentry_skel->bss->test1_result);
- for (i = 0; i < 8; i++) {
+ for (i = 0; i < 13; i++) {
ASSERT_EQ(fentry_res[i], 1, "fentry result");
ASSERT_EQ(fexit_res[i], 1, "fexit result");
}
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_test.c b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
index c0d1d61d5f66..e1c0ce40febf 100644
--- a/tools/testing/selftests/bpf/prog_tests/fentry_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
@@ -24,6 +24,8 @@ static int fentry_test(struct fentry_test_lskel *fentry_skel)
ASSERT_OK(err, "test_run");
ASSERT_EQ(topts.retval, 0, "test_run");

+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+
result = (__u64 *)fentry_skel->bss;
for (i = 0; i < sizeof(*fentry_skel->bss) / sizeof(__u64); i++) {
if (!ASSERT_EQ(result[i], 1, "fentry_result"))
diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_test.c b/tools/testing/selftests/bpf/prog_tests/fexit_test.c
index 101b7343036b..ea81fa913ec6 100644
--- a/tools/testing/selftests/bpf/prog_tests/fexit_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_test.c
@@ -24,6 +24,8 @@ static int fexit_test(struct fexit_test_lskel *fexit_skel)
ASSERT_OK(err, "test_run");
ASSERT_EQ(topts.retval, 0, "test_run");

+ ASSERT_OK(trigger_module_test_read(1), "trigger_read");
+
result = (__u64 *)fexit_skel->bss;
for (i = 0; i < sizeof(*fexit_skel->bss) / sizeof(__u64); i++) {
if (!ASSERT_EQ(result[i], 1, "fexit_result"))
diff --git a/tools/testing/selftests/bpf/prog_tests/modify_return.c b/tools/testing/selftests/bpf/prog_tests/modify_return.c
index 5d9955af6247..93febb6d81ef 100644
--- a/tools/testing/selftests/bpf/prog_tests/modify_return.c
+++ b/tools/testing/selftests/bpf/prog_tests/modify_return.c
@@ -11,7 +11,8 @@
#define UPPER(x) ((x) >> 16)


-static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret)
+static void run_test(__u32 input_retval, __u16 want_side_effect,
+ __s16 want_ret, __s16 want_ret2)
{
struct modify_return *skel = NULL;
int err, prog_fd;
@@ -41,6 +42,19 @@ static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret)
ASSERT_EQ(skel->bss->fexit_result, 1, "modify_return fexit_result");
ASSERT_EQ(skel->bss->fmod_ret_result, 1, "modify_return fmod_ret_result");

+ prog_fd = bpf_program__fd(skel->progs.fmod_ret_test2);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+
+ side_effect = UPPER(topts.retval);
+ ret = LOWER(topts.retval);
+
+ ASSERT_EQ(ret, want_ret2, "test_run ret2");
+ ASSERT_EQ(side_effect, want_side_effect, "modify_return side_effect2");
+ ASSERT_EQ(skel->bss->fentry_result2, 1, "modify_return fentry_result2");
+ ASSERT_EQ(skel->bss->fexit_result2, 1, "modify_return fexit_result2");
+ ASSERT_EQ(skel->bss->fmod_ret_result2, 1, "modify_return fmod_ret_result2");
+
cleanup:
modify_return__destroy(skel);
}
@@ -50,8 +64,10 @@ void serial_test_modify_return(void)
{
run_test(0 /* input_retval */,
1 /* want_side_effect */,
- 4 /* want_ret */);
+ 4 /* want_ret */,
+ 29 /* want_ret */);
run_test(-EINVAL /* input_retval */,
0 /* want_side_effect */,
+ -EINVAL /* want_ret */,
-EINVAL /* want_ret */);
}
diff --git a/tools/testing/selftests/bpf/progs/fentry_test.c b/tools/testing/selftests/bpf/progs/fentry_test.c
index 52a550d281d9..3614c85c4ead 100644
--- a/tools/testing/selftests/bpf/progs/fentry_test.c
+++ b/tools/testing/selftests/bpf/progs/fentry_test.c
@@ -77,3 +77,55 @@ int BPF_PROG(test8, struct bpf_fentry_test_t *arg)
test8_result = 1;
return 0;
}
+
+__u64 test9_result = 0;
+SEC("fentry/bpf_testmod_fentry_test7")
+int BPF_PROG(test9, __u64 a, void *b, short c, int d, void *e, char f,
+ int g)
+{
+ test9_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && f == 21 && g == 22;
+ return 0;
+}
+
+__u64 test10_result = 0;
+SEC("fentry/bpf_testmod_fentry_test11")
+int BPF_PROG(test10, __u64 a, void *b, short c, int d, void *e, char f,
+ int g, unsigned int h, long i, __u64 j, unsigned long k)
+{
+ test10_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && f == 21 && g == 22 && h == 23 &&
+ i == 24 && j == 25 && k == 26;
+ return 0;
+}
+
+__u64 test11_result = 0;
+SEC("fentry/bpf_testmod_fentry_test11")
+int BPF_PROG(test11, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f,
+ __u64 g, __u64 h, __u64 i, __u64 j, __u64 k)
+{
+ test11_result = a == 16 && b == 17 && c == 18 && d == 19 &&
+ e == 20 && f == 21 && g == 22 && h == 23 &&
+ i == 24 && j == 25 && k == 26;
+ return 0;
+}
+
+__u64 test12_result = 0;
+SEC("fentry/bpf_testmod_fentry_test_struct1")
+int BPF_PROG(test12, __u64 a, void *b, short c, int d, void *e, __u64 s_a,
+ int s_b)
+{
+ test12_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && s_a == 21 && s_b == 22;
+ return 0;
+}
+
+__u64 test13_result = 0;
+SEC("fentry/bpf_testmod_fentry_test_struct2")
+int BPF_PROG(test13, __u64 a, void *b, short c, int d, void *e, __u64 s_a,
+ int s_b, int f)
+{
+ test13_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && s_a == 21 && s_b == 22 && f == 23;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fexit_test.c b/tools/testing/selftests/bpf/progs/fexit_test.c
index 8f1ccb7302e1..f6c50e4663db 100644
--- a/tools/testing/selftests/bpf/progs/fexit_test.c
+++ b/tools/testing/selftests/bpf/progs/fexit_test.c
@@ -78,3 +78,57 @@ int BPF_PROG(test8, struct bpf_fentry_test_t *arg)
test8_result = 1;
return 0;
}
+
+__u64 test9_result = 0;
+SEC("fexit/bpf_testmod_fentry_test7")
+int BPF_PROG(test9, __u64 a, void *b, short c, int d, void *e, char f,
+ int g, int ret)
+{
+ test9_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && f == 21 && g == 22 && ret == 133;
+ return 0;
+}
+
+__u64 test10_result = 0;
+SEC("fexit/bpf_testmod_fentry_test11")
+int BPF_PROG(test10, __u64 a, void *b, short c, int d, void *e, char f,
+ int g, unsigned int h, long i, __u64 j, unsigned long k,
+ int ret)
+{
+ test10_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && f == 21 && g == 22 && h == 23 &&
+ i == 24 && j == 25 && k == 26 && ret == 231;
+ return 0;
+}
+
+__u64 test11_result = 0;
+SEC("fexit/bpf_testmod_fentry_test11")
+int BPF_PROG(test11, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f,
+ __u64 g, __u64 h, __u64 i, __u64 j, __u64 k, __u64 ret)
+{
+ test11_result = a == 16 && b == 17 && c == 18 && d == 19 &&
+ e == 20 && f == 21 && g == 22 && h == 23 &&
+ i == 24 && j == 25 && k == 26 && ret == 231;
+ return 0;
+}
+
+__u64 test12_result = 0;
+SEC("fexit/bpf_testmod_fentry_test_struct1")
+int BPF_PROG(test12, __u64 a, void *b, short c, int d, void *e, __u64 s_a,
+ int s_b, __u64 ret)
+{
+ test12_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && s_a == 21 && s_b == 22 && ret == 133;
+ return 0;
+}
+
+__u64 test13_result = 0;
+SEC("fexit/bpf_testmod_fentry_test_struct2")
+int BPF_PROG(test13, __u64 a, void *b, short c, int d, void *e, __u64 s_a,
+ int s_b, int f, __u64 ret)
+{
+ test13_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
+ e == (void *)20 && s_a == 21 && s_b == 22 && f == 23 &&
+ ret == 156;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/modify_return.c b/tools/testing/selftests/bpf/progs/modify_return.c
index 8b7466a15c6b..3376d4849f58 100644
--- a/tools/testing/selftests/bpf/progs/modify_return.c
+++ b/tools/testing/selftests/bpf/progs/modify_return.c
@@ -47,3 +47,43 @@ int BPF_PROG(fexit_test, int a, __u64 b, int ret)

return 0;
}
+
+static int sequence2;
+
+__u64 fentry_result2 = 0;
+SEC("fentry/bpf_modify_return_test2")
+int BPF_PROG(fentry_test2, int a, int *b, short c, int d, void *e, char f,
+ int g)
+{
+ sequence2++;
+ fentry_result2 = (sequence2 == 1);
+ return 0;
+}
+
+__u64 fmod_ret_result2 = 0;
+SEC("fmod_ret/bpf_modify_return_test2")
+int BPF_PROG(fmod_ret_test2, int a, int *b, short c, int d, void *e, char f,
+ int g, int ret)
+{
+ sequence2++;
+ /* This is the first fmod_ret program, the ret passed should be 0 */
+ fmod_ret_result2 = (sequence2 == 2 && ret == 0);
+ return input_retval;
+}
+
+__u64 fexit_result2 = 0;
+SEC("fexit/bpf_modify_return_test2")
+int BPF_PROG(fexit_test2, int a, int *b, short c, int d, void *e, char f,
+ int g, int ret)
+{
+ sequence2++;
+ /* If the input_reval is non-zero a successful modification should have
+ * occurred.
+ */
+ if (input_retval)
+ fexit_result2 = (sequence2 == 3 && ret == input_retval);
+ else
+ fexit_result2 = (sequence2 == 3 && ret == 29);
+
+ return 0;
+}
--
2.40.1


2023-06-13 03:02:39

by Menglong Dong

Subject: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

From: Menglong Dong <[email protected]>

For now, a BPF program of type BPF_PROG_TYPE_TRACING can only be used
on kernel functions whose argument count is no more than 6. This is not
friendly at all, as plenty of kernel functions take more than 6 arguments.

Based on the current kernel version, here are the statistics of the
function argument counts:

argument count | function count
7 | 704
8 | 270
9 | 84
10 | 47
11 | 47
12 | 27
13 | 22
14 | 5
15 | 0
16 | 1

Therefore, let's enhance it by increasing the number of function
arguments allowed in arch_prepare_bpf_trampoline(), for now only on
x86_64.

For the case where we don't need to call the origin function, which means
without BPF_TRAMP_F_CALL_ORIG, we only need to copy the function
arguments that are stored in the caller's frame into the current frame.
The arguments beyond the first 6 are stored starting at "$rbp + 0x18",
and we copy them to "$rbp - regs_off + (6 * 8)".

For the case with BPF_TRAMP_F_CALL_ORIG, we need to prepare the arguments
on the stack before calling the origin function, which means we need to
allocate an extra "8 * (arg_count - 6)" bytes at the top of the stack.
Note that no data should be pushed to the stack before calling the origin
function. Therefore, we have to store rbx with 'mov' instead of 'push'.

We use EMIT3_off32() or EMIT4() for "lea" and "sub". The range of the
imm in "lea" and "sub" is [-128, 127] if EMIT4() is used. Therefore,
we use EMIT3_off32() instead if the imm is out of that range.

It works well for FENTRY/FEXIT/MODIFY_RETURN.

Signed-off-by: Menglong Dong <[email protected]>
---
v5:
- consider the case where a struct argument can't be held in regs
v4:
- make the stack 16-byte aligned if passing args on-stack is needed
- add the function arguments statistics to the commit log
v3:
- use EMIT3_off32() for "lea" and "sub" only when necessary
- make 12 as the maximum arguments count
v2:
- replace EMIT4 with EMIT3_off32 for "lea" to prevent overflow
- make MAX_BPF_FUNC_ARGS as the maximum argument count
---
arch/x86/net/bpf_jit_comp.c | 221 +++++++++++++++++++++++++++++++-----
1 file changed, 195 insertions(+), 26 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a407fbbffecd..47c699594dd8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1857,37 +1857,165 @@ st: if (is_imm8(insn->off))
return proglen;
}

-static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
- int stack_size)
+static inline void clean_stack_garbage(const struct btf_func_model *m,
+ u8 **pprog, int nr_args_on_stack,
+ int stack_size)
{
- int i;
+ int arg_size, off;
+ u8 *prog;
+
+ if (nr_args_on_stack != 1)
+ return;
+
+ /* the size of the last argument */
+ arg_size = m->arg_size[m->nr_args - 1];
+
+ /* Generally speaking, the compiler will pass the arguments
+ * on-stack with "push" instruction, which will take 8-byte
+ * on the stack. On this case, there won't be garbage values
+ * while we copy the arguments from origin stack frame to current
+ * in BPF_DW.
+ *
+ * However, sometimes the compiler will only allocate 4-byte on
+ * the stack for the arguments. For now, this case will only
+ * happen if there is only one argument on-stack and its size
+ * not more than 4 byte. On this case, there will be garbage
+ * values on the upper 4-byte where we store the argument on
+ * current stack frame.
+ *
+ * arguments on origin stack:
+ *
+ * stack_arg_1(4-byte) xxx(4-byte)
+ *
+ * what we copy:
+ *
+ * stack_arg_1(8-byte): stack_arg_1(origin) xxx
+ *
+ * and the xxx is the garbage values which we should clean here.
+ */
+ if (arg_size <= 4) {
+ off = -(stack_size - 4);
+ prog = *pprog;
+ /* mov DWORD PTR [rbp + off], 0 */
+ if (!is_imm8(off))
+ EMIT2_off32(0xC7, 0x85, off);
+ else
+ EMIT3(0xC7, 0x45, off);
+ EMIT(0, 4);
+ *pprog = prog;
+ }
+}
+
+static void save_args(const struct btf_func_model *m, u8 **prog,
+ int stack_size, bool on_stack)
+{
+ int arg_regs, first_off, nr_regs = 0, nr_stack = 0;
+ int i, j;

/* Store function arguments to stack.
* For a function that accepts two pointers the sequence will be:
* mov QWORD PTR [rbp-0x10],rdi
* mov QWORD PTR [rbp-0x8],rsi
*/
- for (i = 0; i < min(nr_regs, 6); i++)
- emit_stx(prog, BPF_DW, BPF_REG_FP,
- i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
- -(stack_size - i * 8));
+ for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) {
+ arg_regs = (m->arg_size[i] + 7) / 8;
+
+ /* According to the research of Yonghong, struct members
+ * should be all in register or all on the stack.
+ * Meanwhile, the compiler will pass the argument on regs
+ * if the remained regs can hold the argument.
+ *
+ * Disorder of the args can happen. For example:
+ *
+ * struct foo_struct {
+ * long a;
+ * int b;
+ * };
+ * int foo(char, char, char, char, char, struct foo_struct,
+ * char);
+ *
+ * the arg1-5,arg7 will be passed by regs, and arg6 will
+ * by stack.
+ *
+ * Therefore, we should keep the same logic as here when
+ * we restore the regs in restore_regs.
+ */
+ if (nr_regs + arg_regs > 6) {
+ /* copy function arguments from origin stack frame
+ * into current stack frame.
+ *
+ * The starting address of the arguments on-stack
+ * is:
+ * rbp + 8(push rbp) +
+ * 8(return addr of origin call) +
+ * 8(return addr of the caller)
+ * which means: rbp + 24
+ */
+ for (j = 0; j < arg_regs; j++) {
+ emit_ldx(prog, BPF_DW, BPF_REG_0, BPF_REG_FP,
+ nr_stack * 8 + 0x18);
+ emit_stx(prog, BPF_DW, BPF_REG_FP, BPF_REG_0,
+ -stack_size);
+
+ if (!nr_stack)
+ first_off = stack_size;
+ stack_size -= 8;
+ nr_stack++;
+ }
+ } else {
+ /* Only copy the arguments on-stack to current
+ * 'stack_size' and ignore the regs, used to
+ * prepare the arguments on-stack for orign call.
+ */
+ if (on_stack) {
+ nr_regs += arg_regs;
+ continue;
+ }
+
+ /* copy the arguments from regs into stack */
+ for (j = 0; j < arg_regs; j++) {
+ emit_stx(prog, BPF_DW, BPF_REG_FP,
+ nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + i,
+ -stack_size);
+ stack_size -= 8;
+ nr_regs++;
+ }
+ }
+ }
+
+ clean_stack_garbage(m, prog, nr_stack, first_off);
}

-static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
+static void restore_regs(const struct btf_func_model *m, u8 **prog,
int stack_size)
{
- int i;
+ int i, j, arg_regs, nr_regs = 0;

/* Restore function arguments from stack.
* For a function that accepts two pointers the sequence will be:
* EMIT4(0x48, 0x8B, 0x7D, 0xF0); mov rdi,QWORD PTR [rbp-0x10]
* EMIT4(0x48, 0x8B, 0x75, 0xF8); mov rsi,QWORD PTR [rbp-0x8]
+ *
+ * The logic here is similar to what we do in save_args()
*/
- for (i = 0; i < min(nr_regs, 6); i++)
- emit_ldx(prog, BPF_DW,
- i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
- BPF_REG_FP,
- -(stack_size - i * 8));
+ for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) {
+ arg_regs = (m->arg_size[i] + 7) / 8;
+ if (nr_regs + arg_regs <= 6) {
+ for (j = 0; j < arg_regs; j++) {
+ emit_ldx(prog, BPF_DW,
+ nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + i,
+ BPF_REG_FP,
+ -stack_size);
+ stack_size -= 8;
+ nr_regs++;
+ }
+ } else {
+ stack_size -= 8 * arg_regs;
+ }
+
+ if (nr_regs >= 6)
+ break;
+ }
}

static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
@@ -1915,7 +2043,10 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
/* arg1: mov rdi, progs[i] */
emit_mov_imm64(&prog, BPF_REG_1, (long) p >> 32, (u32) (long) p);
/* arg2: lea rsi, [rbp - ctx_cookie_off] */
- EMIT4(0x48, 0x8D, 0x75, -run_ctx_off);
+ if (!is_imm8(-run_ctx_off))
+ EMIT3_off32(0x48, 0x8D, 0xB5, -run_ctx_off);
+ else
+ EMIT4(0x48, 0x8D, 0x75, -run_ctx_off);

if (emit_rsb_call(&prog, bpf_trampoline_enter(p), prog))
return -EINVAL;
@@ -1931,7 +2062,10 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
emit_nops(&prog, 2);

/* arg1: lea rdi, [rbp - stack_size] */
- EMIT4(0x48, 0x8D, 0x7D, -stack_size);
+ if (!is_imm8(-stack_size))
+ EMIT3_off32(0x48, 0x8D, 0xBD, -stack_size);
+ else
+ EMIT4(0x48, 0x8D, 0x7D, -stack_size);
/* arg2: progs[i]->insnsi for interpreter */
if (!p->jited)
emit_mov_imm64(&prog, BPF_REG_2,
@@ -1961,7 +2095,10 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
/* arg2: mov rsi, rbx <- start time in nsec */
emit_mov_reg(&prog, true, BPF_REG_2, BPF_REG_6);
/* arg3: lea rdx, [rbp - run_ctx_off] */
- EMIT4(0x48, 0x8D, 0x55, -run_ctx_off);
+ if (!is_imm8(-run_ctx_off))
+ EMIT3_off32(0x48, 0x8D, 0x95, -run_ctx_off);
+ else
+ EMIT4(0x48, 0x8D, 0x55, -run_ctx_off);
if (emit_rsb_call(&prog, bpf_trampoline_exit(p), prog))
return -EINVAL;

@@ -2113,7 +2250,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
void *func_addr)
{
int i, ret, nr_regs = m->nr_args, stack_size = 0;
- int regs_off, nregs_off, ip_off, run_ctx_off;
+ int regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
@@ -2127,8 +2264,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
if (m->arg_flags[i] & BTF_FMODEL_STRUCT_ARG)
nr_regs += (m->arg_size[i] + 7) / 8 - 1;

- /* x86-64 supports up to 6 arguments. 7+ can be added in the future */
- if (nr_regs > 6)
+ /* x86-64 supports up to MAX_BPF_FUNC_ARGS arguments. 1-6
+ * are passed through regs, the remains are through stack.
+ */
+ if (nr_regs > MAX_BPF_FUNC_ARGS)
return -ENOTSUPP;

/* Generated trampoline stack layout:
@@ -2147,7 +2286,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
*
* RBP - ip_off [ traced function ] BPF_TRAMP_F_IP_ARG flag
*
+ * RBP - rbx_off [ rbx value ] always
+ *
* RBP - run_ctx_off [ bpf_tramp_run_ctx ]
+ *
+ * [ stack_argN ] BPF_TRAMP_F_CALL_ORIG
+ * [ ... ]
+ * [ stack_arg2 ]
+ * RBP - arg_stack_off [ stack_arg1 ]
*/

/* room for return value of orig_call or fentry prog */
@@ -2167,9 +2313,25 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i

ip_off = stack_size;

+ stack_size += 8;
+ rbx_off = stack_size;
+
stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7;
run_ctx_off = stack_size;

+ if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG)) {
+ stack_size += (nr_regs - 6) * 8;
+ /* make sure the stack pointer is 16-byte aligned if we
+ * need pass arguments on stack, which means
+ * [stack_size + 8(rbp) + 8(rip) + 8(origin rip)]
+ * should be 16-byte aligned. Following code depend on
+ * that stack_size is already 8-byte aligned.
+ */
+ stack_size += (stack_size % 16) ? 0 : 8;
+ }
+
+ arg_stack_off = stack_size;
+
if (flags & BPF_TRAMP_F_SKIP_FRAME) {
/* skip patched call instruction and point orig_call to actual
* body of the kernel function.
@@ -2189,8 +2351,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
x86_call_depth_emit_accounting(&prog, NULL);
EMIT1(0x55); /* push rbp */
EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
- EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
- EMIT1(0x53); /* push rbx */
+ if (!is_imm8(stack_size))
+ /* sub rsp, stack_size */
+ EMIT3_off32(0x48, 0x81, 0xEC, stack_size);
+ else
+ /* sub rsp, stack_size */
+ EMIT4(0x48, 0x83, 0xEC, stack_size);
+ /* mov QWORD PTR [rbp - rbx_off], rbx */
+ emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);

/* Store number of argument registers of the traced function:
* mov rax, nr_regs
@@ -2208,7 +2376,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off);
}

- save_regs(m, &prog, nr_regs, regs_off);
+ save_args(m, &prog, regs_off, false);

if (flags & BPF_TRAMP_F_CALL_ORIG) {
/* arg1: mov rdi, im */
@@ -2238,7 +2406,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
}

if (flags & BPF_TRAMP_F_CALL_ORIG) {
- restore_regs(m, &prog, nr_regs, regs_off);
+ restore_regs(m, &prog, regs_off);
+ save_args(m, &prog, arg_stack_off, true);

if (flags & BPF_TRAMP_F_ORIG_STACK) {
emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
@@ -2279,7 +2448,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
}

if (flags & BPF_TRAMP_F_RESTORE_REGS)
- restore_regs(m, &prog, nr_regs, regs_off);
+ restore_regs(m, &prog, regs_off);

/* This needs to be done regardless. If there were fmod_ret programs,
* the return value is only updated on the stack and still needs to be
@@ -2298,7 +2467,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
if (save_ret)
emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);

- EMIT1(0x5B); /* pop rbx */
+ emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
EMIT1(0xC9); /* leave */
if (flags & BPF_TRAMP_F_SKIP_FRAME)
/* skip our return address and return to parent */
--
2.40.1


2023-06-15 04:22:36

by Menglong Dong

Subject: Re: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

On Tue, Jun 13, 2023 at 10:53 AM <[email protected]> wrote:
>
> From: Menglong Dong <[email protected]>
>
> For now, the BPF program of type BPF_PROG_TYPE_TRACING can only be used
> on the kernel functions whose arguments count less than 6. This is not
> friendly at all, as too many functions have arguments count more than 6.
>
> According to the current kernel version, below is a statistics of the
> function arguments count:
>
> argument count | function count
> 7 | 704
> 8 | 270
> 9 | 84
> 10 | 47
> 11 | 47
> 12 | 27
> 13 | 22
> 14 | 5
> 15 | 0
> 16 | 1
>
> Therefore, let's enhance it by increasing the function arguments count
> allowed in arch_prepare_bpf_trampoline(), for now, only x86_64.
>
> For the case that we don't need to call origin function, which means
> without BPF_TRAMP_F_CALL_ORIG, we need only copy the function arguments
> that stored in the frame of the caller to current frame. The arguments
> of arg6-argN are stored in "$rbp + 0x18", we need copy them to
> "$rbp - regs_off + (6 * 8)".
>
> For the case with BPF_TRAMP_F_CALL_ORIG, we need prepare the arguments
> in stack before call origin function, which means we need alloc extra
> "8 * (arg_count - 6)" memory in the top of the stack. Note, there should
> not be any data be pushed to the stack before call the origin function.
> Then, we have to store rbx with 'mov' instead of 'push'.
>
> We use EMIT3_off32() or EMIT4() for "lea" and "sub". The range of the
> imm in "lea" and "sub" is [-128, 127] if EMIT4() is used. Therefore,
> we use EMIT3_off32() instead if the imm out of the range.
>
> It works well for the FENTRY/FEXIT/MODIFY_RETURN.
>
> Signed-off-by: Menglong Dong <[email protected]>
> ---
> v5:
> - consider the case of the struct in arguments can't be hold by regs
> v4:
> - make the stack 16-byte aligned if passing args on-stack is needed
> - add the function arguments statistics to the commit log
> v3:
> - use EMIT3_off32() for "lea" and "sub" only on necessary
> - make 12 as the maximum arguments count
> v2:
> - instead EMIT4 with EMIT3_off32 for "lea" to prevent overflow
> - make MAX_BPF_FUNC_ARGS as the maximum argument count
> ---
> arch/x86/net/bpf_jit_comp.c | 221 +++++++++++++++++++++++++++++++-----
> 1 file changed, 195 insertions(+), 26 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index a407fbbffecd..47c699594dd8 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1857,37 +1857,165 @@ st: if (is_imm8(insn->off))
> return proglen;
> }
>
> -static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
> - int stack_size)
> +static inline void clean_stack_garbage(const struct btf_func_model *m,
> + u8 **pprog, int nr_args_on_stack,
> + int stack_size)
> {
> - int i;
> + int arg_size, off;
> + u8 *prog;
> +
> + if (nr_args_on_stack != 1)
> + return;
> +
> + /* the size of the last argument */
> + arg_size = m->arg_size[m->nr_args - 1];
> +
> + /* Generally speaking, the compiler will pass the arguments
> + * on-stack with "push" instruction, which will take 8-byte
> + * on the stack. On this case, there won't be garbage values
> + * while we copy the arguments from origin stack frame to current
> + * in BPF_DW.
> + *
> + * However, sometimes the compiler will only allocate 4-byte on
> + * the stack for the arguments. For now, this case will only
> + * happen if there is only one argument on-stack and its size
> + * not more than 4 byte. On this case, there will be garbage
> + * values on the upper 4-byte where we store the argument on
> + * current stack frame.
> + *
> + * arguments on origin stack:
> + *
> + * stack_arg_1(4-byte) xxx(4-byte)
> + *
> + * what we copy:
> + *
> + * stack_arg_1(8-byte): stack_arg_1(origin) xxx
> + *
> + * and the xxx is the garbage values which we should clean here.
> + */
> + if (arg_size <= 4) {
> + off = -(stack_size - 4);
> + prog = *pprog;
> + /* mov DWORD PTR [rbp + off], 0 */
> + if (!is_imm8(off))
> + EMIT2_off32(0xC7, 0x85, off);
> + else
> + EMIT3(0xC7, 0x45, off);
> + EMIT(0, 4);
> + *pprog = prog;
> + }
> +}
> +
> +static void save_args(const struct btf_func_model *m, u8 **prog,
> + int stack_size, bool on_stack)
> +{
> + int arg_regs, first_off, nr_regs = 0, nr_stack = 0;
> + int i, j;
>
> /* Store function arguments to stack.
> * For a function that accepts two pointers the sequence will be:
> * mov QWORD PTR [rbp-0x10],rdi
> * mov QWORD PTR [rbp-0x8],rsi
> */
> - for (i = 0; i < min(nr_regs, 6); i++)
> - emit_stx(prog, BPF_DW, BPF_REG_FP,
> - i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> - -(stack_size - i * 8));
> + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) {
> + arg_regs = (m->arg_size[i] + 7) / 8;
> +
> + /* According to the research of Yonghong, struct members
> + * should be all in register or all on the stack.
> + * Meanwhile, the compiler will pass the argument on regs
> + * if the remained regs can hold the argument.
> + *
> + * Disorder of the args can happen. For example:
> + *
> + * struct foo_struct {
> + * long a;
> + * int b;
> + * };
> + * int foo(char, char, char, char, char, struct foo_struct,
> + * char);
> + *
> + * the arg1-5,arg7 will be passed by regs, and arg6 will
> + * by stack.
> + *
> + * Therefore, we should keep the same logic as here when
> + * we restore the regs in restore_regs.
> + */
> + if (nr_regs + arg_regs > 6) {
> + /* copy function arguments from origin stack frame
> + * into current stack frame.
> + *
> + * The starting address of the arguments on-stack
> + * is:
> + * rbp + 8(push rbp) +
> + * 8(return addr of origin call) +
> + * 8(return addr of the caller)
> + * which means: rbp + 24
> + */
> + for (j = 0; j < arg_regs; j++) {
> + emit_ldx(prog, BPF_DW, BPF_REG_0, BPF_REG_FP,
> + nr_stack * 8 + 0x18);
> + emit_stx(prog, BPF_DW, BPF_REG_FP, BPF_REG_0,
> + -stack_size);
> +
> + if (!nr_stack)
> + first_off = stack_size;
> + stack_size -= 8;
> + nr_stack++;
> + }
> + } else {
> + /* Only copy the arguments on-stack to current
> + * 'stack_size' and ignore the regs, used to
> + * prepare the arguments on-stack for orign call.
> + */
> + if (on_stack) {
> + nr_regs += arg_regs;
> + continue;
> + }
> +
> + /* copy the arguments from regs into stack */
> + for (j = 0; j < arg_regs; j++) {
> + emit_stx(prog, BPF_DW, BPF_REG_FP,
> + nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + i,

Oops, this should be:

+ nr_regs == 5 ? X86_REG_R9 :
BPF_REG_1 + nr_regs,

which causes the tracing_struct testcase to fail.

I'll fix it in the next version.

> + -stack_size);
> + stack_size -= 8;
> + nr_regs++;
> + }
> + }
> + }
> +
> + clean_stack_garbage(m, prog, nr_stack, first_off);
> }
>
> -static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
> +static void restore_regs(const struct btf_func_model *m, u8 **prog,
> int stack_size)
> {
> - int i;
> + int i, j, arg_regs, nr_regs = 0;
>
> /* Restore function arguments from stack.
> * For a function that accepts two pointers the sequence will be:
> * EMIT4(0x48, 0x8B, 0x7D, 0xF0); mov rdi,QWORD PTR [rbp-0x10]
> * EMIT4(0x48, 0x8B, 0x75, 0xF8); mov rsi,QWORD PTR [rbp-0x8]
> + *
> + * The logic here is similar to what we do in save_args()
> */
> - for (i = 0; i < min(nr_regs, 6); i++)
> - emit_ldx(prog, BPF_DW,
> - i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> - BPF_REG_FP,
> - -(stack_size - i * 8));
> + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) {
> + arg_regs = (m->arg_size[i] + 7) / 8;
> + if (nr_regs + arg_regs <= 6) {
> + for (j = 0; j < arg_regs; j++) {
> + emit_ldx(prog, BPF_DW,
> + nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + i,

Here too.

> + BPF_REG_FP,
> + -stack_size);
> + stack_size -= 8;
> + nr_regs++;
> + }
> + } else {
> + stack_size -= 8 * arg_regs;
> + }
> +
> + if (nr_regs >= 6)
> + break;
> + }
> }
>
> static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
> @@ -1915,7 +2043,10 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
> /* arg1: mov rdi, progs[i] */
> emit_mov_imm64(&prog, BPF_REG_1, (long) p >> 32, (u32) (long) p);
> /* arg2: lea rsi, [rbp - ctx_cookie_off] */
> - EMIT4(0x48, 0x8D, 0x75, -run_ctx_off);
> + if (!is_imm8(-run_ctx_off))
> + EMIT3_off32(0x48, 0x8D, 0xB5, -run_ctx_off);
> + else
> + EMIT4(0x48, 0x8D, 0x75, -run_ctx_off);
>
> if (emit_rsb_call(&prog, bpf_trampoline_enter(p), prog))
> return -EINVAL;
> @@ -1931,7 +2062,10 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
> emit_nops(&prog, 2);
>
> /* arg1: lea rdi, [rbp - stack_size] */
> - EMIT4(0x48, 0x8D, 0x7D, -stack_size);
> + if (!is_imm8(-stack_size))
> + EMIT3_off32(0x48, 0x8D, 0xBD, -stack_size);
> + else
> + EMIT4(0x48, 0x8D, 0x7D, -stack_size);
> /* arg2: progs[i]->insnsi for interpreter */
> if (!p->jited)
> emit_mov_imm64(&prog, BPF_REG_2,
> @@ -1961,7 +2095,10 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
> /* arg2: mov rsi, rbx <- start time in nsec */
> emit_mov_reg(&prog, true, BPF_REG_2, BPF_REG_6);
> /* arg3: lea rdx, [rbp - run_ctx_off] */
> - EMIT4(0x48, 0x8D, 0x55, -run_ctx_off);
> + if (!is_imm8(-run_ctx_off))
> + EMIT3_off32(0x48, 0x8D, 0x95, -run_ctx_off);
> + else
> + EMIT4(0x48, 0x8D, 0x55, -run_ctx_off);
> if (emit_rsb_call(&prog, bpf_trampoline_exit(p), prog))
> return -EINVAL;
>
> @@ -2113,7 +2250,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> void *func_addr)
> {
> int i, ret, nr_regs = m->nr_args, stack_size = 0;
> - int regs_off, nregs_off, ip_off, run_ctx_off;
> + int regs_off, nregs_off, ip_off, run_ctx_off, arg_stack_off, rbx_off;
> struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
> struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
> struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
> @@ -2127,8 +2264,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> if (m->arg_flags[i] & BTF_FMODEL_STRUCT_ARG)
> nr_regs += (m->arg_size[i] + 7) / 8 - 1;
>
> - /* x86-64 supports up to 6 arguments. 7+ can be added in the future */
> - if (nr_regs > 6)
> + /* x86-64 supports up to MAX_BPF_FUNC_ARGS arguments. 1-6
> + * are passed through regs, the remains are through stack.
> + */
> + if (nr_regs > MAX_BPF_FUNC_ARGS)
> return -ENOTSUPP;
>
> /* Generated trampoline stack layout:
> @@ -2147,7 +2286,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> *
> * RBP - ip_off [ traced function ] BPF_TRAMP_F_IP_ARG flag
> *
> + * RBP - rbx_off [ rbx value ] always
> + *
> * RBP - run_ctx_off [ bpf_tramp_run_ctx ]
> + *
> + * [ stack_argN ] BPF_TRAMP_F_CALL_ORIG
> + * [ ... ]
> + * [ stack_arg2 ]
> + * RBP - arg_stack_off [ stack_arg1 ]
> */
>
> /* room for return value of orig_call or fentry prog */
> @@ -2167,9 +2313,25 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>
> ip_off = stack_size;
>
> + stack_size += 8;
> + rbx_off = stack_size;
> +
> stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7;
> run_ctx_off = stack_size;
>
> + if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG)) {
> + stack_size += (nr_regs - 6) * 8;
> + /* make sure the stack pointer is 16-byte aligned if we
> + * need pass arguments on stack, which means
> + * [stack_size + 8(rbp) + 8(rip) + 8(origin rip)]
> + * should be 16-byte aligned. Following code depend on
> + * that stack_size is already 8-byte aligned.
> + */
> + stack_size += (stack_size % 16) ? 0 : 8;
> + }
> +
> + arg_stack_off = stack_size;
> +
> if (flags & BPF_TRAMP_F_SKIP_FRAME) {
> /* skip patched call instruction and point orig_call to actual
> * body of the kernel function.
> @@ -2189,8 +2351,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> x86_call_depth_emit_accounting(&prog, NULL);
> EMIT1(0x55); /* push rbp */
> EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
> - EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
> - EMIT1(0x53); /* push rbx */
> + if (!is_imm8(stack_size))
> + /* sub rsp, stack_size */
> + EMIT3_off32(0x48, 0x81, 0xEC, stack_size);
> + else
> + /* sub rsp, stack_size */
> + EMIT4(0x48, 0x83, 0xEC, stack_size);
> + /* mov QWORD PTR [rbp - rbx_off], rbx */
> + emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
>
> /* Store number of argument registers of the traced function:
> * mov rax, nr_regs
> @@ -2208,7 +2376,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off);
> }
>
> - save_regs(m, &prog, nr_regs, regs_off);
> + save_args(m, &prog, regs_off, false);
>
> if (flags & BPF_TRAMP_F_CALL_ORIG) {
> /* arg1: mov rdi, im */
> @@ -2238,7 +2406,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> }
>
> if (flags & BPF_TRAMP_F_CALL_ORIG) {
> - restore_regs(m, &prog, nr_regs, regs_off);
> + restore_regs(m, &prog, regs_off);
> + save_args(m, &prog, arg_stack_off, true);
>
> if (flags & BPF_TRAMP_F_ORIG_STACK) {
> emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
> @@ -2279,7 +2448,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> }
>
> if (flags & BPF_TRAMP_F_RESTORE_REGS)
> - restore_regs(m, &prog, nr_regs, regs_off);
> + restore_regs(m, &prog, regs_off);
>
> /* This needs to be done regardless. If there were fmod_ret programs,
> * the return value is only updated on the stack and still needs to be
> @@ -2298,7 +2467,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> if (save_ret)
> emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
>
> - EMIT1(0x5B); /* pop rbx */
> + emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, -rbx_off);
> EMIT1(0xC9); /* leave */
> if (flags & BPF_TRAMP_F_SKIP_FRAME)
> /* skip our return address and return to parent */
> --
> 2.40.1
>

2023-06-18 23:16:07

by Yonghong Song

Subject: Re: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING



On 6/12/23 7:52 PM, [email protected] wrote:
> From: Menglong Dong <[email protected]>
>
> For now, the BPF program of type BPF_PROG_TYPE_TRACING can only be used
> on the kernel functions whose arguments count less than 6. This is not
> friendly at all, as too many functions have arguments count more than 6.
>
> According to the current kernel version, below is a statistics of the
> function arguments count:
>
> argument count | function count
> 7 | 704
> 8 | 270
> 9 | 84
> 10 | 47
> 11 | 47
> 12 | 27
> 13 | 22
> 14 | 5
> 15 | 0
> 16 | 1
>
> Therefore, let's enhance it by increasing the function arguments count
> allowed in arch_prepare_bpf_trampoline(), for now, only x86_64.
>
> For the case that we don't need to call origin function, which means
> without BPF_TRAMP_F_CALL_ORIG, we need only copy the function arguments
> that stored in the frame of the caller to current frame. The arguments
> of arg6-argN are stored in "$rbp + 0x18", we need copy them to
> "$rbp - regs_off + (6 * 8)".
>
> For the case with BPF_TRAMP_F_CALL_ORIG, we need prepare the arguments
> in stack before call origin function, which means we need alloc extra
> "8 * (arg_count - 6)" memory in the top of the stack. Note, there should
> not be any data be pushed to the stack before call the origin function.
> Then, we have to store rbx with 'mov' instead of 'push'.

Please also mention the special case related to 16-byte struct arguments
in the comments of save_args().

>
> We use EMIT3_off32() or EMIT4() for "lea" and "sub". The range of the
> imm in "lea" and "sub" is [-128, 127] if EMIT4() is used. Therefore,
> we use EMIT3_off32() instead if the imm out of the range.
>
> It works well for the FENTRY/FEXIT/MODIFY_RETURN.
>
> Signed-off-by: Menglong Dong <[email protected]>
> ---
> v5:
> - consider the case of the struct in arguments can't be hold by regs
> v4:
> - make the stack 16-byte aligned if passing args on-stack is needed
> - add the function arguments statistics to the commit log
> v3:
> - use EMIT3_off32() for "lea" and "sub" only on necessary
> - make 12 as the maximum arguments count
> v2:
> - instead EMIT4 with EMIT3_off32 for "lea" to prevent overflow
> - make MAX_BPF_FUNC_ARGS as the maximum argument count
> ---
> arch/x86/net/bpf_jit_comp.c | 221 +++++++++++++++++++++++++++++++-----
> 1 file changed, 195 insertions(+), 26 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index a407fbbffecd..47c699594dd8 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1857,37 +1857,165 @@ st: if (is_imm8(insn->off))
> return proglen;
> }
>
> -static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
> - int stack_size)
> +static inline void clean_stack_garbage(const struct btf_func_model *m,
> + u8 **pprog, int nr_args_on_stack,

nr_args_on_stack is actually nr_stack_slots, right? Maybe rename to
nr_stack_slots?

> + int stack_size)
> {
> - int i;
> + int arg_size, off;
> + u8 *prog;
> +
> + if (nr_args_on_stack != 1)
> + return;
> +
> + /* the size of the last argument */
> + arg_size = m->arg_size[m->nr_args - 1];
> +
> + /* Generally speaking, the compiler will pass the arguments
> + * on-stack with "push" instruction, which will take 8-byte
> + * on the stack. On this case, there won't be garbage values

On this case -> In this case. The same for below another case.

> + * while we copy the arguments from origin stack frame to current
> + * in BPF_DW.
> + *
> + * However, sometimes the compiler will only allocate 4-byte on
> + * the stack for the arguments. For now, this case will only
> + * happen if there is only one argument on-stack and its size
> + * not more than 4 byte. On this case, there will be garbage
> + * values on the upper 4-byte where we store the argument on
> + * current stack frame.
> + *
> + * arguments on origin stack:
> + *
> + * stack_arg_1(4-byte) xxx(4-byte)
> + *
> + * what we copy:
> + *
> + * stack_arg_1(8-byte): stack_arg_1(origin) xxx
> + *
> + * and the xxx is the garbage values which we should clean here.
> + */

let us put the above comments before
> + if (nr_args_on_stack != 1)
> + return;


> + if (arg_size <= 4) {
> + off = -(stack_size - 4);
> + prog = *pprog;
> + /* mov DWORD PTR [rbp + off], 0 */
> + if (!is_imm8(off))
> + EMIT2_off32(0xC7, 0x85, off);
> + else
> + EMIT3(0xC7, 0x45, off);
> + EMIT(0, 4);
> + *pprog = prog;
> + }
> +}
> +
> +static void save_args(const struct btf_func_model *m, u8 **prog,
> + int stack_size, bool on_stack)

Rename 'on_stack' to 'for_call_origin'? This should be more
clear about the use case.

> +{
> + int arg_regs, first_off, nr_regs = 0, nr_stack = 0;
nr_stack -> nr_stack_slots?
> + int i, j;
>
> /* Store function arguments to stack.
> * For a function that accepts two pointers the sequence will be:
> * mov QWORD PTR [rbp-0x10],rdi
> * mov QWORD PTR [rbp-0x8],rsi
> */
> - for (i = 0; i < min(nr_regs, 6); i++)
> - emit_stx(prog, BPF_DW, BPF_REG_FP,
> - i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> - -(stack_size - i * 8));
> + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) {
> + arg_regs = (m->arg_size[i] + 7) / 8;
> +
> + /* According to the research of Yonghong, struct members
> + * should be all in register or all on the stack.
> + * Meanwhile, the compiler will pass the argument on regs
> + * if the remained regs can hold the argument.
remained -> remaining
> + *
> + * Disorder of the args can happen. For example:
> + *
> + * struct foo_struct {
> + * long a;
> + * int b;
> + * };
> + * int foo(char, char, char, char, char, struct foo_struct,
> + * char);
> + *
> + * the arg1-5,arg7 will be passed by regs, and arg6 will
> + * by stack.
> + *
> + * Therefore, we should keep the same logic as here when
> + * we restore the regs in restore_regs.
> + */
> + if (nr_regs + arg_regs > 6) {
> + /* copy function arguments from origin stack frame
> + * into current stack frame.
> + *
> + * The starting address of the arguments on-stack
> + * is:
> + * rbp + 8(push rbp) +
> + * 8(return addr of origin call) +
> + * 8(return addr of the caller)
> + * which means: rbp + 24
> + */
> + for (j = 0; j < arg_regs; j++) {
> + emit_ldx(prog, BPF_DW, BPF_REG_0, BPF_REG_FP,
> + nr_stack * 8 + 0x18);
> + emit_stx(prog, BPF_DW, BPF_REG_FP, BPF_REG_0,
> + -stack_size);
> +
> + if (!nr_stack)
> + first_off = stack_size;
> + stack_size -= 8;
> + nr_stack++;
> + }
> + } else {
> + /* Only copy the arguments on-stack to current
> + * 'stack_size' and ignore the regs, used to
> + * prepare the arguments on-stack for orign call.
> + */
> + if (on_stack) {
> + nr_regs += arg_regs;
> + continue;
> + }
> +
> + /* copy the arguments from regs into stack */
> + for (j = 0; j < arg_regs; j++) {
> + emit_stx(prog, BPF_DW, BPF_REG_FP,
> + nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> + -stack_size);
> + stack_size -= 8;
> + nr_regs++;
> + }
> + }
> + }
> +
> + clean_stack_garbage(m, prog, nr_stack, first_off);
> }
>
[...]
> /* Generated trampoline stack layout:
> @@ -2147,7 +2286,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> *
> * RBP - ip_off [ traced function ] BPF_TRAMP_F_IP_ARG flag
> *
> + * RBP - rbx_off [ rbx value ] always
> + *
> * RBP - run_ctx_off [ bpf_tramp_run_ctx ]
> + *
> + * [ stack_argN ] BPF_TRAMP_F_CALL_ORIG
> + * [ ... ]
> + * [ stack_arg2 ]
> + * RBP - arg_stack_off [ stack_arg1 ]
> */
>
> /* room for return value of orig_call or fentry prog */
> @@ -2167,9 +2313,25 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>
> ip_off = stack_size;
>
> + stack_size += 8;
> + rbx_off = stack_size;
> +
> stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7;
> run_ctx_off = stack_size;
>
> + if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG)) {
> + stack_size += (nr_regs - 6) * 8;

Please double check. Is this okay for the case below?
foo(int, int, int, int, int, 16_byte_struct)
here, nr_regs is 7, yes, but the stack size to be added should be 2 slots (16 bytes).


> + /* make sure the stack pointer is 16-byte aligned if we
> + * need pass arguments on stack, which means
> + * [stack_size + 8(rbp) + 8(rip) + 8(origin rip)]
> + * should be 16-byte aligned. Following code depend on
> + * that stack_size is already 8-byte aligned.
> + */
> + stack_size += (stack_size % 16) ? 0 : 8;
> + }
> +
> + arg_stack_off = stack_size;
> +
[...]

2023-06-19 02:44:34

by Menglong Dong

Subject: Re: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

On Mon, Jun 19, 2023 at 7:11 AM Yonghong Song <[email protected]> wrote:
>
>
>
> On 6/12/23 7:52 PM, [email protected] wrote:
> > From: Menglong Dong <[email protected]>
> >
> > For now, the BPF program of type BPF_PROG_TYPE_TRACING can only be used
> > on the kernel functions whose arguments count less than 6. This is not
> > friendly at all, as too many functions have arguments count more than 6.
> >
> > According to the current kernel version, below is a statistics of the
> > function arguments count:
> >
> > argument count | function count
> > 7 | 704
> > 8 | 270
> > 9 | 84
> > 10 | 47
> > 11 | 47
> > 12 | 27
> > 13 | 22
> > 14 | 5
> > 15 | 0
> > 16 | 1
> >
> > Therefore, let's enhance it by increasing the function arguments count
> > allowed in arch_prepare_bpf_trampoline(), for now, only x86_64.
> >
> > For the case that we don't need to call origin function, which means
> > without BPF_TRAMP_F_CALL_ORIG, we need only copy the function arguments
> > that stored in the frame of the caller to current frame. The arguments
> > of arg6-argN are stored in "$rbp + 0x18", we need copy them to
> > "$rbp - regs_off + (6 * 8)".
> >
> > For the case with BPF_TRAMP_F_CALL_ORIG, we need prepare the arguments
> > in stack before call origin function, which means we need alloc extra
> > "8 * (arg_count - 6)" memory in the top of the stack. Note, there should
> > not be any data be pushed to the stack before call the origin function.
> > Then, we have to store rbx with 'mov' instead of 'push'.
>
> Please also mention special case related to 16-byte struct argument
> in the comments of save_args().
>
> >
> > We use EMIT3_off32() or EMIT4() for "lea" and "sub". The range of the
> > imm in "lea" and "sub" is [-128, 127] if EMIT4() is used. Therefore,
> > we use EMIT3_off32() instead if the imm out of the range.
> >
> > It works well for the FENTRY/FEXIT/MODIFY_RETURN.
> >
> > Signed-off-by: Menglong Dong <[email protected]>
> > ---
> > v5:
> > - consider the case of the struct in arguments can't be hold by regs
> > v4:
> > - make the stack 16-byte aligned if passing args on-stack is needed
> > - add the function arguments statistics to the commit log
> > v3:
> > - use EMIT3_off32() for "lea" and "sub" only on necessary
> > - make 12 as the maximum arguments count
> > v2:
> > - instead EMIT4 with EMIT3_off32 for "lea" to prevent overflow
> > - make MAX_BPF_FUNC_ARGS as the maximum argument count
> > ---
> > arch/x86/net/bpf_jit_comp.c | 221 +++++++++++++++++++++++++++++++-----
> > 1 file changed, 195 insertions(+), 26 deletions(-)
> >
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index a407fbbffecd..47c699594dd8 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -1857,37 +1857,165 @@ st: if (is_imm8(insn->off))
> > return proglen;
> > }
> >
> > -static void save_regs(const struct btf_func_model *m, u8 **prog, int nr_regs,
> > - int stack_size)
> > +static inline void clean_stack_garbage(const struct btf_func_model *m,
> > + u8 **pprog, int nr_args_on_stack,
>
> nr_args_on_stack is actually nr_stack_slots, right? Maybe rename to
> nr_stack_slots?
>
> > + int stack_size)
> > {
> > - int i;
> > + int arg_size, off;
> > + u8 *prog;
> > +
> > + if (nr_args_on_stack != 1)
> > + return;
> > +
> > + /* the size of the last argument */
> > + arg_size = m->arg_size[m->nr_args - 1];
> > +
> > + /* Generally speaking, the compiler will pass the arguments
> > + * on-stack with "push" instruction, which will take 8-byte
> > + * on the stack. On this case, there won't be garbage values
>
> On this case -> In this case. The same for below another case.
>
> > + * while we copy the arguments from origin stack frame to current
> > + * in BPF_DW.
> > + *
> > + * However, sometimes the compiler will only allocate 4-byte on
> > + * the stack for the arguments. For now, this case will only
> > + * happen if there is only one argument on-stack and its size
> > + * not more than 4 byte. On this case, there will be garbage
> > + * values on the upper 4-byte where we store the argument on
> > + * current stack frame.
> > + *
> > + * arguments on origin stack:
> > + *
> > + * stack_arg_1(4-byte) xxx(4-byte)
> > + *
> > + * what we copy:
> > + *
> > + * stack_arg_1(8-byte): stack_arg_1(origin) xxx
> > + *
> > + * and the xxx is the garbage values which we should clean here.
> > + */
>
> let us put the above comments before
> > + if (nr_args_on_stack != 1)
> > + return;
>
>
> > + if (arg_size <= 4) {
> > + off = -(stack_size - 4);
> > + prog = *pprog;
> > + /* mov DWORD PTR [rbp + off], 0 */
> > + if (!is_imm8(off))
> > + EMIT2_off32(0xC7, 0x85, off);
> > + else
> > + EMIT3(0xC7, 0x45, off);
> > + EMIT(0, 4);
> > + *pprog = prog;
> > + }
> > +}
> > +
> > +static void save_args(const struct btf_func_model *m, u8 **prog,
> > + int stack_size, bool on_stack)
>
> Rename 'on_stack' to 'for_call_origin'? This should be more
> clear about the use case.
>
> > +{
> > + int arg_regs, first_off, nr_regs = 0, nr_stack = 0;
> nr_stack -> nr_stack_slots?
> > + int i, j;
> >
> > /* Store function arguments to stack.
> > * For a function that accepts two pointers the sequence will be:
> > * mov QWORD PTR [rbp-0x10],rdi
> > * mov QWORD PTR [rbp-0x8],rsi
> > */
> > - for (i = 0; i < min(nr_regs, 6); i++)
> > - emit_stx(prog, BPF_DW, BPF_REG_FP,
> > - i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> > - -(stack_size - i * 8));
> > + for (i = 0; i < min_t(int, m->nr_args, MAX_BPF_FUNC_ARGS); i++) {
> > + arg_regs = (m->arg_size[i] + 7) / 8;
> > +
> > + /* According to the research of Yonghong, struct members
> > + * should be all in register or all on the stack.
> > + * Meanwhile, the compiler will pass the argument on regs
> > + * if the remained regs can hold the argument.
> remained -> remaining
> > + *
> > + * Disorder of the args can happen. For example:
> > + *
> > + * struct foo_struct {
> > + * long a;
> > + * int b;
> > + * };
> > + * int foo(char, char, char, char, char, struct foo_struct,
> > + * char);
> > + *
> > + * the arg1-5,arg7 will be passed by regs, and arg6 will
> > + * by stack.
> > + *
> > + * Therefore, we should keep the same logic as here when
> > + * we restore the regs in restore_regs.
> > + */
> > + if (nr_regs + arg_regs > 6) {
> > + /* copy function arguments from origin stack frame
> > + * into current stack frame.
> > + *
> > + * The starting address of the arguments on-stack
> > + * is:
> > + * rbp + 8(push rbp) +
> > + * 8(return addr of origin call) +
> > + * 8(return addr of the caller)
> > + * which means: rbp + 24
> > + */
> > + for (j = 0; j < arg_regs; j++) {
> > + emit_ldx(prog, BPF_DW, BPF_REG_0, BPF_REG_FP,
> > + nr_stack * 8 + 0x18);
> > + emit_stx(prog, BPF_DW, BPF_REG_FP, BPF_REG_0,
> > + -stack_size);
> > +
> > + if (!nr_stack)
> > + first_off = stack_size;
> > + stack_size -= 8;
> > + nr_stack++;
> > + }
> > + } else {
> > + /* Only copy the arguments on-stack to current
> > + * 'stack_size' and ignore the regs, used to
> > + * prepare the arguments on-stack for orign call.
> > + */
> > + if (on_stack) {
> > + nr_regs += arg_regs;
> > + continue;
> > + }
> > +
> > + /* copy the arguments from regs into stack */
> > + for (j = 0; j < arg_regs; j++) {
> > + emit_stx(prog, BPF_DW, BPF_REG_FP,
> > + nr_regs == 5 ? X86_REG_R9 : BPF_REG_1 + i,
> > + -stack_size);
> > + stack_size -= 8;
> > + nr_regs++;
> > + }
> > + }
> > + }
> > +
> > + clean_stack_garbage(m, prog, nr_stack, first_off);
> > }
> >
> [...]
> > /* Generated trampoline stack layout:
> > @@ -2147,7 +2286,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> > *
> > * RBP - ip_off [ traced function ] BPF_TRAMP_F_IP_ARG flag
> > *
> > + * RBP - rbx_off [ rbx value ] always
> > + *
> > * RBP - run_ctx_off [ bpf_tramp_run_ctx ]
> > + *
> > + * [ stack_argN ] BPF_TRAMP_F_CALL_ORIG
> > + * [ ... ]
> > + * [ stack_arg2 ]
> > + * RBP - arg_stack_off [ stack_arg1 ]
> > */
> >
> > /* room for return value of orig_call or fentry prog */
> > @@ -2167,9 +2313,25 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >
> > ip_off = stack_size;
> >
> > + stack_size += 8;
> > + rbx_off = stack_size;
> > +
> > stack_size += (sizeof(struct bpf_tramp_run_ctx) + 7) & ~0x7;
> > run_ctx_off = stack_size;
> >
> > + if (nr_regs > 6 && (flags & BPF_TRAMP_F_CALL_ORIG)) {
> > + stack_size += (nr_regs - 6) * 8;
>
> Please double check. Is this okay for the case below?
> foo(int, int, int, int, int, 16_byte_struct)
> here, nr_regs is 7, yes, but the stack size to be added should be 2 slots (16 bytes).
>

You are right, it should be:
stack_size += (nr_regs - nr_arg_on_regs) * 8

The test case "bpf_testmod_fentry_test_struct1" shouldn't have
passed; I'll figure out why it did.

Thank you for the comment above, and I'll change them in
the next version too.

>
> > + /* make sure the stack pointer is 16-byte aligned if we
> > + * need pass arguments on stack, which means
> > + * [stack_size + 8(rbp) + 8(rip) + 8(origin rip)]
> > + * should be 16-byte aligned. Following code depend on
> > + * that stack_size is already 8-byte aligned.
> > + */
> > + stack_size += (stack_size % 16) ? 0 : 8;
> > + }
> > +
> > + arg_stack_off = stack_size;
> > +
> [...]

2023-06-22 09:54:55

by David Laight

Subject: RE: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

...
> > + /* Generally speaking, the compiler will pass the arguments
> > + * on-stack with "push" instruction, which will take 8-byte
> > + * on the stack. On this case, there won't be garbage values
>
> On this case -> In this case. The same for below another case.
>
> > + * while we copy the arguments from origin stack frame to current
> > + * in BPF_DW.
> > + *
> > + * However, sometimes the compiler will only allocate 4-byte on
> > + * the stack for the arguments. For now, this case will only
> > + * happen if there is only one argument on-stack and its size
> > + * not more than 4 byte. On this case, there will be garbage
> > + * values on the upper 4-byte where we store the argument on
> > + * current stack frame.

Is that right for x86-64?

IIRC arguments always take (at least) 64bits.
For any 32bit argument (register or stack) the high bits are undefined.
(Maybe in kernel they are always zero?
From 32bit userspace they are definitely random.)

I think the called code is also responsible for masking 8- and 16-bit
values (in reality char/short args and return values just add code
bloat).

A 128bit value is either passed in two registers or two stack
slots. If the last register is skipped it will be used for the
next argument.

David


2023-06-22 13:35:25

by Menglong Dong

Subject: Re: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

On Thu, Jun 22, 2023 at 5:06 PM David Laight <[email protected]> wrote:
>
> ...
> > > + /* Generally speaking, the compiler will pass the arguments
> > > + * on-stack with "push" instruction, which will take 8-byte
> > > + * on the stack. On this case, there won't be garbage values
> >
> > On this case -> In this case. The same for below another case.
> >
> > > + * while we copy the arguments from origin stack frame to current
> > > + * in BPF_DW.
> > > + *
> > > + * However, sometimes the compiler will only allocate 4-byte on
> > > + * the stack for the arguments. For now, this case will only
> > > + * happen if there is only one argument on-stack and its size
> > > + * not more than 4 byte. On this case, there will be garbage
> > > + * values on the upper 4-byte where we store the argument on
> > > + * current stack frame.
>
> Is that right for 86-64?
>
> IIRC arguments always take (at least) 64bits.
> For any 32bit argument (register or stack) the high bits are undefined.
> (Maybe in kernel they are always zero?
> From 32bit userspace they are definitely random.)
>

Hello,

According to my testing, the compiler always
passes the arguments in 8-byte slots with the "push"
insn if more than one argument needs to be passed
on the stack and the size of each argument
doesn't exceed 8 bytes. In this case, there won't be
garbage. For example, the high 4 bytes will be made 0
if the size of the argument is 4 bytes, as the "push" insn
copies the argument from a register or immediate onto
the stack in 8 bytes.

If there is only one on-stack argument and its size
doesn't exceed 4 bytes, some compilers, like clang, may
not use the "push" insn. Instead, they allocate 4 bytes on the
stack and copy the argument from a register or immediate onto
the stack in 4 bytes. This is the case we deal with here.

I'm not sure if I understand you correctly. Do you mean
that there will be garbage values for 32bit args?

> I think the called code is also responsible form masking 8 and 16bit
> values (in reality char/short args and return values just add code
> bloat).
>
> A 128bit value is either passed in two registers or two stack
> slots. If the last register is skipped it will be used for the
> next argument.
>

Yeah, this point is considered in save_args(). Once
this happens, the count of stack slots is more
than 1, and the on-stack arguments will be stored with
the "push" insn in 8-byte slots. Therefore, there shouldn't be
garbage values in this case?

Do I miss something?

Thanks!
Menglong Dong

> David
>

2023-06-22 14:30:59

by David Laight

[permalink] [raw]
Subject: RE: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

...
> > Is that right for 86-64?
> >
> > IIRC arguments always take (at least) 64bits.
> > For any 32bit argument (register or stack) the high bits are undefined.
> > (Maybe in kernel they are always zero?
> > From 32bit userspace they are definitely random.)
> >
>
> Hello,
>
> According to my testing, the compiler will always
> pass the arguments on 8-byte size with "push" insn
> if the count of the arguments that need to be passed
> on stack more than 1 and the size of the argument
> doesn't exceed 8-byte. In this case, there won't be
> garbage. For example, the high 4-byte will be made 0
> if the size of the argument is 4-byte, as the "push" insn
> will copy the argument from regs or imm into stack
> in 8-byte.

You have to know whether a value is expected to be 4 or 8
bytes - a negative 32bit value is zero extended so can't
be treated as a 64bit value.

That is even true for values passed in registers.

There is also a common problem with values passed in registers
to system calls by 32bit code (maybe bpf is tracing these).
In this case the high 32 bits of the register are random.
They don't get zeroed in 32bit mode.

> If the count of the arguments on-stack is 1 and its size
> doesn't exceed 4-byte, some compiler, like clang, may
> not use the "push" insn. Instead, it allocates 4 bytes in the
> stack, and copies the arguments from regs or imm into
> stack in 4-byte. This is the case we deal with here.

If the compiler sometimes writes a 4 byte (or smaller) value
to pre-allocated stack then it is always allowed to do that.
So the high bytes of the stack slot that contains a 32bit
argument might always be junk.
The count of on-stack arguments isn't relevant.

> I'm not sure if I understand you correctly. Do you mean
> that there will be garbage values for 32bit args?

I'm pretty sure that the function call ABI doesn't require the
caller set the high bits of sub-64bit arguments.
The fact that they are often written with a push instruction
that zeros the high bytes isn't really relevant.

> > I think the called code is also responsible form masking 8 and 16bit
> > values (in reality char/short args and return values just add code
> > bloat).
> >
> > A 128bit value is either passed in two registers or two stack
> > slots. If the last register is skipped it will be used for the
> > next argument.
> >
>
> Yeah, this point is considered in save_args(). Once
> this happen, the count of stack slots should more
> then 1, and the arguments on-stack will be stored with
> "push" insn in 8-byte. Therefore, there shouldn't be garbage
> values in this case?
>
> Do I miss something?

The register/stack for these two calls is the same:
foo(1, 2, 3, 4, 5, 6, (int128_t)7);
bar(1, 2, 3, 4, 5, (int128_t)7, 6);

David


2023-06-22 16:54:30

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING



On 6/22/23 2:06 AM, David Laight wrote:
> ...
>>> + /* Generally speaking, the compiler will pass the arguments
>>> + * on-stack with "push" instruction, which will take 8-byte
>>> + * on the stack. On this case, there won't be garbage values
>>
>> On this case -> In this case. The same for below another case.
>>
>>> + * while we copy the arguments from origin stack frame to current
>>> + * in BPF_DW.
>>> + *
>>> + * However, sometimes the compiler will only allocate 4-byte on
>>> + * the stack for the arguments. For now, this case will only
>>> + * happen if there is only one argument on-stack and its size
>>> + * not more than 4 byte. On this case, there will be garbage
>>> + * values on the upper 4-byte where we store the argument on
>>> + * current stack frame.
>
> Is that right for 86-64?

yes,

>
> IIRC arguments always take (at least) 64bits.
> For any 32bit argument (register or stack) the high bits are undefined.
> (Maybe in kernel they are always zero?
> From 32bit userspace they are definitely random.)
>
> I think the called code is also responsible form masking 8 and 16bit
> values (in reality char/short args and return values just add code
> bloat).

yes, it does. For example, if an argument has type u8,
x86_64 might only put the u8 value into a 1-byte subregister
and the rest of it is undefined. This is what happens for a bpf
program:
(1). the whole register/stack slot is saved to an 8-byte stack slot.
(2). in the bpf program, the 8-byte stack slot is read
and then cast to u8, so the compiler does the proper
left shift and right shift to get the proper value.

If the argument is u32/s32, the 32-bit subregister 'w*' could
be used without left/right shifting (similar to x86_64 subregister).

So we should be okay here.

>
> A 128bit value is either passed in two registers or two stack
> slots. If the last register is skipped it will be used for the
> next argument.
>
> David
>

2023-06-23 13:20:16

by Menglong Dong

[permalink] [raw]
Subject: Re: [PATCH bpf-next v5 2/3] bpf, x86: allow function arguments up to 12 for TRACING

On Thu, Jun 22, 2023 at 10:19 PM David Laight <[email protected]> wrote:
>
> ...
> > > Is that right for 86-64?
> > >
> > > IIRC arguments always take (at least) 64bits.
> > > For any 32bit argument (register or stack) the high bits are undefined.
> > > (Maybe in kernel they are always zero?
> > > From 32bit userspace they are definitely random.)
> > >
> >
> > Hello,
> >
> > According to my testing, the compiler will always
> > pass the arguments on 8-byte size with "push" insn
> > if the count of the arguments that need to be passed
> > on stack more than 1 and the size of the argument
> > doesn't exceed 8-byte. In this case, there won't be
> > garbage. For example, the high 4-byte will be made 0
> > if the size of the argument is 4-byte, as the "push" insn
> > will copy the argument from regs or imm into stack
> > in 8-byte.
>
> You have to know whether a value is expected to be 4 or 8
> bytes - a negative 32bit value is zero extended so can't
> be treated as a 64bit value.
>
> That is even true for values passed in registers.
>
> There is also a common problem with values passed in registers
> to system calls by 32bit code (maybe bpf is tracing these).
> In this case the high 32 bits of the register are random.
> They don't get zerod in 32bit mode.
>
> > If the count of the arguments on-stack is 1 and its size
> > doesn't exceed 4-byte, some compiler, like clang, may
> > not use the "push" insn. Instead, it allocates 4 bytes in the
> > stack, and copies the arguments from regs or imm into
> > stack in 4-byte. This is the case we deal with here.
>
> If the compiler sometimes writes a 4 byte (or smaller) value
> to pre-allocated stack then it is always allowed to do that.
> So the high bytes of the stack slot that contains a 32bit
> argument might always be junk.
> The count of on-stack arguments isn't relevant.
>

Yes, the way we clean garbage values comes from
an assumption, not from the ABI. However,
it should be ok on the BPF side, like what Yonghong
said.

> > I'm not sure if I understand you correctly. Do you mean
> > that there will be garbage values for 32bit args?
>
> I'm pretty sure that the function call ABI doesn't require the
> caller set the high bits of sub-64bit arguments.
> The fact that they are often written with a push instruction
> that zeros the high bytes isn't really relevant.
>
> > > I think the called code is also responsible form masking 8 and 16bit
> > > values (in reality char/short args and return values just add code
> > > bloat).
> > >
> > > A 128bit value is either passed in two registers or two stack
> > > slots. If the last register is skipped it will be used for the
> > > next argument.
> > >
> >
> > Yeah, this point is considered in save_args(). Once
> > this happen, the count of stack slots should more
> > then 1, and the arguments on-stack will be stored with
> > "push" insn in 8-byte. Therefore, there shouldn't be garbage
> > values in this case?
> >
> > Do I miss something?
>
> The register/stack for these two calls is the same:
> foo(1, 2, 3, 4, 5, 6, (int128_t)7);
> bar(1, 2, 3, 4, 5, (int128_t)7, 6);
>

It is ok, as we already consider such cases. For
foo(), the order we copy the args in is:

reg1, reg2, reg3, reg4, reg5, reg6, stack1, stack2

and for bar(), it is:

reg1, reg2, reg3, reg4, reg5, stack1, stack2, reg6

The order of the arguments in the array we pass
to the BPF program is ok.

Thanks!
Menglong Dong

> David
>