2023-01-24 16:09:29

by David Vernet

Subject: [PATCH bpf-next v2 0/4] Enable struct_ops programs to be sleepable

This is part 2 of https://lore.kernel.org/bpf/[email protected]/

Changelog:
----------
v1 -> v2:
- Add support for specifying sleepable struct_ops programs with
struct_ops.s in libbpf (Alexei).
- Move failure test case into new dummy_st_ops_fail.c prog file.
- Update test_dummy_sleepable() to use struct_ops.s instead of manually
setting prog flags. Also remove open_load_skel() helper which is no
longer needed.
- Fix verifier tests to expect new sleepable prog failure message.

David Vernet (4):
bpf: Allow BPF_PROG_TYPE_STRUCT_OPS programs to be sleepable
libbpf: Support sleepable struct_ops.s section
bpf: Pass const struct bpf_prog * to .check_member
bpf/selftests: Verify struct_ops prog sleepable behavior

include/linux/bpf.h | 4 +-
kernel/bpf/verifier.c | 7 ++-
net/bpf/bpf_dummy_struct_ops.c | 18 ++++++
net/bpf/test_run.c | 6 ++
net/ipv4/bpf_tcp_ca.c | 3 +-
tools/lib/bpf/libbpf.c | 1 +
.../selftests/bpf/prog_tests/dummy_st_ops.c | 56 ++++++++++++++-----
.../selftests/bpf/progs/dummy_st_ops_common.h | 20 +++++++
.../selftests/bpf/progs/dummy_st_ops_fail.c | 41 ++++++++++++++
...{dummy_st_ops.c => dummy_st_ops_success.c} | 18 +++---
10 files changed, 145 insertions(+), 29 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops_fail.c
rename tools/testing/selftests/bpf/progs/{dummy_st_ops.c => dummy_st_ops_success.c} (75%)

--
2.39.0



2023-01-24 16:09:32

by David Vernet

Subject: [PATCH bpf-next v2 1/4] bpf: Allow BPF_PROG_TYPE_STRUCT_OPS programs to be sleepable

BPF struct_ops programs currently cannot be marked as sleepable. This
need not be the case -- struct_ops programs can be sleepable, and e.g.
invoke kfuncs annotated with the KF_SLEEPABLE flag. To allow future
struct_ops programs to invoke such kfuncs, this patch updates the
verifier to allow struct_ops programs to be sleepable. A follow-on patch
will add support to libbpf for specifying struct_ops.s as a sleepable
struct_ops program, and then another patch will add testcases to the
dummy_st_ops selftest suite which test sleepable struct_ops behavior.

Signed-off-by: David Vernet <[email protected]>
---
kernel/bpf/verifier.c | 5 +++--
tools/testing/selftests/bpf/verifier/sleepable.c | 2 +-
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 800488289297..2b8f0c0aa0cc 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17054,7 +17054,8 @@ static bool can_be_sleepable(struct bpf_prog *prog)
}
}
return prog->type == BPF_PROG_TYPE_LSM ||
- prog->type == BPF_PROG_TYPE_KPROBE; /* only for uprobes */
+ prog->type == BPF_PROG_TYPE_KPROBE /* only for uprobes */ ||
+ prog->type == BPF_PROG_TYPE_STRUCT_OPS;
}

static int check_attach_btf_id(struct bpf_verifier_env *env)
@@ -17076,7 +17077,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
}

if (prog->aux->sleepable && !can_be_sleepable(prog)) {
- verbose(env, "Only fentry/fexit/fmod_ret, lsm, iter and uprobe programs can be sleepable\n");
+ verbose(env, "Only fentry/fexit/fmod_ret, lsm, iter, uprobe, and struct_ops programs can be sleepable\n");
return -EINVAL;
}

diff --git a/tools/testing/selftests/bpf/verifier/sleepable.c b/tools/testing/selftests/bpf/verifier/sleepable.c
index bea0daef908a..1f0d2bdc673f 100644
--- a/tools/testing/selftests/bpf/verifier/sleepable.c
+++ b/tools/testing/selftests/bpf/verifier/sleepable.c
@@ -85,7 +85,7 @@
.expected_attach_type = BPF_TRACE_RAW_TP,
.kfunc = "sched_switch",
.result = REJECT,
- .errstr = "Only fentry/fexit/fmod_ret, lsm, iter and uprobe programs can be sleepable",
+ .errstr = "Only fentry/fexit/fmod_ret, lsm, iter, uprobe, and struct_ops programs can be sleepable",
.flags = BPF_F_SLEEPABLE,
.runs = -1,
},
--
2.39.0


2023-01-24 16:09:37

by David Vernet

Subject: [PATCH bpf-next v2 2/4] libbpf: Support sleepable struct_ops.s section

In a prior change, the verifier was updated to support sleepable
BPF_PROG_TYPE_STRUCT_OPS programs. A caller could set the program as
sleepable with bpf_program__set_flags(), but it would be more ergonomic
and more in line with other sleepable program types if we supported
suffixing a struct_ops section name with .s to indicate that it's
sleepable.

Signed-off-by: David Vernet <[email protected]>
---
tools/lib/bpf/libbpf.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 27d9faa80471..eed5cec6f510 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -8605,6 +8605,7 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("cgroup/setsockopt", CGROUP_SOCKOPT, BPF_CGROUP_SETSOCKOPT, SEC_ATTACHABLE),
SEC_DEF("cgroup/dev", CGROUP_DEVICE, BPF_CGROUP_DEVICE, SEC_ATTACHABLE_OPT),
SEC_DEF("struct_ops+", STRUCT_OPS, 0, SEC_NONE),
+ SEC_DEF("struct_ops.s+", STRUCT_OPS, 0, SEC_SLEEPABLE),
SEC_DEF("sk_lookup", SK_LOOKUP, BPF_SK_LOOKUP, SEC_ATTACHABLE),
};

--
2.39.0


2023-01-24 16:09:59

by David Vernet

Subject: [PATCH bpf-next v2 3/4] bpf: Pass const struct bpf_prog * to .check_member

The .check_member field of struct bpf_struct_ops is currently passed the
member's btf_type via const struct btf_type *t, and a const struct
btf_member *member. This allows the struct_ops implementation to check
whether e.g. a given op is supported, but it would be useful to also
enforce that the struct_ops prog being loaded for that member has other
qualities, such as being sleepable (or not). This patch therefore updates
the .check_member() callback to also take a const struct bpf_prog *prog
argument.

Signed-off-by: David Vernet <[email protected]>
---
include/linux/bpf.h | 3 ++-
kernel/bpf/verifier.c | 2 +-
net/ipv4/bpf_tcp_ca.c | 3 ++-
3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index ad4bb36d4c10..50123afab9bf 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1422,7 +1422,8 @@ struct bpf_struct_ops {
const struct bpf_verifier_ops *verifier_ops;
int (*init)(struct btf *btf);
int (*check_member)(const struct btf_type *t,
- const struct btf_member *member);
+ const struct btf_member *member,
+ const struct bpf_prog *prog);
int (*init_member)(const struct btf_type *t,
const struct btf_member *member,
void *kdata, const void *udata);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2b8f0c0aa0cc..20dd03498e56 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16732,7 +16732,7 @@ static int check_struct_ops_btf_id(struct bpf_verifier_env *env)
}

if (st_ops->check_member) {
- int err = st_ops->check_member(t, member);
+ int err = st_ops->check_member(t, member, prog);

if (err) {
verbose(env, "attach to unsupported member %s of struct %s\n",
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 4517d2bd186a..13fc0c185cd9 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -248,7 +248,8 @@ static int bpf_tcp_ca_init_member(const struct btf_type *t,
}

static int bpf_tcp_ca_check_member(const struct btf_type *t,
- const struct btf_member *member)
+ const struct btf_member *member,
+ const struct bpf_prog *prog)
{
if (is_unsupported(__btf_member_bit_offset(t, member) / 8))
return -ENOTSUPP;
--
2.39.0


2023-01-24 16:10:03

by David Vernet

Subject: [PATCH bpf-next v2 4/4] bpf/selftests: Verify struct_ops prog sleepable behavior

In a set of prior changes, we added the ability for struct_ops programs
to be sleepable. This patch enhances the dummy_st_ops selftest suite to
validate this behavior by adding a new struct_ops entry to dummy_st_ops
which calls a KF_SLEEPABLE kfunc.

Signed-off-by: David Vernet <[email protected]>
---
include/linux/bpf.h | 1 +
net/bpf/bpf_dummy_struct_ops.c | 18 ++++++
net/bpf/test_run.c | 6 ++
.../selftests/bpf/prog_tests/dummy_st_ops.c | 56 ++++++++++++++-----
.../selftests/bpf/progs/dummy_st_ops_common.h | 20 +++++++
.../selftests/bpf/progs/dummy_st_ops_fail.c | 41 ++++++++++++++
...{dummy_st_ops.c => dummy_st_ops_success.c} | 18 +++---
7 files changed, 136 insertions(+), 24 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops_fail.c
rename tools/testing/selftests/bpf/progs/{dummy_st_ops.c => dummy_st_ops_success.c} (75%)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 50123afab9bf..64034311c5f7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1474,6 +1474,7 @@ struct bpf_dummy_ops {
int (*test_1)(struct bpf_dummy_ops_state *cb);
int (*test_2)(struct bpf_dummy_ops_state *cb, int a1, unsigned short a2,
char a3, unsigned long a4);
+ int (*test_3)(struct bpf_dummy_ops_state *cb);
};

int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 1ac4467928a9..46099737d1da 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -154,6 +154,23 @@ static bool bpf_dummy_ops_is_valid_access(int off, int size,
return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
}

+static int bpf_dummy_ops_check_member(const struct btf_type *t,
+ const struct btf_member *member,
+ const struct bpf_prog *prog)
+{
+ u32 moff = __btf_member_bit_offset(t, member) / 8;
+
+ switch (moff) {
+ case offsetof(struct bpf_dummy_ops, test_3):
+ break;
+ default:
+ if (prog->aux->sleepable)
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
const struct bpf_reg_state *reg,
int off, int size, enum bpf_access_type atype,
@@ -208,6 +225,7 @@ static void bpf_dummy_unreg(void *kdata)
struct bpf_struct_ops bpf_bpf_dummy_ops = {
.verifier_ops = &bpf_dummy_verifier_ops,
.init = bpf_dummy_init,
+ .check_member = bpf_dummy_ops_check_member,
.init_member = bpf_dummy_init_member,
.reg = bpf_dummy_reg,
.unreg = bpf_dummy_unreg,
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 8da0d73b368e..33ea57d34c0b 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -730,6 +730,10 @@ noinline void bpf_kfunc_call_test_destructive(void)
{
}

+noinline void bpf_kfunc_call_test_sleepable(void)
+{
+}
+
__diag_pop();

BTF_SET8_START(bpf_test_modify_return_ids)
@@ -767,6 +771,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test_sleepable, KF_SLEEPABLE)
BTF_SET8_END(test_sk_check_kfunc_ids)

static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
@@ -1680,6 +1685,7 @@ static int __init bpf_prog_test_run_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_prog_test_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_prog_test_kfunc_set);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_prog_test_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_STRUCT_OPS, &bpf_prog_test_kfunc_set);
return ret ?: register_btf_id_dtor_kfuncs(bpf_prog_test_dtor_kfunc,
ARRAY_SIZE(bpf_prog_test_dtor_kfunc),
THIS_MODULE);
diff --git a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
index c11832657d2b..dbf568e0ade2 100644
--- a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
+++ b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
@@ -1,7 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
#include <test_progs.h>
-#include "dummy_st_ops.skel.h"
+#include "dummy_st_ops_success.skel.h"
+#include "dummy_st_ops_fail.skel.h"
#include "trace_dummy_st_ops.skel.h"

/* Need to keep consistent with definition in include/linux/bpf.h */
@@ -11,17 +12,17 @@ struct bpf_dummy_ops_state {

static void test_dummy_st_ops_attach(void)
{
- struct dummy_st_ops *skel;
+ struct dummy_st_ops_success *skel;
struct bpf_link *link;

- skel = dummy_st_ops__open_and_load();
+ skel = dummy_st_ops_success__open_and_load();
if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
return;

link = bpf_map__attach_struct_ops(skel->maps.dummy_1);
ASSERT_EQ(libbpf_get_error(link), -EOPNOTSUPP, "dummy_st_ops_attach");

- dummy_st_ops__destroy(skel);
+ dummy_st_ops_success__destroy(skel);
}

static void test_dummy_init_ret_value(void)
@@ -31,10 +32,10 @@ static void test_dummy_init_ret_value(void)
.ctx_in = args,
.ctx_size_in = sizeof(args),
);
- struct dummy_st_ops *skel;
+ struct dummy_st_ops_success *skel;
int fd, err;

- skel = dummy_st_ops__open_and_load();
+ skel = dummy_st_ops_success__open_and_load();
if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
return;

@@ -43,7 +44,7 @@ static void test_dummy_init_ret_value(void)
ASSERT_OK(err, "test_run");
ASSERT_EQ(attr.retval, 0xf2f3f4f5, "test_ret");

- dummy_st_ops__destroy(skel);
+ dummy_st_ops_success__destroy(skel);
}

static void test_dummy_init_ptr_arg(void)
@@ -58,17 +59,17 @@ static void test_dummy_init_ptr_arg(void)
.ctx_size_in = sizeof(args),
);
struct trace_dummy_st_ops *trace_skel;
- struct dummy_st_ops *skel;
+ struct dummy_st_ops_success *skel;
int fd, err;

- skel = dummy_st_ops__open_and_load();
+ skel = dummy_st_ops_success__open_and_load();
if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
return;

fd = bpf_program__fd(skel->progs.test_1);

trace_skel = trace_dummy_st_ops__open();
- if (!ASSERT_OK_PTR(trace_skel, "trace_dummy_st_ops__open"))
+ if (!ASSERT_OK_PTR(trace_skel, "trace_dummy_st_ops_success__open"))
goto done;

err = bpf_program__set_attach_target(trace_skel->progs.fentry_test_1,
@@ -91,7 +92,7 @@ static void test_dummy_init_ptr_arg(void)
ASSERT_EQ(trace_skel->bss->val, exp_retval, "fentry_val");

done:
- dummy_st_ops__destroy(skel);
+ dummy_st_ops_success__destroy(skel);
trace_dummy_st_ops__destroy(trace_skel);
}

@@ -102,12 +103,12 @@ static void test_dummy_multiple_args(void)
.ctx_in = args,
.ctx_size_in = sizeof(args),
);
- struct dummy_st_ops *skel;
+ struct dummy_st_ops_success *skel;
int fd, err;
size_t i;
char name[8];

- skel = dummy_st_ops__open_and_load();
+ skel = dummy_st_ops_success__open_and_load();
if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
return;

@@ -119,10 +120,31 @@ static void test_dummy_multiple_args(void)
ASSERT_EQ(skel->bss->test_2_args[i], args[i], name);
}

- dummy_st_ops__destroy(skel);
+ dummy_st_ops_success__destroy(skel);
}

-void test_dummy_st_ops(void)
+static void test_dummy_sleepable(void)
+{
+ __u64 args[1] = {0};
+ LIBBPF_OPTS(bpf_test_run_opts, attr,
+ .ctx_in = args,
+ .ctx_size_in = sizeof(args),
+ );
+ struct dummy_st_ops_success *skel;
+ int fd, err;
+
+ skel = dummy_st_ops_success__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+ return;
+
+ fd = bpf_program__fd(skel->progs.test_3);
+ err = bpf_prog_test_run_opts(fd, &attr);
+ ASSERT_OK(err, "test_run");
+
+ dummy_st_ops_success__destroy(skel);
+}
+
+void test_dummy_st_ops_success(void)
{
if (test__start_subtest("dummy_st_ops_attach"))
test_dummy_st_ops_attach();
@@ -132,4 +154,8 @@ void test_dummy_st_ops(void)
test_dummy_init_ptr_arg();
if (test__start_subtest("dummy_multiple_args"))
test_dummy_multiple_args();
+ if (test__start_subtest("dummy_sleepable"))
+ test_dummy_sleepable();
+
+ RUN_TESTS(dummy_st_ops_fail);
}
diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h b/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
new file mode 100644
index 000000000000..7d0761594b69
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _DUMMY_ST_OPS_COMMON_H
+#define _DUMMY_ST_OPS_COMMON_H
+
+struct bpf_dummy_ops_state {
+ int val;
+} __attribute__((preserve_access_index));
+
+struct bpf_dummy_ops {
+ int (*test_1)(struct bpf_dummy_ops_state *state);
+ int (*test_2)(struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
+ char a3, unsigned long a4);
+ int (*test_3)(struct bpf_dummy_ops_state *state);
+};
+
+void bpf_kfunc_call_test_sleepable(void) __ksym;
+
+#endif /* _DUMMY_ST_OPS_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops_fail.c b/tools/testing/selftests/bpf/progs/dummy_st_ops_fail.c
new file mode 100644
index 000000000000..e3f6c5445e0d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dummy_st_ops_fail.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#include "bpf_misc.h"
+#include "dummy_st_ops_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops.s/test_2")
+__failure __msg("attach to unsupported member test_2 of struct bpf_dummy_ops")
+int BPF_PROG(test_unsupported_field_sleepable,
+ struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
+ char a3, unsigned long a4)
+{
+ /* Tries to mark an unsleepable field in struct bpf_dummy_ops as sleepable. */
+ return 0;
+}
+
+SEC("struct_ops/test_3")
+__failure __msg("program must be sleepable to call sleepable kfunc bpf_kfunc_call_test_sleepable")
+int BPF_PROG(test_supported_field_not_in_sleepable_section, struct bpf_dummy_ops_state *state)
+{
+ /*
+ * Calls a sleepable kfunc, but doesn't mark self as sleepable, even
+ * though test_3 is allowed to be a sleepable field in struct
+ * bpf_ops_dummy.
+ */
+ bpf_kfunc_call_test_sleepable();
+ return 0;
+}
+
+SEC(".struct_ops")
+struct bpf_dummy_ops dummy_1 = {
+ .test_1 = NULL,
+ .test_2 = (void *)test_unsupported_field_sleepable,
+ .test_3 = (void *)test_supported_field_not_in_sleepable_section,
+};
diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops.c b/tools/testing/selftests/bpf/progs/dummy_st_ops_success.c
similarity index 75%
rename from tools/testing/selftests/bpf/progs/dummy_st_ops.c
rename to tools/testing/selftests/bpf/progs/dummy_st_ops_success.c
index ead87edb75e2..1e9fab99ac0d 100644
--- a/tools/testing/selftests/bpf/progs/dummy_st_ops.c
+++ b/tools/testing/selftests/bpf/progs/dummy_st_ops_success.c
@@ -4,15 +4,7 @@
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

-struct bpf_dummy_ops_state {
- int val;
-} __attribute__((preserve_access_index));
-
-struct bpf_dummy_ops {
- int (*test_1)(struct bpf_dummy_ops_state *state);
- int (*test_2)(struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
- char a3, unsigned long a4);
-};
+#include "dummy_st_ops_common.h"

char _license[] SEC("license") = "GPL";

@@ -43,8 +35,16 @@ int BPF_PROG(test_2, struct bpf_dummy_ops_state *state, int a1, unsigned short a
return 0;
}

+SEC("struct_ops.s/test_3")
+int BPF_PROG(test_3, struct bpf_dummy_ops_state *state)
+{
+ bpf_kfunc_call_test_sleepable();
+ return 0;
+}
+
SEC(".struct_ops")
struct bpf_dummy_ops dummy_1 = {
.test_1 = (void *)test_1,
.test_2 = (void *)test_2,
+ .test_3 = (void *)test_3,
};
--
2.39.0


2023-01-24 19:52:31

by Martin KaFai Lau

Subject: Re: [PATCH bpf-next v2 4/4] bpf/selftests: Verify struct_ops prog sleepable behavior

On 1/24/23 8:08 AM, David Vernet wrote:
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 50123afab9bf..64034311c5f7 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1474,6 +1474,7 @@ struct bpf_dummy_ops {
> int (*test_1)(struct bpf_dummy_ops_state *cb);
> int (*test_2)(struct bpf_dummy_ops_state *cb, int a1, unsigned short a2,
> char a3, unsigned long a4);
> + int (*test_3)(struct bpf_dummy_ops_state *cb);

nit. Maybe a self-describing name like test_sleepable().

> };
>
> int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
> diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
> index 1ac4467928a9..46099737d1da 100644
> --- a/net/bpf/bpf_dummy_struct_ops.c
> +++ b/net/bpf/bpf_dummy_struct_ops.c
> @@ -154,6 +154,23 @@ static bool bpf_dummy_ops_is_valid_access(int off, int size,
> return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
> }
>
> +static int bpf_dummy_ops_check_member(const struct btf_type *t,
> + const struct btf_member *member,
> + const struct bpf_prog *prog)
> +{
> + u32 moff = __btf_member_bit_offset(t, member) / 8;
> +
> + switch (moff) {
> + case offsetof(struct bpf_dummy_ops, test_3):
> + break;
> + default:
> + if (prog->aux->sleepable)
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
> const struct bpf_reg_state *reg,
> int off, int size, enum bpf_access_type atype,
> @@ -208,6 +225,7 @@ static void bpf_dummy_unreg(void *kdata)
> struct bpf_struct_ops bpf_bpf_dummy_ops = {
> .verifier_ops = &bpf_dummy_verifier_ops,
> .init = bpf_dummy_init,
> + .check_member = bpf_dummy_ops_check_member,
> .init_member = bpf_dummy_init_member,
> .reg = bpf_dummy_reg,
> .unreg = bpf_dummy_unreg,
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 8da0d73b368e..33ea57d34c0b 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -730,6 +730,10 @@ noinline void bpf_kfunc_call_test_destructive(void)
> {
> }
>
> +noinline void bpf_kfunc_call_test_sleepable(void)
> +{
> +}
> +
> __diag_pop();
>
> BTF_SET8_START(bpf_test_modify_return_ids)
> @@ -767,6 +771,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
> BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
> BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS)
> BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
> +BTF_ID_FLAGS(func, bpf_kfunc_call_test_sleepable, KF_SLEEPABLE)

A KF_SLEEPABLE kfunc is not specific to the struct_ops prog. I hope a test has
already covered that a KF_SLEEPABLE kfunc can only be called from a sleepable
prog, e.g. through bpf_fentry_test1.

This new kfunc could then be omitted, making the test simpler. There would also
be no need to add the test to DENYLIST.s390x:
https://github.com/kernel-patches/bpf/actions/runs/3998188872/jobs/6861920516

> diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h b/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
> new file mode 100644
> index 000000000000..7d0761594b69
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
> +
> +#ifndef _DUMMY_ST_OPS_COMMON_H
> +#define _DUMMY_ST_OPS_COMMON_H
> +
> +struct bpf_dummy_ops_state {
> + int val;
> +} __attribute__((preserve_access_index));
> +
> +struct bpf_dummy_ops {
> + int (*test_1)(struct bpf_dummy_ops_state *state);
> + int (*test_2)(struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
> + char a3, unsigned long a4);
> + int (*test_3)(struct bpf_dummy_ops_state *state);
> +};

Instead of adding a new dummy_st_ops_common.h header, try to directly include
vmlinux.h in the dummy_st_ops_{success,fail}.c.

> +
> +void bpf_kfunc_call_test_sleepable(void) __ksym;


2023-01-24 21:11:13

by David Vernet

Subject: Re: [PATCH bpf-next v2 4/4] bpf/selftests: Verify struct_ops prog sleepable behavior

On Tue, Jan 24, 2023 at 11:52:17AM -0800, Martin KaFai Lau wrote:
> On 1/24/23 8:08 AM, David Vernet wrote:
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 50123afab9bf..64034311c5f7 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -1474,6 +1474,7 @@ struct bpf_dummy_ops {
> > int (*test_1)(struct bpf_dummy_ops_state *cb);
> > int (*test_2)(struct bpf_dummy_ops_state *cb, int a1, unsigned short a2,
> > char a3, unsigned long a4);
> > + int (*test_3)(struct bpf_dummy_ops_state *cb);
>
> nit. Maybe a self-describing name like test_sleepable().

Will do. I agree that's better, but was just following the existing
contours of the file. Happy to have an excuse to improve it.

>
> > };
> > int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
> > diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
> > index 1ac4467928a9..46099737d1da 100644
> > --- a/net/bpf/bpf_dummy_struct_ops.c
> > +++ b/net/bpf/bpf_dummy_struct_ops.c
> > @@ -154,6 +154,23 @@ static bool bpf_dummy_ops_is_valid_access(int off, int size,
> > return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
> > }
> > +static int bpf_dummy_ops_check_member(const struct btf_type *t,
> > + const struct btf_member *member,
> > + const struct bpf_prog *prog)
> > +{
> > + u32 moff = __btf_member_bit_offset(t, member) / 8;
> > +
> > + switch (moff) {
> > + case offsetof(struct bpf_dummy_ops, test_3):
> > + break;
> > + default:
> > + if (prog->aux->sleepable)
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
> > const struct bpf_reg_state *reg,
> > int off, int size, enum bpf_access_type atype,
> > @@ -208,6 +225,7 @@ static void bpf_dummy_unreg(void *kdata)
> > struct bpf_struct_ops bpf_bpf_dummy_ops = {
> > .verifier_ops = &bpf_dummy_verifier_ops,
> > .init = bpf_dummy_init,
> > + .check_member = bpf_dummy_ops_check_member,
> > .init_member = bpf_dummy_init_member,
> > .reg = bpf_dummy_reg,
> > .unreg = bpf_dummy_unreg,
> > diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> > index 8da0d73b368e..33ea57d34c0b 100644
> > --- a/net/bpf/test_run.c
> > +++ b/net/bpf/test_run.c
> > @@ -730,6 +730,10 @@ noinline void bpf_kfunc_call_test_destructive(void)
> > {
> > }
> > +noinline void bpf_kfunc_call_test_sleepable(void)
> > +{
> > +}
> > +
> > __diag_pop();
> > BTF_SET8_START(bpf_test_modify_return_ids)
> > @@ -767,6 +771,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
> > BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
> > BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS)
> > BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE)
> > +BTF_ID_FLAGS(func, bpf_kfunc_call_test_sleepable, KF_SLEEPABLE)
>
> A KF_SLEEPABLE kfunc is not specific to the struct_ops prog. I hope a test has
> already covered that a KF_SLEEPABLE kfunc can only be called from a sleepable
> prog, e.g. through bpf_fentry_test1.
>
> This new kfunc could then be omitted, making the test simpler. There would
> also be no need to add the test to DENYLIST.s390x:
> https://github.com/kernel-patches/bpf/actions/runs/3998188872/jobs/6861920516

Ah, good point. Totally forgot about s390x. Will send out a v3 that
doesn't bother with also including the KF_SLEEPABLE invocation, and
instead just validates that .check_member() is called.

>
> > diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h b/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
> > new file mode 100644
> > index 000000000000..7d0761594b69
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/dummy_st_ops_common.h
> > @@ -0,0 +1,20 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
> > +
> > +#ifndef _DUMMY_ST_OPS_COMMON_H
> > +#define _DUMMY_ST_OPS_COMMON_H
> > +
> > +struct bpf_dummy_ops_state {
> > + int val;
> > +} __attribute__((preserve_access_index));
> > +
> > +struct bpf_dummy_ops {
> > + int (*test_1)(struct bpf_dummy_ops_state *state);
> > + int (*test_2)(struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
> > + char a3, unsigned long a4);
> > + int (*test_3)(struct bpf_dummy_ops_state *state);
> > +};
>
> Instead of adding a new dummy_st_ops_common.h header, try to directly
> include vmlinux.h in the dummy_st_ops_{success,fail}.c.

Ack, I'll give it a shot. Should be fine to include once we get rid of
the test logic that includes the KF_SLEEPABLE kfunc.

>
> > +
> > +void bpf_kfunc_call_test_sleepable(void) __ksym;
>