These patches test the fix made in commit e2f7fc0ac695 ("bpf: fix
undefined behavior in narrow load handling"). The problem was in the
generated BPF bytecode doing a 32bit narrow read of a 64bit field, so
to test the fix the code actually needs to be executed.
Currently the only such field exists in BPF_PROG_TYPE_PERF_EVENT,
which was not supported by bpf_prog_test_run().
I'm sending these patches to bpf-next now as they introduce a new
feature. But maybe some of those patches could go to the bpf branch?
There is a bit of yak shaving to do for the test to be run:
1. Print why the program could not be run (patch 1).
2. Fix two cases of errno clobbering (patches 2 and 3).
3. Use bpf_prog_test_run_xattr, so ctx data can be passed too
(patch 4).
4. Add ctx fields to struct bpf_test (patch 5).
5. Sync some tools headers (patches 6 and 7).
6. Implement bpf_prog_test_run for perf event programs and test it
(patches 8 and 9).
The last point is where I'm least sure how things should be done
properly:
1. There is a bunch of setup needed for bpf_perf_prog_read_value to
work, and that setup is very hacky. I would welcome some hints about
how to set up the perf_event and perf_sample_data structs in a way
that is a bit more future-proof than just setting some fields in a
specific way so that other code paths are not taken (like setting
event.oncpu to -1 to avoid event.pmu being used for reading the
value of the event).
2. The tests try to see if the test run for perf event programs sets
up the context properly, so they verify the struct pt_regs contents.
The way it is currently written works for me, but surely it won't
work on other architectures. So what would be the way forward? Just
put the test case inside #ifdef __x86_64__?
3. Another thing in the tests - I'm trying to make sure that the
bpf_perf_prog_read_value helper works, as it seems to be the only
BPF perf helper that takes the ctx as a parameter. Is that enough,
or should I test other helpers too?
About the test itself - I'm not sure if it will work on a big endian
machine. I think it should, but I don't have anything handy here to
verify it.
Krzesimir Nowak (10):
selftests/bpf: Print a message when tester could not run a program
selftests/bpf: Avoid a clobbering of errno
selftests/bpf: Avoid another case of errno clobbering
selftests/bpf: Use bpf_prog_test_run_xattr
selftests/bpf: Allow passing more information to BPF prog test run
tools headers: Adopt compiletime_assert from kernel sources
tools headers: sync struct bpf_perf_event_data
bpf: Implement bpf_prog_test_run for perf event programs
selftests/bpf: Add tests for bpf_prog_test_run for perf events progs
selftests/bpf: Test correctness of narrow 32bit read on 64bit field
kernel/trace/bpf_trace.c | 107 +++++++++++
tools/include/linux/compiler.h | 28 +++
tools/include/uapi/linux/bpf_perf_event.h | 1 +
tools/testing/selftests/bpf/test_verifier.c | 172 ++++++++++++++++--
.../selftests/bpf/verifier/perf_event_run.c | 93 ++++++++++
.../bpf/verifier/perf_event_sample_period.c | 8 +
.../testing/selftests/bpf/verifier/var_off.c | 20 ++
7 files changed, 414 insertions(+), 15 deletions(-)
create mode 100644 tools/testing/selftests/bpf/verifier/perf_event_run.c
--
2.20.1
Test the correctness of 32bit narrow reads by reading both halves of
the 64bit field and doing a binary OR on them to see if we get the
original value back.
The test succeeds as it should, but with commit e2f7fc0ac695 ("bpf:
fix undefined behavior in narrow load handling") reverted, it fails
with the following message:
> $ sudo ./test_verifier
> ...
> #967/p 32bit loads of a 64bit field (both least and most significant words) FAIL retval -1985229329 != 0
> verification time 17 usec
> stack depth 0
> processed 8 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
> ...
> Summary: 1519 PASSED, 0 SKIPPED, 1 FAILED
Signed-off-by: Krzesimir Nowak <[email protected]>
---
tools/testing/selftests/bpf/test_verifier.c | 19 ++++++++++++++++++
.../testing/selftests/bpf/verifier/var_off.c | 20 +++++++++++++++++++
2 files changed, 39 insertions(+)
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 6fa962014b64..444c1ea1e326 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -24,6 +24,7 @@
#include <sys/capability.h>
+#include <linux/compiler.h>
#include <linux/unistd.h>
#include <linux/filter.h>
#include <linux/bpf_perf_event.h>
@@ -341,6 +342,24 @@ static void bpf_fill_perf_event_test_run_check(struct bpf_test *self)
self->fill_insns = NULL;
}
+static void bpf_fill_32bit_loads(struct bpf_test *self)
+{
+ compiletime_assert(
+ sizeof(struct bpf_perf_event_data) <= TEST_CTX_LEN,
+ "buffer for ctx is too short to fit struct bpf_perf_event_data");
+ compiletime_assert(
+ sizeof(struct bpf_perf_event_value) <= TEST_DATA_LEN,
+ "buffer for data is too short to fit struct bpf_perf_event_value");
+
+ struct bpf_perf_event_data ctx = {
+ .sample_period = 0x0123456789abcdef,
+ };
+
+ memcpy(self->ctx, &ctx, sizeof(ctx));
+ free(self->fill_insns);
+ self->fill_insns = NULL;
+}
+
/* BPF_SK_LOOKUP contains 13 instructions, if you need to fix up maps */
#define BPF_SK_LOOKUP(func) \
/* struct bpf_sock_tuple tuple = {} */ \
diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
index 8504ac937809..14d222f37081 100644
--- a/tools/testing/selftests/bpf/verifier/var_off.c
+++ b/tools/testing/selftests/bpf/verifier/var_off.c
@@ -246,3 +246,23 @@
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_LWT_IN,
},
+{
+ "32bit loads of a 64bit field (both least and most significant words)",
+ .insns = {
+ BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct bpf_perf_event_data, sample_period)),
+ BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, offsetof(struct bpf_perf_event_data, sample_period) + 4),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct bpf_perf_event_data, sample_period)),
+ BPF_ALU64_IMM(BPF_LSH, BPF_REG_5, 32),
+ BPF_ALU64_REG(BPF_OR, BPF_REG_4, BPF_REG_5),
+ BPF_ALU64_REG(BPF_XOR, BPF_REG_4, BPF_REG_6),
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_PERF_EVENT,
+ .ctx = { 0, },
+ .ctx_len = sizeof(struct bpf_perf_event_data),
+ .data = { 0, },
+ .data_len = sizeof(struct bpf_perf_event_value),
+ .fill_helper = bpf_fill_32bit_loads,
+},
--
2.20.1
The bpf_prog_test_run_xattr function gives more options for setting up
a test run of a BPF program than the bpf_prog_test_run function does.
We will need this extra flexibility to pass ctx data later.
Signed-off-by: Krzesimir Nowak <[email protected]>
---
tools/testing/selftests/bpf/test_verifier.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 779e30b96ded..db1f0f758f81 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -822,14 +822,20 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
{
__u8 tmp[TEST_DATA_LEN << 2];
__u32 size_tmp = sizeof(tmp);
- uint32_t retval;
int err;
int saved_errno;
+ struct bpf_prog_test_run_attr attr = {
+ .prog_fd = fd_prog,
+ .repeat = 1,
+ .data_in = data,
+ .data_size_in = size_data,
+ .data_out = tmp,
+ .data_size_out = size_tmp,
+ };
if (unpriv)
set_admin(true);
- err = bpf_prog_test_run(fd_prog, 1, data, size_data,
- tmp, &size_tmp, &retval, NULL);
+ err = bpf_prog_test_run_xattr(&attr);
saved_errno = errno;
if (unpriv)
set_admin(false);
@@ -846,9 +852,9 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val,
return err;
}
}
- if (retval != expected_val &&
+ if (attr.retval != expected_val &&
expected_val != POINTER_VALUE) {
- printf("FAIL retval %d != %d ", retval, expected_val);
+ printf("FAIL retval %d != %d ", attr.retval, expected_val);
return 1;
}
--
2.20.1