2020-12-03 16:07:27

by Brendan Jackman

Subject: [PATCH bpf-next v3 00/14] Atomics for eBPF

Status of the patches
=====================

Thanks for the reviews! Differences from v2->v3 [1]:

* More minor fixes and naming/comment changes

* Dropped atomic subtract: compilers can implement this by preceding
an atomic add with a NEG instruction (which is what the x86 JIT did
under the hood anyway).

* Dropped the use of -mcpu=v4 in the Clang BPF command-line; there is
no longer an architecture version bump. Instead a feature test is
added to Kbuild - it builds a source file to check if Clang
supports BPF atomics.

* Fixed the prog_test so it no longer breaks
test_progs-no_alu32. This requires some ifdef acrobatics to avoid
complicating the prog_tests model where the same userspace code
exercises both the normal and no_alu32 BPF test objects, using the
same skeleton header.

Differences from v1->v2 [1]:

* Fixed mistakes in the netronome driver

* Added sub, and, or, xor operations

* The above led to some refactors to keep things readable. (Maybe I
should have just waited until I'd implemented these before starting
the review...)

* Replaced BPF_[CMP]SET | BPF_FETCH with just BPF_[CMP]XCHG, whose
encodings include the BPF_FETCH flag

* Added a bit of documentation. Suggestions welcome for more places
to dump this info...

The prog_test that's added depends on Clang/LLVM features added by
Yonghong in https://reviews.llvm.org/D72184

This only includes a JIT implementation for x86_64 - I don't plan to
implement JIT support myself for other architectures.

Operations
==========

This patchset adds atomic operations to the eBPF instruction set. The
use-case that motivated this work was a trivial and efficient way to
generate globally-unique cookies in BPF progs, but I think it's
obvious that these features are pretty widely applicable. The
instructions that are added here can be summarised with this list of
kernel operations:

* atomic[64]_[fetch_]add
* atomic[64]_[fetch_]and
* atomic[64]_[fetch_]or
* atomic[64]_xchg
* atomic[64]_cmpxchg

The following are left out of scope for this effort:

* 16 and 8 bit operations
* Explicit memory barriers

Encoding
========

I originally planned to add new values for bpf_insn.opcode. This was
rather unpleasant: the opcode space has holes in it, but no entire
instruction class is free[2]. Yonghong Song had a better idea: use the
immediate field of the existing STX XADD instruction to encode the
operation. This works nicely, without breaking existing programs,
because the immediate field is currently reserved-must-be-zero, and
extra-nicely because BPF_ADD happens to be zero.

Note that this of course makes immediate-source atomic operations
impossible. It's hard to imagine a measurable speedup from such
instructions, and if it existed it would certainly not benefit x86,
which has no support for them.

The BPF_OP opcode fields are re-used in the immediate, and an
additional flag BPF_FETCH is used to mark instructions that should
fetch a pre-modification value from memory.

So, BPF_XADD is now called BPF_ATOMIC (the old name is kept to avoid
breaking userspace builds), and where we previously had .imm = 0, we
now have .imm = BPF_ADD (which is 0).

Operands
========

Reg-source eBPF instructions only have two operands, while these
atomic operations have up to four. To avoid needing to encode
additional operands:

- One of the input registers is re-used as an output register
(e.g. atomic_fetch_add both reads from and writes to the source
register).

- Where necessary (i.e. for cmpxchg), R0 is "hard-coded" as one of
the operands.

This approach also allows the new eBPF instructions to map directly
to single x86 instructions.

[1] Previous patchset:
https://lore.kernel.org/bpf/[email protected]/

[2] Visualisation of eBPF opcode space:
https://gist.github.com/bjackman/00fdad2d5dfff601c1918bc29b16e778


Brendan Jackman (14):
bpf: x86: Factor out emission of ModR/M for *(reg + off)
bpf: x86: Factor out emission of REX byte
bpf: x86: Factor out function to emit NEG
bpf: x86: Factor out a lookup table for some ALU opcodes
bpf: Rename BPF_XADD and prepare to encode other atomics in .imm
bpf: Move BPF_STX reserved field check into BPF_STX verifier code
bpf: Add BPF_FETCH field / create atomic_fetch_add instruction
bpf: Add instructions for atomic_[cmp]xchg
bpf: Pull out a macro for interpreting atomic ALU operations
bpf: Add bitwise atomic instructions
tools build: Implement feature check for BPF atomics in Clang
bpf: Pull tools/build/feature biz into selftests Makefile
bpf: Add tests for new BPF atomic operations
bpf: Document new atomic instructions

Documentation/networking/filter.rst | 56 +++-
arch/arm/net/bpf_jit_32.c | 7 +-
arch/arm64/net/bpf_jit_comp.c | 16 +-
arch/mips/net/ebpf_jit.c | 11 +-
arch/powerpc/net/bpf_jit_comp64.c | 25 +-
arch/riscv/net/bpf_jit_comp32.c | 20 +-
arch/riscv/net/bpf_jit_comp64.c | 16 +-
arch/s390/net/bpf_jit_comp.c | 27 +-
arch/sparc/net/bpf_jit_comp_64.c | 17 +-
arch/x86/net/bpf_jit_comp.c | 241 +++++++++++-----
arch/x86/net/bpf_jit_comp32.c | 6 +-
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 14 +-
drivers/net/ethernet/netronome/nfp/bpf/main.h | 4 +-
.../net/ethernet/netronome/nfp/bpf/verifier.c | 15 +-
include/linux/filter.h | 97 ++++++-
include/uapi/linux/bpf.h | 8 +-
kernel/bpf/core.c | 66 ++++-
kernel/bpf/disasm.c | 43 ++-
kernel/bpf/verifier.c | 75 +++--
lib/test_bpf.c | 2 +-
samples/bpf/bpf_insn.h | 4 +-
samples/bpf/sock_example.c | 2 +-
samples/bpf/test_cgrp2_attach.c | 4 +-
tools/build/feature/Makefile | 4 +
tools/build/feature/test-clang-bpf-atomics.c | 9 +
tools/include/linux/filter.h | 97 ++++++-
tools/include/uapi/linux/bpf.h | 8 +-
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 42 +++
.../selftests/bpf/prog_tests/atomics_test.c | 262 ++++++++++++++++++
.../bpf/prog_tests/cgroup_attach_multi.c | 4 +-
.../selftests/bpf/progs/atomics_test.c | 154 ++++++++++
.../selftests/bpf/verifier/atomic_and.c | 77 +++++
.../selftests/bpf/verifier/atomic_cmpxchg.c | 96 +++++++
.../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++
.../selftests/bpf/verifier/atomic_or.c | 77 +++++
.../selftests/bpf/verifier/atomic_xchg.c | 46 +++
.../selftests/bpf/verifier/atomic_xor.c | 77 +++++
tools/testing/selftests/bpf/verifier/ctx.c | 7 +-
.../testing/selftests/bpf/verifier/leak_ptr.c | 4 +-
tools/testing/selftests/bpf/verifier/unpriv.c | 3 +-
tools/testing/selftests/bpf/verifier/xadd.c | 2 +-
42 files changed, 1666 insertions(+), 186 deletions(-)
create mode 100644 tools/build/feature/test-clang-bpf-atomics.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c


base-commit: 97306be45fbe7a02461c3c2a57e666cf662b1aaf
--
2.29.2.454.gaff20da3a2-goog


2020-12-03 16:07:32

by Brendan Jackman

Subject: [PATCH bpf-next v3 01/14] bpf: x86: Factor out emission of ModR/M for *(reg + off)

The case for JITing atomics is about to get more complicated. Let's
factor out some common code to make the review and result more
readable.

NB the atomics code doesn't yet use the new helper - a subsequent
patch will add its use as a side-effect of other changes.

Signed-off-by: Brendan Jackman <[email protected]>
Change-Id: I1510c7eb0132ff9262fea92ce1839243b6d33372
---
arch/x86/net/bpf_jit_comp.c | 42 +++++++++++++++++++++----------------
1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 796506dcfc42..cc818ed7c2b9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -681,6 +681,27 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
*pprog = prog;
}

+/* Emit the suffix (ModR/M etc) for addressing *(ptr_reg + off) and val_reg */
+static void emit_insn_suffix(u8 **pprog, u32 ptr_reg, u32 val_reg, int off)
+{
+ u8 *prog = *pprog;
+ int cnt = 0;
+
+ if (is_imm8(off)) {
+ /* 1-byte signed displacement.
+ *
+ * If off == 0 we could skip this and save one extra byte, but
+ * special case of x86 R13 which always needs an offset is not
+ * worth the hassle
+ */
+ EMIT2(add_2reg(0x40, ptr_reg, val_reg), off);
+ } else {
+ /* 4-byte signed displacement */
+ EMIT1_off32(add_2reg(0x80, ptr_reg, val_reg), off);
+ }
+ *pprog = prog;
+}
+
/* LDX: dst_reg = *(u8*)(src_reg + off) */
static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
{
@@ -708,15 +729,7 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x8B);
break;
}
- /*
- * If insn->off == 0 we can save one extra byte, but
- * special case of x86 R13 which always needs an offset
- * is not worth the hassle
- */
- if (is_imm8(off))
- EMIT2(add_2reg(0x40, src_reg, dst_reg), off);
- else
- EMIT1_off32(add_2reg(0x80, src_reg, dst_reg), off);
+ emit_insn_suffix(&prog, src_reg, dst_reg, off);
*pprog = prog;
}

@@ -751,10 +764,7 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
EMIT2(add_2mod(0x48, dst_reg, src_reg), 0x89);
break;
}
- if (is_imm8(off))
- EMIT2(add_2reg(0x40, dst_reg, src_reg), off);
- else
- EMIT1_off32(add_2reg(0x80, dst_reg, src_reg), off);
+ emit_insn_suffix(&prog, dst_reg, src_reg, off);
*pprog = prog;
}

@@ -1240,11 +1250,7 @@ st: if (is_imm8(insn->off))
goto xadd;
case BPF_STX | BPF_XADD | BPF_DW:
EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
-xadd: if (is_imm8(insn->off))
- EMIT2(add_2reg(0x40, dst_reg, src_reg), insn->off);
- else
- EMIT1_off32(add_2reg(0x80, dst_reg, src_reg),
- insn->off);
+xadd: emit_insn_suffix(&prog, dst_reg, src_reg, insn->off);
break;

/* call */
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:07:50

by Brendan Jackman

Subject: [PATCH bpf-next v3 08/14] bpf: Add instructions for atomic_[cmp]xchg

This adds two atomic opcodes, both of which include the BPF_FETCH
flag. XCHG without the BPF_FETCH flag would naturally encode
atomic_set. This is not supported because it would be of limited
value to userspace (it doesn't imply any barriers). CMPXCHG without
BPF_FETCH would be an atomic compare-and-write. We don't have such
an operation in the kernel so it isn't provided to BPF either.

There are two significant design decisions made for the CMPXCHG
instruction:

- The operation fundamentally has three operands, but we only have
two register fields. Therefore the operand we compare against (the
kernel's API calls it 'old') is hard-coded to be R0. x86 has a
similar design (and A64 doesn't have this problem).

A potential alternative might be to encode the other operand's
register number in the immediate field.

- The kernel's atomic_cmpxchg returns the old value, while the C11
userspace APIs return a boolean indicating the comparison
result. Which should BPF do? A64 returns the old value. x86 returns
the old value in the hard-coded register (and also sets a
flag). That means return-old-value is easier to JIT.

Signed-off-by: Brendan Jackman <[email protected]>
Change-Id: I3f19ad867dfd08515eecf72674e6fdefe28424bb
---
arch/x86/net/bpf_jit_comp.c | 8 ++++++++
include/linux/filter.h | 20 ++++++++++++++++++++
include/uapi/linux/bpf.h | 4 +++-
kernel/bpf/core.c | 20 ++++++++++++++++++++
kernel/bpf/disasm.c | 15 +++++++++++++++
kernel/bpf/verifier.c | 19 +++++++++++++++++--
tools/include/linux/filter.h | 20 ++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 4 +++-
8 files changed, 106 insertions(+), 4 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 88cb09fa3bfb..7d29bc3bb4ff 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -831,6 +831,14 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
/* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
EMIT2(0x0F, 0xC1);
break;
+ case BPF_XCHG:
+ /* src_reg = atomic_xchg(*(u32/u64*)(dst_reg + off), src_reg); */
+ EMIT1(0x87);
+ break;
+ case BPF_CMPXCHG:
+ /* r0 = atomic_cmpxchg(*(u32/u64*)(dst_reg + off), r0, src_reg); */
+ EMIT2(0x0F, 0xB1);
+ break;
default:
pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
return -EFAULT;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 4e04d0fc454f..6186280715ed 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
.off = OFF, \
.imm = BPF_ADD | BPF_FETCH })

+/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
+
+#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_XCHG })
+
+/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */
+
+#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_CMPXCHG })
+
/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

#define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 025e377e7229..53334530cc81 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -45,7 +45,9 @@
#define BPF_EXIT 0x90 /* function return */

/* atomic op type fields (stored in immediate) */
-#define BPF_FETCH 0x01 /* fetch previous value into src reg */
+#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
+#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
+#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */

/* Register numbers */
enum {
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 61e93eb7d363..28f960bc2e30 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1630,6 +1630,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
(u32) SRC,
(atomic_t *)(unsigned long) (DST + insn->off));
break;
+ case BPF_XCHG:
+ SRC = (u32) atomic_xchg(
+ (atomic_t *)(unsigned long) (DST + insn->off),
+ (u32) SRC);
+ break;
+ case BPF_CMPXCHG:
+ BPF_R0 = (u32) atomic_cmpxchg(
+ (atomic_t *)(unsigned long) (DST + insn->off),
+ (u32) BPF_R0, (u32) SRC);
+ break;
default:
goto default_label;
}
@@ -1647,6 +1657,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
(u64) SRC,
(atomic64_t *)(s64) (DST + insn->off));
break;
+ case BPF_XCHG:
+ SRC = (u64) atomic64_xchg(
+ (atomic64_t *)(u64) (DST + insn->off),
+ (u64) SRC);
+ break;
+ case BPF_CMPXCHG:
+ BPF_R0 = (u64) atomic64_cmpxchg(
+ (atomic64_t *)(u64) (DST + insn->off),
+ (u64) BPF_R0, (u64) SRC);
+ break;
default:
goto default_label;
}
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 3ee2246a52ef..18357ea9a17d 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -167,6 +167,21 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
insn->dst_reg, insn->off, insn->src_reg);
+ } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+ insn->imm == BPF_CMPXCHG) {
+ verbose(cbs->private_data, "(%02x) r0 = atomic%s_cmpxchg(*(%s *)(r%d %+d), r0, r%d)\n",
+ insn->code,
+ BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->dst_reg, insn->off,
+ insn->src_reg);
+ } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+ insn->imm == BPF_XCHG) {
+ verbose(cbs->private_data, "(%02x) r%d = atomic%s_xchg(*(%s *)(r%d %+d), r%d)\n",
+ insn->code, insn->src_reg,
+ BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->dst_reg, insn->off, insn->src_reg);
} else {
verbose(cbs->private_data, "BUG_%02x\n", insn->code);
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a68adbcee370..ccf4315e54e7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3601,10 +3601,13 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
{
int err;
+ int load_reg;

switch (insn->imm) {
case BPF_ADD:
case BPF_ADD | BPF_FETCH:
+ case BPF_XCHG:
+ case BPF_CMPXCHG:
break;
default:
verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
@@ -3626,6 +3629,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
if (err)
return err;

+ if (insn->imm == BPF_CMPXCHG) {
+ /* Check comparison of R0 with memory location */
+ err = check_reg_arg(env, BPF_REG_0, SRC_OP);
+ if (err)
+ return err;
+ }
+
if (is_pointer_value(env, insn->src_reg)) {
verbose(env, "R%d leaks addr into mem\n", insn->src_reg);
return -EACCES;
@@ -3656,8 +3666,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
if (!(insn->imm & BPF_FETCH))
return 0;

- /* check and record load of old value into src reg */
- err = check_reg_arg(env, insn->src_reg, DST_OP);
+ if (insn->imm == BPF_CMPXCHG)
+ load_reg = BPF_REG_0;
+ else
+ load_reg = insn->src_reg;
+
+ /* check and record load of old value */
+ err = check_reg_arg(env, load_reg, DST_OP);
if (err)
return err;

diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index ac7701678e1a..ea99bd17d003 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -190,6 +190,26 @@
.off = OFF, \
.imm = BPF_ADD | BPF_FETCH })

+/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
+
+#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_XCHG })
+
+/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */
+
+#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_CMPXCHG })
+
/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

#define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 025e377e7229..53334530cc81 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -45,7 +45,9 @@
#define BPF_EXIT 0x90 /* function return */

/* atomic op type fields (stored in immediate) */
-#define BPF_FETCH 0x01 /* fetch previous value into src reg */
+#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
+#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
+#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */

/* Register numbers */
enum {
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:07:58

by Brendan Jackman

Subject: [PATCH bpf-next v3 05/14] bpf: Rename BPF_XADD and prepare to encode other atomics in .imm

A subsequent patch will add additional atomic operations. These new
operations will use the same opcode field as the existing XADD, with
the immediate discriminating different operations.

In preparation, rename the instruction mode to BPF_ATOMIC and start
calling the zero immediate BPF_ADD.

This is possible (doesn't break existing valid BPF progs) because the
immediate field is currently reserved MBZ and BPF_ADD is zero.

All uses are removed from the tree but the BPF_XADD definition is
kept around to avoid breaking builds for people including kernel
headers.

Signed-off-by: Brendan Jackman <[email protected]>
Change-Id: Ib78f54acba37f7196cbf6c35ffa1c40805cb0d87
---
Documentation/networking/filter.rst | 30 +++++++-----
arch/arm/net/bpf_jit_32.c | 7 ++-
arch/arm64/net/bpf_jit_comp.c | 16 +++++--
arch/mips/net/ebpf_jit.c | 11 +++--
arch/powerpc/net/bpf_jit_comp64.c | 25 ++++++++--
arch/riscv/net/bpf_jit_comp32.c | 20 ++++++--
arch/riscv/net/bpf_jit_comp64.c | 16 +++++--
arch/s390/net/bpf_jit_comp.c | 27 ++++++-----
arch/sparc/net/bpf_jit_comp_64.c | 17 +++++--
arch/x86/net/bpf_jit_comp.c | 46 ++++++++++++++-----
arch/x86/net/bpf_jit_comp32.c | 6 +--
drivers/net/ethernet/netronome/nfp/bpf/jit.c | 14 ++++--
drivers/net/ethernet/netronome/nfp/bpf/main.h | 4 +-
.../net/ethernet/netronome/nfp/bpf/verifier.c | 15 ++++--
include/linux/filter.h | 8 ++--
include/uapi/linux/bpf.h | 3 +-
kernel/bpf/core.c | 31 +++++++++----
kernel/bpf/disasm.c | 6 ++-
kernel/bpf/verifier.c | 24 ++++++----
lib/test_bpf.c | 2 +-
samples/bpf/bpf_insn.h | 4 +-
samples/bpf/sock_example.c | 2 +-
samples/bpf/test_cgrp2_attach.c | 4 +-
tools/include/linux/filter.h | 7 +--
tools/include/uapi/linux/bpf.h | 3 +-
.../bpf/prog_tests/cgroup_attach_multi.c | 4 +-
tools/testing/selftests/bpf/verifier/ctx.c | 7 ++-
.../testing/selftests/bpf/verifier/leak_ptr.c | 4 +-
tools/testing/selftests/bpf/verifier/unpriv.c | 3 +-
tools/testing/selftests/bpf/verifier/xadd.c | 2 +-
30 files changed, 248 insertions(+), 120 deletions(-)

diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index debb59e374de..1583d59d806d 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -1006,13 +1006,13 @@ Size modifier is one of ...

Mode modifier is one of::

- BPF_IMM 0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
- BPF_ABS 0x20
- BPF_IND 0x40
- BPF_MEM 0x60
- BPF_LEN 0x80 /* classic BPF only, reserved in eBPF */
- BPF_MSH 0xa0 /* classic BPF only, reserved in eBPF */
- BPF_XADD 0xc0 /* eBPF only, exclusive add */
+ BPF_IMM 0x00 /* used for 32-bit mov in classic BPF and 64-bit in eBPF */
+ BPF_ABS 0x20
+ BPF_IND 0x40
+ BPF_MEM 0x60
+ BPF_LEN 0x80 /* classic BPF only, reserved in eBPF */
+ BPF_MSH 0xa0 /* classic BPF only, reserved in eBPF */
+ BPF_ATOMIC 0xc0 /* eBPF only, atomic operations */

eBPF has two non-generic instructions: (BPF_ABS | <size> | BPF_LD) and
(BPF_IND | <size> | BPF_LD) which are used to access packet data.
@@ -1044,11 +1044,19 @@ Unlike classic BPF instruction set, eBPF has generic load/store operations::
BPF_MEM | <size> | BPF_STX: *(size *) (dst_reg + off) = src_reg
BPF_MEM | <size> | BPF_ST: *(size *) (dst_reg + off) = imm32
BPF_MEM | <size> | BPF_LDX: dst_reg = *(size *) (src_reg + off)
- BPF_XADD | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
- BPF_XADD | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg

-Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW. Note that 1 and
-2 byte atomic increments are not supported.
+Where size is one of: BPF_B or BPF_H or BPF_W or BPF_DW.
+
+It also includes atomic operations, which use the immediate field for extra
+encoding.
+
+ .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
+ .imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg
+
+Note that 1 and 2 byte atomic operations are not supported.
+
+You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to
+the exclusive-add operation encoded when the immediate field is zero.

eBPF has one 16-byte instruction: BPF_LD | BPF_DW | BPF_IMM which consists
of two consecutive ``struct bpf_insn`` 8-byte blocks and interpreted as single
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 0207b6ea6e8a..897634d0a67c 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1620,10 +1620,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
}
emit_str_r(dst_lo, tmp2, off, ctx, BPF_SIZE(code));
break;
- /* STX XADD: lock *(u32 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_W:
- /* STX XADD: lock *(u64 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_DW:
+ /* Atomic ops */
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
goto notyet;
/* STX: *(size *)(dst + off) = src */
case BPF_STX | BPF_MEM | BPF_W:
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index ef9f1d5e989d..f7b194878a99 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -875,10 +875,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
}
break;

- /* STX XADD: lock *(u32 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_W:
- /* STX XADD: lock *(u64 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_DW:
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
+ if (insn->imm != BPF_ADD) {
+ pr_err_once("unknown atomic op code %02x\n", insn->imm);
+ return -EINVAL;
+ }
+
+ /* STX XADD: lock *(u32 *)(dst + off) += src
+ * and
+ * STX XADD: lock *(u64 *)(dst + off) += src
+ */
+
if (!off) {
reg = dst;
} else {
diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 561154cbcc40..939dd06764bc 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -1423,8 +1423,8 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
case BPF_STX | BPF_H | BPF_MEM:
case BPF_STX | BPF_W | BPF_MEM:
case BPF_STX | BPF_DW | BPF_MEM:
- case BPF_STX | BPF_W | BPF_XADD:
- case BPF_STX | BPF_DW | BPF_XADD:
+ case BPF_STX | BPF_W | BPF_ATOMIC:
+ case BPF_STX | BPF_DW | BPF_ATOMIC:
if (insn->dst_reg == BPF_REG_10) {
ctx->flags |= EBPF_SEEN_FP;
dst = MIPS_R_SP;
@@ -1438,7 +1438,12 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
src = ebpf_to_mips_reg(ctx, insn, src_reg_no_fp);
if (src < 0)
return src;
- if (BPF_MODE(insn->code) == BPF_XADD) {
+ if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+ if (insn->imm != BPF_ADD) {
+ pr_err("ATOMIC OP %02x NOT HANDLED\n", insn->imm);
+ return -EINVAL;
+ }
+
/*
* If mem_off does not fit within the 9 bit ll/sc
* instruction immediate field, use a temp reg.
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 022103c6a201..aaf1a887f653 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -683,10 +683,18 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
break;

/*
- * BPF_STX XADD (atomic_add)
+ * BPF_STX ATOMIC (atomic ops)
*/
- /* *(u32 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_W:
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ if (insn->imm != BPF_ADD) {
+ pr_err_ratelimited(
+ "eBPF filter atomic op code %02x (@%d) unsupported\n",
+ code, i);
+ return -ENOTSUPP;
+ }
+
+ /* *(u32 *)(dst + off) += src */
+
/* Get EA into TMP_REG_1 */
EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
tmp_idx = ctx->idx * 4;
@@ -699,8 +707,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
/* we're done if this succeeded */
PPC_BCC_SHORT(COND_NE, tmp_idx);
break;
- /* *(u64 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_DW:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
+ if (insn->imm != BPF_ADD) {
+ pr_err_ratelimited(
+ "eBPF filter atomic op code %02x (@%d) unsupported\n",
+ code, i);
+ return -ENOTSUPP;
+ }
+ /* *(u64 *)(dst + off) += src */
+
EMIT(PPC_RAW_ADDI(b2p[TMP_REG_1], dst_reg, off));
tmp_idx = ctx->idx * 4;
EMIT(PPC_RAW_LDARX(b2p[TMP_REG_2], 0, b2p[TMP_REG_1], 0));
diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
index 579575f9cdae..a9ef808b235f 100644
--- a/arch/riscv/net/bpf_jit_comp32.c
+++ b/arch/riscv/net/bpf_jit_comp32.c
@@ -881,7 +881,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
const s8 *rd = bpf_get_reg64(dst, tmp1, ctx);
const s8 *rs = bpf_get_reg64(src, tmp2, ctx);

- if (mode == BPF_XADD && size != BPF_W)
+ if (mode == BPF_ATOMIC && (size != BPF_W || imm != BPF_ADD))
return -1;

emit_imm(RV_REG_T0, off, ctx);
@@ -899,7 +899,7 @@ static int emit_store_r64(const s8 *dst, const s8 *src, s16 off,
case BPF_MEM:
emit(rv_sw(RV_REG_T0, 0, lo(rs)), ctx);
break;
- case BPF_XADD:
+ case BPF_ATOMIC: /* .imm checked above - only BPF_ADD allowed */
emit(rv_amoadd_w(RV_REG_ZERO, lo(rs), RV_REG_T0, 0, 0),
ctx);
break;
@@ -1260,7 +1260,6 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
case BPF_STX | BPF_MEM | BPF_H:
case BPF_STX | BPF_MEM | BPF_W:
case BPF_STX | BPF_MEM | BPF_DW:
- case BPF_STX | BPF_XADD | BPF_W:
if (BPF_CLASS(code) == BPF_ST) {
emit_imm32(tmp2, imm, ctx);
src = tmp2;
@@ -1271,8 +1270,21 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
return -1;
break;

+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ if (insn->imm != BPF_ADD) {
+ pr_info_once(
+ "bpf-jit: not supported: atomic operation %02x ***\n",
+ insn->imm);
+ return -EFAULT;
+ }
+
+ if (emit_store_r64(dst, src, off, ctx, BPF_SIZE(code),
+ BPF_MODE(code)))
+ return -1;
+ break;
+
/* No hardware support for 8-byte atomics in RV32. */
- case BPF_STX | BPF_XADD | BPF_DW:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
/* Fallthrough. */

notsupported:
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 8a56b5293117..b44ff52f84a6 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -1027,10 +1027,18 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
emit_add(RV_REG_T1, RV_REG_T1, rd, ctx);
emit_sd(RV_REG_T1, 0, rs, ctx);
break;
- /* STX XADD: lock *(u32 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_W:
- /* STX XADD: lock *(u64 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_DW:
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
+ if (insn->imm != BPF_ADD) {
+ pr_err("bpf-jit: not supported: atomic operation %02x ***\n",
+ insn->imm);
+ return -EINVAL;
+ }
+
+ /* atomic_add: lock *(u32 *)(dst + off) += src
+ * atomic_add: lock *(u64 *)(dst + off) += src
+ */
+
if (off) {
if (is_12b_int(off)) {
emit_addi(RV_REG_T1, rd, off, ctx);
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 0a4182792876..f973e2ead197 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -1205,18 +1205,23 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
jit->seen |= SEEN_MEM;
break;
/*
- * BPF_STX XADD (atomic_add)
+ * BPF_ATOMIC
*/
- case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
- /* laal %w0,%src,off(%dst) */
- EMIT6_DISP_LH(0xeb000000, 0x00fa, REG_W0, src_reg,
- dst_reg, off);
- jit->seen |= SEEN_MEM;
- break;
- case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
- /* laalg %w0,%src,off(%dst) */
- EMIT6_DISP_LH(0xeb000000, 0x00ea, REG_W0, src_reg,
- dst_reg, off);
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ if (insn->imm != BPF_ADD) {
+ pr_err("Unknown atomic operation %02x\n", insn->imm);
+ return -1;
+ }
+
+ /* *(u32/u64 *)(dst + off) += src
+ *
+ * BPF_W: laal %w0,%src,off(%dst)
+ * BPF_DW: laalg %w0,%src,off(%dst)
+ */
+ EMIT6_DISP_LH(0xeb000000,
+ BPF_SIZE(insn->code) == BPF_W ? 0x00fa : 0x00ea,
+ REG_W0, src_reg, dst_reg, off);
jit->seen |= SEEN_MEM;
break;
/*
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 3364e2a00989..4b8d3c65d266 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1366,12 +1366,18 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
break;
}

- /* STX XADD: lock *(u32 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_W: {
+ case BPF_STX | BPF_ATOMIC | BPF_W: {
const u8 tmp = bpf2sparc[TMP_REG_1];
const u8 tmp2 = bpf2sparc[TMP_REG_2];
const u8 tmp3 = bpf2sparc[TMP_REG_3];

+ if (insn->imm != BPF_ADD) {
+ pr_err_once("unknown atomic op %02x\n", insn->imm);
+ return -EINVAL;
+ }
+
+ /* lock *(u32 *)(dst + off) += src */
+
if (insn->dst_reg == BPF_REG_FP)
ctx->saw_frame_pointer = true;

@@ -1390,11 +1396,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
break;
}
/* STX XADD: lock *(u64 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_DW: {
+ case BPF_STX | BPF_ATOMIC | BPF_DW: {
const u8 tmp = bpf2sparc[TMP_REG_1];
const u8 tmp2 = bpf2sparc[TMP_REG_2];
const u8 tmp3 = bpf2sparc[TMP_REG_3];

+ if (insn->imm != BPF_ADD) {
+ pr_err_once("unknown atomic op %02x\n", insn->imm);
+ return -EINVAL;
+ }
+
if (insn->dst_reg == BPF_REG_FP)
ctx->saw_frame_pointer = true;

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ee7905051ee9..5e5a132b3d52 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -811,6 +811,34 @@ static void emit_neg(u8 **pprog, u32 reg, bool is64)
*pprog = prog;
}

+static int emit_atomic(u8 **pprog, u8 atomic_op,
+ u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size)
+{
+ u8 *prog = *pprog;
+ int cnt = 0;
+
+ EMIT1(0xF0); /* lock prefix */
+
+ maybe_emit_mod(&prog, dst_reg, src_reg, bpf_size == BPF_DW);
+
+ /* emit opcode */
+ switch (atomic_op) {
+ case BPF_ADD:
+ /* lock *(u32/u64*)(dst_reg + off) <op>= src_reg */
+ EMIT1(simple_alu_opcodes[atomic_op]);
+ break;
+ default:
+ pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
+ return -EFAULT;
+ }
+
+ emit_insn_suffix(&prog, dst_reg, src_reg, off);
+
+ *pprog = prog;
+ return 0;
+}
+
+
static bool ex_handler_bpf(const struct exception_table_entry *x,
struct pt_regs *regs, int trapnr,
unsigned long error_code, unsigned long fault_addr)
@@ -855,6 +883,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
int i, cnt = 0, excnt = 0;
int proglen = 0;
u8 *prog = temp;
+ int err;

detect_reg_usage(insn, insn_cnt, callee_regs_used,
&tail_call_seen);
@@ -1263,17 +1292,12 @@ st: if (is_imm8(insn->off))
}
break;

- /* STX XADD: lock *(u32*)(dst_reg + off) += src_reg */
- case BPF_STX | BPF_XADD | BPF_W:
- /* Emit 'lock add dword ptr [rax + off], eax' */
- if (is_ereg(dst_reg) || is_ereg(src_reg))
- EMIT3(0xF0, add_2mod(0x40, dst_reg, src_reg), 0x01);
- else
- EMIT2(0xF0, 0x01);
- goto xadd;
- case BPF_STX | BPF_XADD | BPF_DW:
- EMIT3(0xF0, add_2mod(0x48, dst_reg, src_reg), 0x01);
-xadd: emit_modrm_dstoff(&prog, dst_reg, src_reg, insn->off);
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
+ err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
+ insn->off, BPF_SIZE(insn->code));
+ if (err)
+ return err;
break;

/* call */
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 96fde03aa987..d17b67c69f89 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2243,10 +2243,8 @@ emit_cond_jmp: jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
return -EFAULT;
}
break;
- /* STX XADD: lock *(u32 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_W:
- /* STX XADD: lock *(u64 *)(dst + off) += src */
- case BPF_STX | BPF_XADD | BPF_DW:
+ case BPF_STX | BPF_ATOMIC | BPF_W:
+ case BPF_STX | BPF_ATOMIC | BPF_DW:
goto notyet;
case BPF_JMP | BPF_EXIT:
if (seen_exit) {
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index 0a721f6e8676..e31f8fbbc696 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -3109,13 +3109,19 @@ mem_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, bool is64)
return 0;
}

-static int mem_xadd4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+static int mem_atomic4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
+ if (meta->insn.imm != BPF_ADD)
+ return -EOPNOTSUPP;
+
return mem_xadd(nfp_prog, meta, false);
}

-static int mem_xadd8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+static int mem_atomic8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
{
+ if (meta->insn.imm != BPF_ADD)
+ return -EOPNOTSUPP;
+
return mem_xadd(nfp_prog, meta, true);
}

@@ -3475,8 +3481,8 @@ static const instr_cb_t instr_cb[256] = {
[BPF_STX | BPF_MEM | BPF_H] = mem_stx2,
[BPF_STX | BPF_MEM | BPF_W] = mem_stx4,
[BPF_STX | BPF_MEM | BPF_DW] = mem_stx8,
- [BPF_STX | BPF_XADD | BPF_W] = mem_xadd4,
- [BPF_STX | BPF_XADD | BPF_DW] = mem_xadd8,
+ [BPF_STX | BPF_ATOMIC | BPF_W] = mem_atomic4,
+ [BPF_STX | BPF_ATOMIC | BPF_DW] = mem_atomic8,
[BPF_ST | BPF_MEM | BPF_B] = mem_st1,
[BPF_ST | BPF_MEM | BPF_H] = mem_st2,
[BPF_ST | BPF_MEM | BPF_W] = mem_st4,
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index fac9c6f9e197..d0e17eebddd9 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -428,9 +428,9 @@ static inline bool is_mbpf_classic_store_pkt(const struct nfp_insn_meta *meta)
return is_mbpf_classic_store(meta) && meta->ptr.type == PTR_TO_PACKET;
}

-static inline bool is_mbpf_xadd(const struct nfp_insn_meta *meta)
+static inline bool is_mbpf_atomic(const struct nfp_insn_meta *meta)
{
- return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_XADD);
+ return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_ATOMIC);
}

static inline bool is_mbpf_mul(const struct nfp_insn_meta *meta)
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index e92ee510fd52..9d235c0ce46a 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -479,7 +479,7 @@ nfp_bpf_check_ptr(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
pr_vlog(env, "map writes not supported\n");
return -EOPNOTSUPP;
}
- if (is_mbpf_xadd(meta)) {
+ if (is_mbpf_atomic(meta)) {
err = nfp_bpf_map_mark_used(env, meta, reg,
NFP_MAP_USE_ATOMIC_CNT);
if (err)
@@ -523,12 +523,17 @@ nfp_bpf_check_store(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
}

static int
-nfp_bpf_check_xadd(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
- struct bpf_verifier_env *env)
+nfp_bpf_check_atomic(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+ struct bpf_verifier_env *env)
{
const struct bpf_reg_state *sreg = cur_regs(env) + meta->insn.src_reg;
const struct bpf_reg_state *dreg = cur_regs(env) + meta->insn.dst_reg;

+ if (meta->insn.imm != BPF_ADD) {
+ pr_vlog(env, "atomic op not implemented: %d\n", meta->insn.imm);
+ return -EOPNOTSUPP;
+ }
+
if (dreg->type != PTR_TO_MAP_VALUE) {
pr_vlog(env, "atomic add not to a map value pointer: %d\n",
dreg->type);
@@ -655,8 +660,8 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
if (is_mbpf_store(meta))
return nfp_bpf_check_store(nfp_prog, meta, env);

- if (is_mbpf_xadd(meta))
- return nfp_bpf_check_xadd(nfp_prog, meta, env);
+ if (is_mbpf_atomic(meta))
+ return nfp_bpf_check_atomic(nfp_prog, meta, env);

if (is_mbpf_alu(meta))
return nfp_bpf_check_alu(nfp_prog, meta, env);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1b62397bd124..ce19988fb312 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -261,13 +261,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)

/* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */

-#define BPF_STX_XADD(SIZE, DST, SRC, OFF) \
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF) \
((struct bpf_insn) { \
- .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
- .imm = 0 })
+ .imm = BPF_ADD })
+#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
+

/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c3458ec1f30a..d0adc48db43c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -19,7 +19,8 @@

/* ld/ldx fields */
#define BPF_DW 0x18 /* double word (64-bit) */
-#define BPF_XADD 0xc0 /* exclusive add */
+#define BPF_ATOMIC 0xc0 /* atomic memory ops - op type in immediate */
+#define BPF_XADD 0xc0 /* exclusive add - legacy name */

/* alu/jmp fields */
#define BPF_MOV 0xb0 /* mov reg to reg */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 261f8692d0d2..3abc6b250b18 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1309,8 +1309,8 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
INSN_3(STX, MEM, H), \
INSN_3(STX, MEM, W), \
INSN_3(STX, MEM, DW), \
- INSN_3(STX, XADD, W), \
- INSN_3(STX, XADD, DW), \
+ INSN_3(STX, ATOMIC, W), \
+ INSN_3(STX, ATOMIC, DW), \
/* Immediate based. */ \
INSN_3(ST, MEM, B), \
INSN_3(ST, MEM, H), \
@@ -1618,13 +1618,27 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
LDX_PROBE(DW, 8)
#undef LDX_PROBE

- STX_XADD_W: /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
- atomic_add((u32) SRC, (atomic_t *)(unsigned long)
- (DST + insn->off));
+ STX_ATOMIC_W:
+ switch (IMM) {
+ case BPF_ADD:
+ /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
+ atomic_add((u32) SRC, (atomic_t *)(unsigned long)
+ (DST + insn->off));
+ break;
+ default:
+ goto default_label;
+ }
CONT;
- STX_XADD_DW: /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
- atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
- (DST + insn->off));
+ STX_ATOMIC_DW:
+ switch (IMM) {
+ case BPF_ADD:
+ /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
+ atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
+ (DST + insn->off));
+ break;
+ default:
+ goto default_label;
+ }
CONT;

default_label:
@@ -1634,7 +1646,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
*
* Note, verifier whitelists all opcodes in bpf_opcode_in_insntable().
*/
- pr_warn("BPF interpreter: unknown opcode %02x\n", insn->code);
+ pr_warn("BPF interpreter: unknown opcode %02x (imm: 0x%x)\n",
+ insn->code, insn->imm);
BUG_ON(1);
return 0;
}
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index b44d8c447afd..37c8d6e9b4cc 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -153,14 +153,16 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
insn->dst_reg,
insn->off, insn->src_reg);
- else if (BPF_MODE(insn->code) == BPF_XADD)
+ else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+ insn->imm == BPF_ADD) {
verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n",
insn->code,
bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
insn->dst_reg, insn->off,
insn->src_reg);
- else
+ } else {
verbose(cbs->private_data, "BUG_%02x\n", insn->code);
+ }
} else if (class == BPF_ST) {
if (BPF_MODE(insn->code) != BPF_MEM) {
verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e333ce43f281..1947da617b03 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3598,13 +3598,17 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return err;
}

-static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
+static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
{
int err;

- if ((BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) ||
- insn->imm != 0) {
- verbose(env, "BPF_XADD uses reserved fields\n");
+ if (insn->imm != BPF_ADD) {
+ verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
+ return -EINVAL;
+ }
+
+ if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) {
+ verbose(env, "invalid atomic operand size\n");
return -EINVAL;
}

@@ -3627,19 +3631,19 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins
is_pkt_reg(env, insn->dst_reg) ||
is_flow_key_reg(env, insn->dst_reg) ||
is_sk_reg(env, insn->dst_reg)) {
- verbose(env, "BPF_XADD stores into R%d %s is not allowed\n",
+ verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
insn->dst_reg,
reg_type_str[reg_state(env, insn->dst_reg)->type]);
return -EACCES;
}

- /* check whether atomic_add can read the memory */
+ /* check whether we can read the memory */
err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_READ, -1, true);
if (err)
return err;

- /* check whether atomic_add can write into the same memory */
+ /* check whether we can write into the same memory */
return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_WRITE, -1, true);
}
@@ -9497,8 +9501,8 @@ static int do_check(struct bpf_verifier_env *env)
} else if (class == BPF_STX) {
enum bpf_reg_type *prev_dst_type, dst_reg_type;

- if (BPF_MODE(insn->code) == BPF_XADD) {
- err = check_xadd(env, env->insn_idx, insn);
+ if (BPF_MODE(insn->code) == BPF_ATOMIC) {
+ err = check_atomic(env, env->insn_idx, insn);
if (err)
return err;
env->insn_idx++;
@@ -9908,7 +9912,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)

if (BPF_CLASS(insn->code) == BPF_STX &&
((BPF_MODE(insn->code) != BPF_MEM &&
- BPF_MODE(insn->code) != BPF_XADD) || insn->imm != 0)) {
+ BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
verbose(env, "BPF_STX uses reserved fields\n");
return -EINVAL;
}
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..fbb13ef9207c 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -4295,7 +4295,7 @@ static struct bpf_test tests[] = {
{ { 0, 0xffffffff } },
.stack_depth = 40,
},
- /* BPF_STX | BPF_XADD | BPF_W/DW */
+ /* BPF_STX | BPF_ATOMIC | BPF_W/DW */
{
"STX_XADD_W: Test: 0x12 + 0x10 = 0x22",
.u.insns_int = {
diff --git a/samples/bpf/bpf_insn.h b/samples/bpf/bpf_insn.h
index 544237980582..db67a2847395 100644
--- a/samples/bpf/bpf_insn.h
+++ b/samples/bpf/bpf_insn.h
@@ -138,11 +138,11 @@ struct bpf_insn;

#define BPF_STX_XADD(SIZE, DST, SRC, OFF) \
((struct bpf_insn) { \
- .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
- .imm = 0 })
+ .imm = BPF_ADD })

/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c
index 00aae1d33fca..b18fa8083137 100644
--- a/samples/bpf/sock_example.c
+++ b/samples/bpf/sock_example.c
@@ -54,7 +54,7 @@ static int test_sock(void)
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+ BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
BPF_EXIT_INSN(),
};
diff --git a/samples/bpf/test_cgrp2_attach.c b/samples/bpf/test_cgrp2_attach.c
index 20fbd1241db3..61896c4f9322 100644
--- a/samples/bpf/test_cgrp2_attach.c
+++ b/samples/bpf/test_cgrp2_attach.c
@@ -53,7 +53,7 @@ static int prog_load(int map_fd, int verdict)
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+ BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),

/* Count bytes */
BPF_MOV64_IMM(BPF_REG_0, MAP_KEY_BYTES), /* r0 = 1 */
@@ -64,7 +64,7 @@ static int prog_load(int map_fd, int verdict)
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct __sk_buff, len)), /* r1 = skb->len */
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+ BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),

BPF_MOV64_IMM(BPF_REG_0, verdict), /* r0 = verdict */
BPF_EXIT_INSN(),
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index ca28b6ab8db7..95ff51d97f25 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -171,13 +171,14 @@

/* Atomic memory add, *(uint *)(dst_reg + off16) += src_reg */

-#define BPF_STX_XADD(SIZE, DST, SRC, OFF) \
+#define BPF_ATOMIC_ADD(SIZE, DST, SRC, OFF) \
((struct bpf_insn) { \
- .code = BPF_STX | BPF_SIZE(SIZE) | BPF_XADD, \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
- .imm = 0 })
+ .imm = BPF_ADD })
+#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */

/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index c3458ec1f30a..d0adc48db43c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -19,7 +19,8 @@

/* ld/ldx fields */
#define BPF_DW 0x18 /* double word (64-bit) */
-#define BPF_XADD 0xc0 /* exclusive add */
+#define BPF_ATOMIC 0xc0 /* atomic memory ops - op type in immediate */
+#define BPF_XADD 0xc0 /* exclusive add - legacy name */

/* alu/jmp fields */
#define BPF_MOV 0xb0 /* mov reg to reg */
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
index b549fcfacc0b..882fce827c81 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
@@ -45,13 +45,13 @@ static int prog_load_cnt(int verdict, int val)
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
BPF_MOV64_IMM(BPF_REG_1, val), /* r1 = 1 */
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* xadd r0 += r1 */
+ BPF_ATOMIC_ADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0),

BPF_LD_MAP_FD(BPF_REG_1, cgroup_storage_fd),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
BPF_MOV64_IMM(BPF_REG_1, val),
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_0, BPF_REG_1, 0, 0),
+ BPF_ATOMIC_ADD(BPF_W, BPF_REG_0, BPF_REG_1, 0),

BPF_LD_MAP_FD(BPF_REG_1, percpu_cgroup_storage_fd),
BPF_MOV64_IMM(BPF_REG_2, 0),
diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c
index 93d6b1641481..ede3842d123b 100644
--- a/tools/testing/selftests/bpf/verifier/ctx.c
+++ b/tools/testing/selftests/bpf/verifier/ctx.c
@@ -10,14 +10,13 @@
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
- "context stores via XADD",
+ "context stores via BPF_ATOMIC",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_W, BPF_REG_1,
- BPF_REG_0, offsetof(struct __sk_buff, mark), 0),
+ BPF_ATOMIC_ADD(BPF_W, BPF_REG_1, BPF_REG_0, offsetof(struct __sk_buff, mark)),
BPF_EXIT_INSN(),
},
- .errstr = "BPF_XADD stores into R1 ctx is not allowed",
+ .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c
index d6eec17f2cd2..f9a594b48fb3 100644
--- a/tools/testing/selftests/bpf/verifier/leak_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/leak_ptr.c
@@ -13,7 +13,7 @@
.errstr_unpriv = "R2 leaks addr into mem",
.result_unpriv = REJECT,
.result = REJECT,
- .errstr = "BPF_XADD stores into R1 ctx is not allowed",
+ .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
},
{
"leak pointer into ctx 2",
@@ -28,7 +28,7 @@
.errstr_unpriv = "R10 leaks addr into mem",
.result_unpriv = REJECT,
.result = REJECT,
- .errstr = "BPF_XADD stores into R1 ctx is not allowed",
+ .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed",
},
{
"leak pointer into ctx 3",
diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
index 91bb77c24a2e..85b5e8b70e5d 100644
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ b/tools/testing/selftests/bpf/verifier/unpriv.c
@@ -206,7 +206,8 @@
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
BPF_MOV64_IMM(BPF_REG_0, 1),
- BPF_RAW_INSN(BPF_STX | BPF_XADD | BPF_DW, BPF_REG_10, BPF_REG_0, -8, 0),
+ BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
+ BPF_REG_10, BPF_REG_0, -8, BPF_ADD),
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c
index c5de2e62cc8b..70a320505bf2 100644
--- a/tools/testing/selftests/bpf/verifier/xadd.c
+++ b/tools/testing/selftests/bpf/verifier/xadd.c
@@ -51,7 +51,7 @@
BPF_EXIT_INSN(),
},
.result = REJECT,
- .errstr = "BPF_XADD stores into R2 pkt is not allowed",
+ .errstr = "BPF_ATOMIC stores into R2 pkt is not allowed",
.prog_type = BPF_PROG_TYPE_XDP,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:08:10

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 03/14] bpf: x86: Factor out function to emit NEG

There's currently only one usage of this, but the implementation of
atomic_sub will add another.
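For context, the encoding being factored out is the standard x86 NEG r/m{32,64} form: an optional REX prefix, opcode 0xF7, and /3 in the ModRM byte. Below is a rough userspace sketch of the two byte-building helpers the patch relies on; model_add_1mod()/model_add_1reg() are simplified stand-ins for the kernel's add_1mod()/add_1reg() (is_ereg() is approximated here as "hardware register >= 8", which is not the kernel's exact BPF-to-x86 register mapping):

```c
/* Simplified models of the JIT's byte helpers, for illustration only */
unsigned char model_add_1mod(unsigned char rex, int hwreg)
{
	/* set REX.B when the operand register is one of r8-r15 */
	return hwreg >= 8 ? rex | 0x01 : rex;
}

unsigned char model_add_1reg(unsigned char modrm, int hwreg)
{
	/* fold the low three register bits into the ModRM byte */
	return modrm | (hwreg & 7);
}
```

So `neg rax` comes out as 48 F7 D8 and `neg r9` as 49 F7 D9, which is the byte sequence emit_neg() produces for 64-bit operands.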

Change-Id: Ia56743ec26ff5e7bcde8ae94fa17fef92d418d2b
Signed-off-by: Brendan Jackman <[email protected]>
---
arch/x86/net/bpf_jit_comp.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7106cfd10ba6..171ce539f6b9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -783,6 +783,22 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
*pprog = prog;
}

+
+static void emit_neg(u8 **pprog, u32 reg, bool is64)
+{
+ u8 *prog = *pprog;
+ int cnt = 0;
+
+ /* Emit REX byte if necessary */
+ if (is64)
+ EMIT1(add_1mod(0x48, reg));
+ else if (is_ereg(reg))
+ EMIT1(add_1mod(0x40, reg));
+
+ EMIT2(0xF7, add_1reg(0xD8, reg)); /* x86 NEG */
+ *pprog = prog;
+}
+
static bool ex_handler_bpf(const struct exception_table_entry *x,
struct pt_regs *regs, int trapnr,
unsigned long error_code, unsigned long fault_addr)
@@ -884,11 +900,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
/* neg dst */
case BPF_ALU | BPF_NEG:
case BPF_ALU64 | BPF_NEG:
- if (BPF_CLASS(insn->code) == BPF_ALU64)
- EMIT1(add_1mod(0x48, dst_reg));
- else if (is_ereg(dst_reg))
- EMIT1(add_1mod(0x40, dst_reg));
- EMIT2(0xF7, add_1reg(0xD8, dst_reg));
+ emit_neg(&prog, dst_reg,
+ BPF_CLASS(insn->code) == BPF_ALU64);
break;

case BPF_ALU | BPF_ADD | BPF_K:
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:08:29

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 06/14] bpf: Move BPF_STX reserved field check into BPF_STX verifier code

I can't find a reason why this code is in resolve_pseudo_ldimm64;
since I'll be modifying it in a subsequent commit, tidy it up.

Change-Id: I3410469270f4889a3af67612bd6c2e7979ab4da1
Signed-off-by: Brendan Jackman <[email protected]>
---
kernel/bpf/verifier.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1947da617b03..e8b41ccdfb90 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9501,6 +9501,12 @@ static int do_check(struct bpf_verifier_env *env)
} else if (class == BPF_STX) {
enum bpf_reg_type *prev_dst_type, dst_reg_type;

+ if (((BPF_MODE(insn->code) != BPF_MEM &&
+ BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
+ verbose(env, "BPF_STX uses reserved fields\n");
+ return -EINVAL;
+ }
+
if (BPF_MODE(insn->code) == BPF_ATOMIC) {
err = check_atomic(env, env->insn_idx, insn);
if (err)
@@ -9910,13 +9916,6 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
return -EINVAL;
}

- if (BPF_CLASS(insn->code) == BPF_STX &&
- ((BPF_MODE(insn->code) != BPF_MEM &&
- BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
- verbose(env, "BPF_STX uses reserved fields\n");
- return -EINVAL;
- }
-
if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
struct bpf_insn_aux_data *aux;
struct bpf_map *map;
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:08:35

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions

This adds instructions for

atomic[64]_[fetch_]and
atomic[64]_[fetch_]or
atomic[64]_[fetch_]xor

All these operations are isomorphic enough to implement with the same
verifier, interpreter, and x86 JIT code, hence the single commit.

The main interesting thing here is that x86 doesn't directly support
the fetch_ versions of these operations, so we need to generate a
CMPXCHG loop in the JIT. This requires the use of two temporary
registers; IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this
purpose.
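For reference, here is a hypothetical C-level sketch of the loop the JIT emits for the fetch_ variants, using fetch_and as the example (the function name fetch_and_u32 is invented for illustration; the real JIT emits the LDX/ALU/LOCK CMPXCHG x86 instructions directly):

```c
unsigned int fetch_and_u32(unsigned int *addr, unsigned int mask)
{
	unsigned int old, new;

	do {
		old = *addr;       /* load old value (emit_ldx) */
		new = old & mask;  /* apply the op locally in a temp reg */
		/* Try to swap the new value in. On x86 this is the
		 * LOCK CMPXCHG emitted via emit_atomic(); the JNE on a
		 * cleared ZF retries when another CPU won the race.
		 */
	} while (!__atomic_compare_exchange_n(addr, &old, new, 0,
					      __ATOMIC_SEQ_CST,
					      __ATOMIC_SEQ_CST));

	return old;                /* the pre-modification value */
}
```

The BPF_OR and BPF_XOR fetch variants follow the same shape, only the local ALU op differs.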

Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
Signed-off-by: Brendan Jackman <[email protected]>
---
arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
kernel/bpf/core.c | 5 ++-
kernel/bpf/disasm.c | 21 ++++++++++---
kernel/bpf/verifier.c | 6 ++++
tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
6 files changed, 196 insertions(+), 6 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7d29bc3bb4ff..4ab0f821326c 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -824,6 +824,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
/* emit opcode */
switch (atomic_op) {
case BPF_ADD:
+ case BPF_SUB:
+ case BPF_AND:
+ case BPF_OR:
+ case BPF_XOR:
/* lock *(u32/u64*)(dst_reg + off) <op>= src_reg */
EMIT1(simple_alu_opcodes[atomic_op]);
break;
@@ -1306,8 +1310,52 @@ st: if (is_imm8(insn->off))

case BPF_STX | BPF_ATOMIC | BPF_W:
case BPF_STX | BPF_ATOMIC | BPF_DW:
+ if (insn->imm == (BPF_AND | BPF_FETCH) ||
+ insn->imm == (BPF_OR | BPF_FETCH) ||
+ insn->imm == (BPF_XOR | BPF_FETCH)) {
+ u8 *branch_target;
+ bool is64 = BPF_SIZE(insn->code) == BPF_DW;
+
+ /*
+ * Can't be implemented with a single x86 insn.
+ * Need to do a CMPXCHG loop.
+ */
+
+ /* Will need RAX as a CMPXCHG operand so save R0 */
+ emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
+ branch_target = prog;
+ /* Load old value */
+ emit_ldx(&prog, BPF_SIZE(insn->code),
+ BPF_REG_0, dst_reg, insn->off);
+ /*
+ * Perform the (commutative) operation locally,
+ * put the result in the AUX_REG.
+ */
+ emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0);
+ maybe_emit_mod(&prog, AUX_REG, src_reg, is64);
+ EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)],
+ add_2reg(0xC0, AUX_REG, src_reg));
+ /* Attempt to swap in new value */
+ err = emit_atomic(&prog, BPF_CMPXCHG,
+ dst_reg, AUX_REG, insn->off,
+ BPF_SIZE(insn->code));
+ if (WARN_ON(err))
+ return err;
+ /*
+ * ZF tells us whether we won the race. If it's
+ * cleared we need to try again.
+ */
+ EMIT2(X86_JNE, -(prog - branch_target) - 2);
+ /* Return the pre-modification value */
+ emit_mov_reg(&prog, is64, src_reg, BPF_REG_0);
+ /* Restore R0 after clobbering RAX */
+ emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
+ break;
+
+ }
+
err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
- insn->off, BPF_SIZE(insn->code));
+ insn->off, BPF_SIZE(insn->code));
if (err)
return err;
break;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 6186280715ed..698f82897b0d 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
.off = OFF, \
.imm = BPF_ADD | BPF_FETCH })

+/* Atomic memory and, *(uint *)(dst_reg + off16) &= src_reg */
+
+#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_AND })
+
+/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_AND | BPF_FETCH })
+
+/* Atomic memory or, *(uint *)(dst_reg + off16) |= src_reg */
+
+#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_OR })
+
+/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_OR | BPF_FETCH })
+
+/* Atomic memory xor, *(uint *)(dst_reg + off16) ^= src_reg */
+
+#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_XOR })
+
+/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_XOR | BPF_FETCH })
+
/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */

#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 498d3f067be7..27eac4d5724c 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1642,7 +1642,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
STX_ATOMIC_W:
switch (IMM) {
ATOMIC(BPF_ADD, add)
-
+ ATOMIC(BPF_AND, and)
+ ATOMIC(BPF_OR, or)
+ ATOMIC(BPF_XOR, xor)
+#undef ATOMIC
case BPF_XCHG:
if (BPF_SIZE(insn->code) == BPF_W)
SRC = (u32) atomic_xchg(
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 18357ea9a17d..0c7c1c31a57b 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -80,6 +80,13 @@ const char *const bpf_alu_string[16] = {
[BPF_END >> 4] = "endian",
};

+static const char *const bpf_atomic_alu_string[16] = {
+ [BPF_ADD >> 4] = "add",
+ [BPF_AND >> 4] = "and",
+ [BPF_OR >> 4] = "or",
+ [BPF_XOR >> 4] = "xor",
+};
+
static const char *const bpf_ldst_string[] = {
[BPF_W >> 3] = "u32",
[BPF_H >> 3] = "u16",
@@ -154,17 +161,23 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
insn->dst_reg,
insn->off, insn->src_reg);
else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
- insn->imm == BPF_ADD) {
- verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n",
+ (insn->imm == BPF_ADD || insn->imm == BPF_AND ||
+ insn->imm == BPF_OR || insn->imm == BPF_XOR)) {
+ verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) %s r%d\n",
insn->code,
bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
insn->dst_reg, insn->off,
+ bpf_alu_string[BPF_OP(insn->imm) >> 4],
insn->src_reg);
} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
- insn->imm == (BPF_ADD | BPF_FETCH)) {
- verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n",
+ (insn->imm == (BPF_ADD | BPF_FETCH) ||
+ insn->imm == (BPF_AND | BPF_FETCH) ||
+ insn->imm == (BPF_OR | BPF_FETCH) ||
+ insn->imm == (BPF_XOR | BPF_FETCH))) {
+ verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_%s(*(%s *)(r%d %+d), r%d)\n",
insn->code, insn->src_reg,
BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
+ bpf_atomic_alu_string[BPF_OP(insn->imm) >> 4],
bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
insn->dst_reg, insn->off, insn->src_reg);
} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ccf4315e54e7..dd30eb9a6c1b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3606,6 +3606,12 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
switch (insn->imm) {
case BPF_ADD:
case BPF_ADD | BPF_FETCH:
+ case BPF_AND:
+ case BPF_AND | BPF_FETCH:
+ case BPF_OR:
+ case BPF_OR | BPF_FETCH:
+ case BPF_XOR:
+ case BPF_XOR | BPF_FETCH:
case BPF_XCHG:
case BPF_CMPXCHG:
break;
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index ea99bd17d003..b74febf83eb1 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -190,6 +190,66 @@
.off = OFF, \
.imm = BPF_ADD | BPF_FETCH })

+/* Atomic memory and, *(uint *)(dst_reg + off16) &= src_reg */
+
+#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_AND })
+
+/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_AND | BPF_FETCH })
+
+/* Atomic memory or, *(uint *)(dst_reg + off16) |= src_reg */
+
+#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_OR })
+
+/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_OR | BPF_FETCH })
+
+/* Atomic memory xor, *(uint *)(dst_reg + off16) ^= src_reg */
+
+#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_XOR })
+
+/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_XOR | BPF_FETCH })
+
/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */

#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:08:48

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 07/14] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction

The BPF_FETCH field can be set in bpf_insn.imm, for BPF_ATOMIC
instructions, in order to have the previous value of the
atomically-modified memory location loaded into the src register
after the atomic op is carried out.
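A minimal sketch of how BPF_FETCH composes into the instruction encoding. The opcode values below are copied from include/uapi/linux/bpf.h; the helper functions are made up purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Field values from include/uapi/linux/bpf.h (redefined here so the
 * sketch is self-contained; the kernel header is authoritative). */
#define BPF_STX    0x03	/* instruction class */
#define BPF_DW     0x18	/* 64-bit size modifier */
#define BPF_ATOMIC 0xc0	/* mode modifier */
#define BPF_ADD    0x00	/* atomic op, stored in imm */
#define BPF_FETCH  0x01	/* "load old value into src reg" flag, in imm */

/* Encoding for src_reg = atomic_fetch_add(*(u64 *)(dst_reg + off), src_reg):
 * the op and the fetch flag both live in the immediate, not the opcode. */
static uint8_t fetch_add_code(void) { return BPF_STX | BPF_DW | BPF_ATOMIC; }
static int32_t fetch_add_imm(void)  { return BPF_ADD | BPF_FETCH; }
```

Note that without BPF_FETCH the same opcode byte encodes the plain (non-fetching) atomic add, so old binaries that used the legacy BPF_XADD encoding keep their meaning.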

Suggested-by: Yonghong Song <[email protected]>
Signed-off-by: Brendan Jackman <[email protected]>
Change-Id: I649ad48edb565a32ccdf72924ffe96a8c8da57ad
---
arch/x86/net/bpf_jit_comp.c | 4 ++++
include/linux/filter.h | 9 +++++++++
include/uapi/linux/bpf.h | 3 +++
kernel/bpf/core.c | 13 +++++++++++++
kernel/bpf/disasm.c | 7 +++++++
kernel/bpf/verifier.c | 35 ++++++++++++++++++++++++----------
tools/include/linux/filter.h | 10 ++++++++++
tools/include/uapi/linux/bpf.h | 3 +++
8 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 5e5a132b3d52..88cb09fa3bfb 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -827,6 +827,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
/* lock *(u32/u64*)(dst_reg + off) <op>= src_reg */
EMIT1(simple_alu_opcodes[atomic_op]);
break;
+ case BPF_ADD | BPF_FETCH:
+ /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
+ EMIT2(0x0F, 0xC1);
+ break;
default:
pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
return -EFAULT;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index ce19988fb312..4e04d0fc454f 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -270,6 +270,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
.imm = BPF_ADD })
#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */

+/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_ADD | BPF_FETCH })

/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d0adc48db43c..025e377e7229 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -44,6 +44,9 @@
#define BPF_CALL 0x80 /* function call */
#define BPF_EXIT 0x90 /* function return */

+/* atomic op type fields (stored in immediate) */
+#define BPF_FETCH 0x01 /* fetch previous value into src reg */
+
/* Register numbers */
enum {
BPF_REG_0 = 0,
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3abc6b250b18..61e93eb7d363 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1624,16 +1624,29 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
/* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
atomic_add((u32) SRC, (atomic_t *)(unsigned long)
(DST + insn->off));
+ break;
+ case BPF_ADD | BPF_FETCH:
+ SRC = (u32) atomic_fetch_add(
+ (u32) SRC,
+ (atomic_t *)(unsigned long) (DST + insn->off));
+ break;
default:
goto default_label;
}
CONT;
+
STX_ATOMIC_DW:
switch (IMM) {
case BPF_ADD:
/* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
(DST + insn->off));
+ break;
+ case BPF_ADD | BPF_FETCH:
+ SRC = (u64) atomic64_fetch_add(
+ (u64) SRC,
+ (atomic64_t *)(unsigned long) (DST + insn->off));
+ break;
default:
goto default_label;
}
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 37c8d6e9b4cc..3ee2246a52ef 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -160,6 +160,13 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
insn->dst_reg, insn->off,
insn->src_reg);
+ } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+ insn->imm == (BPF_ADD | BPF_FETCH)) {
+ verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n",
+ insn->code, insn->src_reg,
+ BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->dst_reg, insn->off, insn->src_reg);
} else {
verbose(cbs->private_data, "BUG_%02x\n", insn->code);
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e8b41ccdfb90..a68adbcee370 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3602,7 +3602,11 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
{
int err;

- if (insn->imm != BPF_ADD) {
+ switch (insn->imm) {
+ case BPF_ADD:
+ case BPF_ADD | BPF_FETCH:
+ break;
+ default:
verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
return -EINVAL;
}
@@ -3631,7 +3635,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
is_pkt_reg(env, insn->dst_reg) ||
is_flow_key_reg(env, insn->dst_reg) ||
is_sk_reg(env, insn->dst_reg)) {
- verbose(env, "atomic stores into R%d %s is not allowed\n",
+ verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
insn->dst_reg,
reg_type_str[reg_state(env, insn->dst_reg)->type]);
return -EACCES;
@@ -3644,8 +3648,20 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
return err;

/* check whether we can write into the same memory */
- return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
- BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+ if (err)
+ return err;
+
+ if (!(insn->imm & BPF_FETCH))
+ return 0;
+
+ /* check and record load of old value into src reg */
+ err = check_reg_arg(env, insn->src_reg, DST_OP);
+ if (err)
+ return err;
+
+ return 0;
}

static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno,
@@ -9501,12 +9517,6 @@ static int do_check(struct bpf_verifier_env *env)
} else if (class == BPF_STX) {
enum bpf_reg_type *prev_dst_type, dst_reg_type;

- if (((BPF_MODE(insn->code) != BPF_MEM &&
- BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
- verbose(env, "BPF_STX uses reserved fields\n");
- return -EINVAL;
- }
-
if (BPF_MODE(insn->code) == BPF_ATOMIC) {
err = check_atomic(env, env->insn_idx, insn);
if (err)
@@ -9515,6 +9525,11 @@ static int do_check(struct bpf_verifier_env *env)
continue;
}

+ if (BPF_MODE(insn->code) != BPF_MEM || insn->imm != 0) {
+ verbose(env, "BPF_STX uses reserved fields\n");
+ return -EINVAL;
+ }
+
/* check src1 operand */
err = check_reg_arg(env, insn->src_reg, SRC_OP);
if (err)
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index 95ff51d97f25..ac7701678e1a 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -180,6 +180,16 @@
.imm = BPF_ADD })
#define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */

+/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
+
+#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = BPF_ADD | BPF_FETCH })
+
/* Memory store, *(uint *) (dst_reg + off16) = imm32 */

#define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d0adc48db43c..025e377e7229 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -44,6 +44,9 @@
#define BPF_CALL 0x80 /* function call */
#define BPF_EXIT 0x90 /* function return */

+/* atomic op type fields (stored in immediate) */
+#define BPF_FETCH 0x01 /* fetch previous value into src reg */
+
/* Register numbers */
enum {
BPF_REG_0 = 0,
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:09:05

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations

This relies on the work done by Yonghong Song in
https://reviews.llvm.org/D72184

Note the use of a define called ENABLE_ATOMICS_TESTS: this is used
to:

- Avoid breaking the build for people on old versions of Clang
- Avoid needing separate lists of test objects for no_alu32, where
atomics are not supported even if Clang has the feature.

The atomics_test.o BPF object is built unconditionally both for
test_progs and test_progs-no_alu32. For test_progs, if Clang supports
atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper
test code. Otherwise, progs and global vars are defined anyway, as
stubs; this means that the skeleton user code still builds.

The atomics_test.o userspace object is built once and used for both
test_progs and test_progs-no_alu32. A variable called skip_tests is
defined in the BPF object's data section, which tells the userspace
object whether to skip the atomics test.
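The stub pattern described above can be sketched in plain C (names mirror the test files; this is an illustration compiled as ordinary userspace code without -DENABLE_ATOMICS_TESTS, not the actual BPF object):

```c
#include <assert.h>
#include <stdbool.h>

/* When Clang supports BPF atomics the build defines ENABLE_ATOMICS_TESTS
 * and the real test bodies are compiled in. Otherwise the progs and
 * globals still exist as stubs, so the skeleton user code still links,
 * and skip_tests tells userspace to skip the subtests. */
#ifdef ENABLE_ATOMICS_TESTS
bool skip_tests = false;
#else
bool skip_tests = true;	/* stub build: userspace checks this and skips */
#endif

static long long add64_value = 1;

static int prog_add(void)
{
#ifdef ENABLE_ATOMICS_TESTS
	__sync_fetch_and_add(&add64_value, 2);	/* real test body */
#endif
	return 0;	/* stub build: prog exists but does nothing */
}
```

The key point is that exactly one object file serves both test_progs and test_progs-no_alu32, so no separate test lists are needed.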

Change-Id: Iecc12f35f0ded4a1dd805cce1be576e7b27917ef
Signed-off-by: Brendan Jackman <[email protected]>
---
tools/testing/selftests/bpf/Makefile | 4 +
.../selftests/bpf/prog_tests/atomics_test.c | 262 ++++++++++++++++++
.../selftests/bpf/progs/atomics_test.c | 154 ++++++++++
.../selftests/bpf/verifier/atomic_and.c | 77 +++++
.../selftests/bpf/verifier/atomic_cmpxchg.c | 96 +++++++
.../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++
.../selftests/bpf/verifier/atomic_or.c | 77 +++++
.../selftests/bpf/verifier/atomic_xchg.c | 46 +++
.../selftests/bpf/verifier/atomic_xor.c | 77 +++++
9 files changed, 899 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c
create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index f21c4841a612..448a9eb1a56c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -431,11 +431,15 @@ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
$(wildcard progs/btf_dump_test_case_*.c)
TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
+ifeq ($(feature-clang-bpf-atomics),1)
+ TRUNNER_BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
+endif
TRUNNER_BPF_LDFLAGS := -mattr=+alu32
$(eval $(call DEFINE_TEST_RUNNER,test_progs))

# Define test_progs-no_alu32 test runner.
TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
+TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
TRUNNER_BPF_LDFLAGS :=
$(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))

diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
new file mode 100644
index 000000000000..66f0ccf4f4ec
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+
+
+#include "atomics_test.skel.h"
+
+static struct atomics_test *setup(void)
+{
+ struct atomics_test *atomics_skel;
+ __u32 duration = 0, err;
+
+ atomics_skel = atomics_test__open_and_load();
+ if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
+ return NULL;
+
+ if (atomics_skel->data->skip_tests) {
+ printf("%s:SKIP:no ENABLE_ATOMICS_TESTS (missing Clang BPF atomics support)\n",
+ __func__);
+ test__skip();
+ goto err;
+ }
+
+ err = atomics_test__attach(atomics_skel);
+ if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
+ goto err;
+
+ return atomics_skel;
+
+err:
+ atomics_test__destroy(atomics_skel);
+ return NULL;
+}
+
+static void test_add(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.add);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run add",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->add64_value, 3, "add64_value");
+ ASSERT_EQ(atomics_skel->bss->add64_result, 1, "add64_result");
+
+ ASSERT_EQ(atomics_skel->data->add32_value, 3, "add32_value");
+ ASSERT_EQ(atomics_skel->bss->add32_result, 1, "add32_result");
+
+ ASSERT_EQ(atomics_skel->bss->add_stack_value_copy, 3, "add_stack_value");
+ ASSERT_EQ(atomics_skel->bss->add_stack_result, 1, "add_stack_result");
+
+ ASSERT_EQ(atomics_skel->data->add_noreturn_value, 3, "add_noreturn_value");
+
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+static void test_sub(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.sub);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run sub",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->sub64_value, -1, "sub64_value");
+ ASSERT_EQ(atomics_skel->bss->sub64_result, 1, "sub64_result");
+
+ ASSERT_EQ(atomics_skel->data->sub32_value, -1, "sub32_value");
+ ASSERT_EQ(atomics_skel->bss->sub32_result, 1, "sub32_result");
+
+ ASSERT_EQ(atomics_skel->bss->sub_stack_value_copy, -1, "sub_stack_value");
+ ASSERT_EQ(atomics_skel->bss->sub_stack_result, 1, "sub_stack_result");
+
+ ASSERT_EQ(atomics_skel->data->sub_noreturn_value, -1, "sub_noreturn_value");
+
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+static void test_and(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.and);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run and",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->and64_value, 0x010ull << 32, "and64_value");
+ ASSERT_EQ(atomics_skel->bss->and64_result, 0x110ull << 32, "and64_result");
+
+ ASSERT_EQ(atomics_skel->data->and32_value, 0x010, "and32_value");
+ ASSERT_EQ(atomics_skel->bss->and32_result, 0x110, "and32_result");
+
+ ASSERT_EQ(atomics_skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value");
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+static void test_or(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.or);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run or",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->or64_value, 0x111ull << 32, "or64_value");
+ ASSERT_EQ(atomics_skel->bss->or64_result, 0x110ull << 32, "or64_result");
+
+ ASSERT_EQ(atomics_skel->data->or32_value, 0x111, "or32_value");
+ ASSERT_EQ(atomics_skel->bss->or32_result, 0x110, "or32_result");
+
+ ASSERT_EQ(atomics_skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value");
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+static void test_xor(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.xor);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run xor",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->xor64_value, 0x101ull << 32, "xor64_value");
+ ASSERT_EQ(atomics_skel->bss->xor64_result, 0x110ull << 32, "xor64_result");
+
+ ASSERT_EQ(atomics_skel->data->xor32_value, 0x101, "xor32_value");
+ ASSERT_EQ(atomics_skel->bss->xor32_result, 0x110, "xor32_result");
+
+ ASSERT_EQ(atomics_skel->data->xor_noreturn_value, 0x101ull << 32, "xor_noreturn_value");
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+static void test_cmpxchg(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.cmpxchg);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run cmpxchg",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->cmpxchg64_value, 2, "cmpxchg64_value");
+ ASSERT_EQ(atomics_skel->bss->cmpxchg64_result_fail, 1, "cmpxchg_result_fail");
+ ASSERT_EQ(atomics_skel->bss->cmpxchg64_result_succeed, 1, "cmpxchg_result_succeed");
+
+ ASSERT_EQ(atomics_skel->data->cmpxchg32_value, 2, "cmpxchg32_value");
+ ASSERT_EQ(atomics_skel->bss->cmpxchg32_result_fail, 1, "cmpxchg_result_fail");
+ ASSERT_EQ(atomics_skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed");
+
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+static void test_xchg(void)
+{
+ struct atomics_test *atomics_skel;
+ int err, prog_fd;
+ __u32 duration = 0, retval;
+
+ atomics_skel = setup();
+ if (!atomics_skel)
+ return;
+
+ prog_fd = bpf_program__fd(atomics_skel->progs.xchg);
+ err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+ NULL, NULL, &retval, &duration);
+ if (CHECK(err || retval, "test_run xchg",
+ "err %d errno %d retval %d duration %d\n",
+ err, errno, retval, duration))
+ goto cleanup;
+
+ ASSERT_EQ(atomics_skel->data->xchg64_value, 2, "xchg64_value");
+ ASSERT_EQ(atomics_skel->bss->xchg64_result, 1, "xchg64_result");
+
+ ASSERT_EQ(atomics_skel->data->xchg32_value, 2, "xchg32_value");
+ ASSERT_EQ(atomics_skel->bss->xchg32_result, 1, "xchg32_result");
+
+cleanup:
+ atomics_test__destroy(atomics_skel);
+}
+
+void test_atomics_test(void)
+{
+ if (test__start_subtest("add"))
+ test_add();
+ if (test__start_subtest("sub"))
+ test_sub();
+ if (test__start_subtest("and"))
+ test_and();
+ if (test__start_subtest("or"))
+ test_or();
+ if (test__start_subtest("xor"))
+ test_xor();
+ if (test__start_subtest("cmpxchg"))
+ test_cmpxchg();
+ if (test__start_subtest("xchg"))
+ test_xchg();
+}
diff --git a/tools/testing/selftests/bpf/progs/atomics_test.c b/tools/testing/selftests/bpf/progs/atomics_test.c
new file mode 100644
index 000000000000..d40c93496843
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/atomics_test.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <stdbool.h>
+
+#ifdef ENABLE_ATOMICS_TESTS
+bool skip_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_tests = true;
+#endif
+
+__u64 add64_value = 1;
+__u64 add64_result = 0;
+__u32 add32_value = 1;
+__u32 add32_result = 0;
+__u64 add_stack_value_copy = 0;
+__u64 add_stack_result = 0;
+__u64 add_noreturn_value = 1;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(add, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+ __u64 add_stack_value = 1;
+
+ add64_result = __sync_fetch_and_add(&add64_value, 2);
+ add32_result = __sync_fetch_and_add(&add32_value, 2);
+ add_stack_result = __sync_fetch_and_add(&add_stack_value, 2);
+ add_stack_value_copy = add_stack_value;
+ __sync_fetch_and_add(&add_noreturn_value, 2);
+#endif
+
+ return 0;
+}
+
+__s64 sub64_value = 1;
+__s64 sub64_result = 0;
+__s32 sub32_value = 1;
+__s32 sub32_result = 0;
+__s64 sub_stack_value_copy = 0;
+__s64 sub_stack_result = 0;
+__s64 sub_noreturn_value = 1;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(sub, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+ __s64 sub_stack_value = 1;
+
+ sub64_result = __sync_fetch_and_sub(&sub64_value, 2);
+ sub32_result = __sync_fetch_and_sub(&sub32_value, 2);
+ sub_stack_result = __sync_fetch_and_sub(&sub_stack_value, 2);
+ sub_stack_value_copy = sub_stack_value;
+ __sync_fetch_and_sub(&sub_noreturn_value, 2);
+#endif
+
+ return 0;
+}
+
+__u64 and64_value = (0x110ull << 32);
+__u64 and64_result = 0;
+__u32 and32_value = 0x110;
+__u32 and32_result = 0;
+__u64 and_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(and, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+
+ and64_result = __sync_fetch_and_and(&and64_value, 0x011ull << 32);
+ and32_result = __sync_fetch_and_and(&and32_value, 0x011);
+ __sync_fetch_and_and(&and_noreturn_value, 0x011ull << 32);
+#endif
+
+ return 0;
+}
+
+__u64 or64_value = (0x110ull << 32);
+__u64 or64_result = 0;
+__u32 or32_value = 0x110;
+__u32 or32_result = 0;
+__u64 or_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(or, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+ or64_result = __sync_fetch_and_or(&or64_value, 0x011ull << 32);
+ or32_result = __sync_fetch_and_or(&or32_value, 0x011);
+ __sync_fetch_and_or(&or_noreturn_value, 0x011ull << 32);
+#endif
+
+ return 0;
+}
+
+__u64 xor64_value = (0x110ull << 32);
+__u64 xor64_result = 0;
+__u32 xor32_value = 0x110;
+__u32 xor32_result = 0;
+__u64 xor_noreturn_value = (0x110ull << 32);
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xor, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+ xor64_result = __sync_fetch_and_xor(&xor64_value, 0x011ull << 32);
+ xor32_result = __sync_fetch_and_xor(&xor32_value, 0x011);
+ __sync_fetch_and_xor(&xor_noreturn_value, 0x011ull << 32);
+#endif
+
+ return 0;
+}
+
+__u64 cmpxchg64_value = 1;
+__u64 cmpxchg64_result_fail = 0;
+__u64 cmpxchg64_result_succeed = 0;
+__u32 cmpxchg32_value = 1;
+__u32 cmpxchg32_result_fail = 0;
+__u32 cmpxchg32_result_succeed = 0;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(cmpxchg, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+ cmpxchg64_result_fail = __sync_val_compare_and_swap(&cmpxchg64_value, 0, 3);
+ cmpxchg64_result_succeed = __sync_val_compare_and_swap(&cmpxchg64_value, 1, 2);
+
+ cmpxchg32_result_fail = __sync_val_compare_and_swap(&cmpxchg32_value, 0, 3);
+ cmpxchg32_result_succeed = __sync_val_compare_and_swap(&cmpxchg32_value, 1, 2);
+#endif
+
+ return 0;
+}
+
+__u64 xchg64_value = 1;
+__u64 xchg64_result = 0;
+__u32 xchg32_value = 1;
+__u32 xchg32_result = 0;
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(xchg, int a)
+{
+#ifdef ENABLE_ATOMICS_TESTS
+ __u64 val64 = 2;
+ __u32 val32 = 2;
+
+ __atomic_exchange(&xchg64_value, &val64, &xchg64_result, __ATOMIC_RELAXED);
+ __atomic_exchange(&xchg32_value, &val32, &xchg32_result, __ATOMIC_RELAXED);
+#endif
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/verifier/atomic_and.c b/tools/testing/selftests/bpf/verifier/atomic_and.c
new file mode 100644
index 000000000000..7eea6d9dfd7d
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_and.c
@@ -0,0 +1,77 @@
+{
+ "BPF_ATOMIC_AND without fetch",
+ .insns = {
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+ /* atomic_and(&val, 0x011); */
+ BPF_MOV64_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (val != 0x010) exit(2); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x010, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* r1 should not be clobbered, no BPF_FETCH flag */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_AND with fetch",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 123),
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+ /* old = atomic_fetch_and(&val, 0x011); */
+ BPF_MOV64_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_FETCH_AND(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (old != 0x110) exit(3); */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* if (val != 0x010) exit(2); */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+ BPF_MOV64_IMM(BPF_REG_1, 2),
+ BPF_EXIT_INSN(),
+ /* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_AND with fetch 32bit",
+ .insns = {
+ /* r0 = (s64) -1 */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+ /* old = atomic_fetch_and(&val, 0x011); */
+ BPF_MOV32_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_FETCH_AND(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* if (old != 0x110) exit(3); */
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* if (val != 0x010) exit(2); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+ BPF_MOV32_IMM(BPF_REG_1, 2),
+ BPF_EXIT_INSN(),
+ /* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+ * It should be -1 so add 1 to get exit code.
+ */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
new file mode 100644
index 000000000000..335e12690be7
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -0,0 +1,96 @@
+{
+ "atomic compare-and-exchange smoketest - 64bit",
+ .insns = {
+ /* val = 3; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+ /* old = atomic_cmpxchg(&val, 2, 4); */
+ BPF_MOV64_IMM(BPF_REG_1, 4),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (old != 3) exit(2); */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* if (val != 3) exit(3); */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* old = atomic_cmpxchg(&val, 3, 4); */
+ BPF_MOV64_IMM(BPF_REG_1, 4),
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (old != 3) exit(4); */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 4),
+ BPF_EXIT_INSN(),
+ /* if (val != 4) exit(5); */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 5),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "atomic compare-and-exchange smoketest - 32bit",
+ .insns = {
+ /* val = 3; */
+ BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+ /* old = atomic_cmpxchg(&val, 2, 4); */
+ BPF_MOV32_IMM(BPF_REG_1, 4),
+ BPF_MOV32_IMM(BPF_REG_0, 2),
+ BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* if (old != 3) exit(2); */
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* if (val != 3) exit(3); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* old = atomic_cmpxchg(&val, 3, 4); */
+ BPF_MOV32_IMM(BPF_REG_1, 4),
+ BPF_MOV32_IMM(BPF_REG_0, 3),
+ BPF_ATOMIC_CMPXCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* if (old != 3) exit(4); */
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 4),
+ BPF_EXIT_INSN(),
+ /* if (val != 4) exit(5); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 5),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "Can't use cmpxchg on uninit src reg",
+ .insns = {
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+ BPF_EXIT_INSN(),
+ },
+ .result = REJECT,
+ .errstr = "!read_ok",
+},
+{
+ "Can't use cmpxchg on uninit memory",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_MOV64_IMM(BPF_REG_2, 4),
+ BPF_ATOMIC_CMPXCHG(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+ BPF_EXIT_INSN(),
+ },
+ .result = REJECT,
+ .errstr = "invalid read from stack",
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
new file mode 100644
index 000000000000..7c87bc9a13de
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
@@ -0,0 +1,106 @@
+{
+ "BPF_ATOMIC_FETCH_ADD smoketest - 64bit",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ /* Write 3 to stack */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+ /* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+ BPF_MOV64_IMM(BPF_REG_1, 1),
+ BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* Check the value we loaded back was 3 */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* Load value from stack */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+ /* Check value loaded from stack was 4 */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_FETCH_ADD smoketest - 32bit",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ /* Write 3 to stack */
+ BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+ /* Put a 1 in R1, add it to the 3 on the stack, and load the value back into R1 */
+ BPF_MOV32_IMM(BPF_REG_1, 1),
+ BPF_ATOMIC_FETCH_ADD(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* Check the value we loaded back was 3 */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* Load value from stack */
+ BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+ /* Check value loaded from stack was 4 */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "Can't use ATM_FETCH_ADD on frame pointer",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+ BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
+ BPF_EXIT_INSN(),
+ },
+ .result = REJECT,
+ .errstr_unpriv = "R10 leaks addr into mem",
+ .errstr = "frame pointer is read only",
+},
+{
+ "Can't use ATM_FETCH_ADD on uninit src reg",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+ BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
+ BPF_EXIT_INSN(),
+ },
+ .result = REJECT,
+	/* It happens that the address leak check is first, but it would also
+	 * complain about the fact that we're trying to modify R10.
+	 */
+ .errstr = "!read_ok",
+},
+{
+ "Can't use ATM_FETCH_ADD on uninit dst reg",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_0, -8),
+ BPF_EXIT_INSN(),
+ },
+ .result = REJECT,
+	/* It happens that the address leak check is first, but it would also
+	 * complain about the fact that we're trying to modify R10.
+	 */
+ .errstr = "!read_ok",
+},
+{
+ "Can't use ATM_FETCH_ADD on kernel memory",
+ .insns = {
+ /* This is an fentry prog, context is array of the args of the
+ * kernel function being called. Load first arg into R2.
+ */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0),
+ /* First arg of bpf_fentry_test7 is a pointer to a struct.
+ * Attempt to modify that struct. Verifier shouldn't let us
+ * because it's kernel memory.
+ */
+ BPF_MOV64_IMM(BPF_REG_3, 1),
+ BPF_ATOMIC_FETCH_ADD(BPF_DW, BPF_REG_2, BPF_REG_3, 0),
+ /* Done */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_TRACING,
+ .expected_attach_type = BPF_TRACE_FENTRY,
+ .kfunc = "bpf_fentry_test7",
+ .result = REJECT,
+ .errstr = "only read is supported",
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c
new file mode 100644
index 000000000000..1b22fb2881f0
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_or.c
@@ -0,0 +1,77 @@
+{
+ "BPF_ATOMIC_OR without fetch",
+ .insns = {
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+ /* atomic_or(&val, 0x011); */
+ BPF_MOV64_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (val != 0x111) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x111, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* r1 should not be clobbered, no BPF_FETCH flag */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_OR with fetch",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 123),
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+ /* old = atomic_fetch_or(&val, 0x011); */
+ BPF_MOV64_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_FETCH_OR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (old != 0x110) exit(3); */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* if (val != 0x111) exit(2); */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+ BPF_MOV64_IMM(BPF_REG_1, 2),
+ BPF_EXIT_INSN(),
+ /* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_OR with fetch 32bit",
+ .insns = {
+ /* r0 = (s64) -1 */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+ /* old = atomic_fetch_or(&val, 0x011); */
+ BPF_MOV32_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_FETCH_OR(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* if (old != 0x110) exit(3); */
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* if (val != 0x111) exit(2); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+ BPF_MOV32_IMM(BPF_REG_1, 2),
+ BPF_EXIT_INSN(),
+ /* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+ * It should be -1 so add 1 to get exit code.
+ */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_xchg.c b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
new file mode 100644
index 000000000000..9348ac490e24
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
@@ -0,0 +1,46 @@
+{
+ "atomic exchange smoketest - 64bit",
+ .insns = {
+ /* val = 3; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 3),
+ /* old = atomic_xchg(&val, 4); */
+ BPF_MOV64_IMM(BPF_REG_1, 4),
+ BPF_ATOMIC_XCHG(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (old != 3) exit(1); */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* if (val != 4) exit(2); */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "atomic exchange smoketest - 32bit",
+ .insns = {
+ /* val = 3; */
+ BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 3),
+ /* old = atomic_xchg(&val, 4); */
+ BPF_MOV32_IMM(BPF_REG_1, 4),
+ BPF_ATOMIC_XCHG(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* if (old != 3) exit(1); */
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* if (val != 4) exit(2); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_xor.c b/tools/testing/selftests/bpf/verifier/atomic_xor.c
new file mode 100644
index 000000000000..d1315419a3a8
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_xor.c
@@ -0,0 +1,77 @@
+{
+ "BPF_ATOMIC_XOR without fetch",
+ .insns = {
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+ /* atomic_xor(&val, 0x011); */
+ BPF_MOV64_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (val != 0x101) exit(2); */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x101, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 2),
+ BPF_EXIT_INSN(),
+ /* r1 should not be clobbered, no BPF_FETCH flag */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_XOR with fetch",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 123),
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0x110),
+ /* old = atomic_fetch_xor(&val, 0x011); */
+ BPF_MOV64_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_FETCH_XOR(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+ /* if (old != 0x110) exit(3); */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* if (val != 0x101) exit(2); */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+ BPF_MOV64_IMM(BPF_REG_1, 2),
+ BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ /* exit(0); */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
+{
+ "BPF_ATOMIC_XOR with fetch 32bit",
+ .insns = {
+ /* r0 = (s64) -1 */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
+ /* val = 0x110; */
+ BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x110),
+ /* old = atomic_fetch_xor(&val, 0x011); */
+ BPF_MOV32_IMM(BPF_REG_1, 0x011),
+ BPF_ATOMIC_FETCH_XOR(BPF_W, BPF_REG_10, BPF_REG_1, -4),
+ /* if (old != 0x110) exit(3); */
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+ BPF_MOV32_IMM(BPF_REG_0, 3),
+ BPF_EXIT_INSN(),
+ /* if (val != 0x101) exit(2); */
+ BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+ BPF_MOV32_IMM(BPF_REG_1, 2),
+ BPF_EXIT_INSN(),
+	/* Check R0 wasn't clobbered (for fear of x86 JIT bug)
+ * It should be -1 so add 1 to get exit code.
+ */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+},
--
2.29.2.454.gaff20da3a2-goog
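For readers following along, the "BPF_ATOMIC_FETCH_ADD smoketest - 64bit" case above can be expressed as ordinary C. This is only an illustrative sketch using the GCC/Clang `__atomic` builtins, not the BPF instructions themselves; the register comments map back to the selftest's instruction sequence:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors "BPF_ATOMIC_FETCH_ADD smoketest - 64bit": returns the BPF
 * program's exit code (0 = pass, 1 = bad fetched value, 2 = bad memory
 * contents).
 */
static inline int fetch_add_smoketest(void)
{
	uint64_t val = 3;	/* BPF_ST_MEM(BPF_DW, R10, -8, 3) */
	uint64_t src = 1;	/* BPF_MOV64_IMM(R1, 1) */

	/* fetch_add: src ends up holding the pre-modification value */
	src = __atomic_fetch_add(&val, src, __ATOMIC_SEQ_CST);

	if (src != 3)
		return 1;
	if (val != 4)
		return 2;
	return 0;
}
```

The key property the selftest checks is the same one shown here: with BPF_FETCH, the source register is overwritten with the old memory value while memory receives the sum.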

2020-12-03 16:09:30

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

This is somewhat cargo-culted from the libbpf build. It will be used
in a subsequent patch to query for Clang BPF atomics support.

Change-Id: I9318a1702170eb752acced35acbb33f45126c44c
Signed-off-by: Brendan Jackman <[email protected]>
---
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 38 ++++++++++++++++++++++++++
2 files changed, 39 insertions(+)

diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index 395ae040ce1f..3c604dff1e20 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -35,3 +35,4 @@ test_cpp
/tools
/runqslower
/bench
+/FEATURE-DUMP.selftests.bpf
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 894192c319fb..f21c4841a612 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -104,8 +104,46 @@ OVERRIDE_TARGETS := 1
override define CLEAN
$(call msg,CLEAN)
$(Q)$(RM) -r $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) $(EXTRA_CLEAN)
+ $(Q)$(RM) $(OUTPUT)/FEATURE-DUMP.selftests.bpf
endef

+# This will work when bpf is built in tools env. where srctree
+# isn't set and when invoked from selftests build, where srctree
+# is set to ".". building_out_of_srctree is undefined for in-srctree
+# builds.
+ifeq ($(srctree),)
+update_srctree := 1
+endif
+ifdef building_out_of_srctree
+update_srctree := 1
+endif
+ifeq ($(update_srctree),1)
+srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+endif
+
+FEATURE_USER = .selftests.bpf
+FEATURE_TESTS = clang-bpf-atomics
+FEATURE_DISPLAY = clang-bpf-atomics
+
+check_feat := 1
+NON_CHECK_FEAT_TARGETS := clean
+ifdef MAKECMDGOALS
+ifeq ($(filter-out $(NON_CHECK_FEAT_TARGETS),$(MAKECMDGOALS)),)
+ check_feat := 0
+endif
+endif
+
+ifeq ($(check_feat),1)
+ifeq ($(FEATURES_DUMP),)
+include $(srctree)/tools/build/Makefile.feature
+else
+include $(FEATURES_DUMP)
+endif
+endif
+
include ../lib.mk

SCRATCH_DIR := $(OUTPUT)/tools
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 16:09:35

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 09/14] bpf: Pull out a macro for interpreting atomic ALU operations

Since the atomic operations that are added in subsequent commits are
all isomorphic with BPF_ADD, pull out a macro to avoid the
interpreter becoming dominated by lines of atomic-related code.

Note that this sacrifices interpreter performance (combining
STX_ATOMIC_W and STX_ATOMIC_DW into a single switch case means that we
need an extra conditional branch to differentiate them) in favour of
compact and (relatively!) simple C code.

Change-Id: I8cae5b66e75f34393de6063b91c05a8006fdd9e6
Signed-off-by: Brendan Jackman <[email protected]>
---
kernel/bpf/core.c | 79 +++++++++++++++++++++++------------------------
1 file changed, 38 insertions(+), 41 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 28f960bc2e30..498d3f067be7 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1618,55 +1618,52 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
LDX_PROBE(DW, 8)
#undef LDX_PROBE

- STX_ATOMIC_W:
- switch (IMM) {
- case BPF_ADD:
- /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
- atomic_add((u32) SRC, (atomic_t *)(unsigned long)
- (DST + insn->off));
- break;
- case BPF_ADD | BPF_FETCH:
- SRC = (u32) atomic_fetch_add(
- (u32) SRC,
- (atomic_t *)(unsigned long) (DST + insn->off));
- break;
- case BPF_XCHG:
- SRC = (u32) atomic_xchg(
- (atomic_t *)(unsigned long) (DST + insn->off),
- (u32) SRC);
- break;
- case BPF_CMPXCHG:
- BPF_R0 = (u32) atomic_cmpxchg(
- (atomic_t *)(unsigned long) (DST + insn->off),
- (u32) BPF_R0, (u32) SRC);
+#define ATOMIC(BOP, KOP) \
+ case BOP: \
+ if (BPF_SIZE(insn->code) == BPF_W) \
+ atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \
+ (DST + insn->off)); \
+ else \
+ atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \
+ (DST + insn->off)); \
+ break; \
+ case BOP | BPF_FETCH: \
+ if (BPF_SIZE(insn->code) == BPF_W) \
+ SRC = (u32) atomic_fetch_##KOP( \
+ (u32) SRC, \
+ (atomic_t *)(unsigned long) (DST + insn->off)); \
+ else \
+ SRC = (u64) atomic64_fetch_##KOP( \
+ (u64) SRC, \
+ (atomic64_t *)(s64) (DST + insn->off)); \
break;
- default:
- goto default_label;
- }
- CONT;

STX_ATOMIC_DW:
+ STX_ATOMIC_W:
switch (IMM) {
- case BPF_ADD:
- /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
- atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
- (DST + insn->off));
- break;
- case BPF_ADD | BPF_FETCH:
- SRC = (u64) atomic64_fetch_add(
- (u64) SRC,
- (atomic64_t *)(s64) (DST + insn->off));
- break;
+ ATOMIC(BPF_ADD, add)
+
case BPF_XCHG:
- SRC = (u64) atomic64_xchg(
- (atomic64_t *)(u64) (DST + insn->off),
- (u64) SRC);
+ if (BPF_SIZE(insn->code) == BPF_W)
+ SRC = (u32) atomic_xchg(
+ (atomic_t *)(unsigned long) (DST + insn->off),
+ (u32) SRC);
+ else
+ SRC = (u64) atomic64_xchg(
+ (atomic64_t *)(u64) (DST + insn->off),
+ (u64) SRC);
break;
case BPF_CMPXCHG:
- BPF_R0 = (u64) atomic64_cmpxchg(
- (atomic64_t *)(u64) (DST + insn->off),
- (u64) BPF_R0, (u64) SRC);
+ if (BPF_SIZE(insn->code) == BPF_W)
+ BPF_R0 = (u32) atomic_cmpxchg(
+ (atomic_t *)(unsigned long) (DST + insn->off),
+ (u32) BPF_R0, (u32) SRC);
+ else
+ BPF_R0 = (u64) atomic64_cmpxchg(
+ (atomic64_t *)(u64) (DST + insn->off),
+ (u64) BPF_R0, (u64) SRC);
break;
+
default:
goto default_label;
}
--
2.29.2.454.gaff20da3a2-goog
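The width dispatch that the ATOMIC() macro folds into the combined switch case can be sketched in plain C. This is not the kernel code, just an illustration (using the `__atomic` builtins) of the trade-off the commit message describes: one handler branches on operand size at runtime instead of duplicating every operation for BPF_W and BPF_DW:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the size dispatch inside the merged
 * STX_ATOMIC_W/STX_ATOMIC_DW case: a single handler selects the 32- or
 * 64-bit atomic at runtime, costing one extra conditional branch.
 */
static inline uint64_t fetch_add_sized(void *addr, uint64_t src, int is_dw)
{
	if (is_dw)
		return __atomic_fetch_add((uint64_t *)addr, src,
					  __ATOMIC_SEQ_CST);
	return __atomic_fetch_add((uint32_t *)addr, (uint32_t)src,
				  __ATOMIC_SEQ_CST);
}
```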

2020-12-03 16:09:47

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 14/14] bpf: Document new atomic instructions

Change-Id: Ic70fe9e3cb4403df4eb3be2ea5ae5af53156559e
Signed-off-by: Brendan Jackman <[email protected]>
---
Documentation/networking/filter.rst | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index 1583d59d806d..26d508a5e038 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -1053,6 +1053,32 @@ encoding.
.imm = BPF_ADD, .code = BPF_ATOMIC | BPF_W | BPF_STX: lock xadd *(u32 *)(dst_reg + off16) += src_reg
.imm = BPF_ADD, .code = BPF_ATOMIC | BPF_DW | BPF_STX: lock xadd *(u64 *)(dst_reg + off16) += src_reg

+The basic atomic operations supported are:
+
+ BPF_ADD
+ BPF_AND
+ BPF_OR
+ BPF_XOR
+
+Each has semantics equivalent to the ``BPF_ADD`` example above: the memory
+location addressed by ``dst_reg + off`` is atomically modified, with
+``src_reg`` as the other operand. If the ``BPF_FETCH`` flag is set in the
+immediate, then these operations also overwrite ``src_reg`` with the
+value that was in memory before it was modified.
+
+The special operations are:
+
+ BPF_XCHG
+
+This atomically exchanges ``src_reg`` with the value addressed by ``dst_reg +
+off``.
+
+ BPF_CMPXCHG
+
+This atomically compares the value addressed by ``dst_reg + off`` with
+``R0``. If they match, it is replaced with ``src_reg``. The value that was
+there before is loaded back into ``R0``.
+
Note that 1 and 2 byte atomic operations are not supported.

You may encounter BPF_XADD - this is a legacy name for BPF_ATOMIC, referring to
--
2.29.2.454.gaff20da3a2-goog
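For context on how programs generate these instructions: the thread's cover letter mentions the use-case of globally-unique cookies, and the feature test later in the series probes Clang's `__sync` builtins. A hedged sketch of such BPF-C source follows; it is compiled natively here purely so the semantics can be checked, and the exact builtin-to-instruction lowering (to ``BPF_ATOMIC`` with ``.imm = BPF_ADD | BPF_FETCH``) is as described by the series, not something this snippet proves:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the kind of C that Clang's BPF backend lowers to the atomic
 * instructions documented above. Compiled natively, the builtin has the
 * same semantics: atomically add and return the pre-increment value.
 */
static uint64_t counter;

static inline uint64_t next_cookie(void)
{
	/* atomic64_fetch_add: returns the old value, so each caller
	 * observes a unique cookie.
	 */
	return __sync_fetch_and_add(&counter, 1);
}
```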

2020-12-03 19:16:58

by Brendan Jackman

[permalink] [raw]
Subject: [PATCH bpf-next v3 11/14] tools build: Implement feature check for BPF atomics in Clang

Change-Id: Ia15bb76f7152fff2974e38242d7430ce2987a71e

Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Quentin Monnet <[email protected]>
Cc: "Frank Ch. Eigler" <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Thomas Hebb <[email protected]>
Change-Id: Ie2c3832eaf050d627764071d1927c7546e7c4b4b
Signed-off-by: Brendan Jackman <[email protected]>
---
tools/build/feature/Makefile | 4 ++++
tools/build/feature/test-clang-bpf-atomics.c | 9 +++++++++
2 files changed, 13 insertions(+)
create mode 100644 tools/build/feature/test-clang-bpf-atomics.c

diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
index cdde783f3018..81370d7fa193 100644
--- a/tools/build/feature/Makefile
+++ b/tools/build/feature/Makefile
@@ -70,6 +70,7 @@ FILES= \
test-libaio.bin \
test-libzstd.bin \
test-clang-bpf-co-re.bin \
+ test-clang-bpf-atomics.bin \
test-file-handle.bin \
test-libpfm4.bin

@@ -331,6 +332,9 @@ $(OUTPUT)test-clang-bpf-co-re.bin:
$(CLANG) -S -g -target bpf -o - $(patsubst %.bin,%.c,$(@F)) | \
grep BTF_KIND_VAR

+$(OUTPUT)test-clang-bpf-atomics.bin:
+ $(CLANG) -S -g -target bpf -mcpu=v3 -Werror=implicit-function-declaration -o - $(patsubst %.bin,%.c,$(@F)) 2>&1
+
$(OUTPUT)test-file-handle.bin:
$(BUILD)

diff --git a/tools/build/feature/test-clang-bpf-atomics.c b/tools/build/feature/test-clang-bpf-atomics.c
new file mode 100644
index 000000000000..8b5fcdd4ba6f
--- /dev/null
+++ b/tools/build/feature/test-clang-bpf-atomics.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Google
+
+int x = 0;
+
+int foo(void)
+{
+ return __sync_val_compare_and_swap(&x, 1, 2);
+}
--
2.29.2.454.gaff20da3a2-goog

2020-12-03 19:17:15

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 00/14] Atomics for eBPF

On Thu, Dec 03, 2020 at 04:02:31PM +0000, Brendan Jackman wrote:
[...]
> [1] Previous patchset:
> https://lore.kernel.org/bpf/[email protected]/

Sorry, bogus link. That's v1, here's v2:
https://lore.kernel.org/bpf/[email protected]/

2020-12-03 21:08:01

by Andrii Nakryiko

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
>
> This is somewhat cargo-culted from the libbpf build. It will be used
> in a subsequent patch to query for Clang BPF atomics support.
>
> Change-Id: I9318a1702170eb752acced35acbb33f45126c44c

Haven't seen this before. What's this Change-Id business?

> Signed-off-by: Brendan Jackman <[email protected]>
> ---
> tools/testing/selftests/bpf/.gitignore | 1 +
> tools/testing/selftests/bpf/Makefile | 38 ++++++++++++++++++++++++++
> 2 files changed, 39 insertions(+)

All this just to detect the support for clang atomics?... Let's not
pull in the entire feature-detection framework unnecessarily,
selftests Makefile is complicated enough without that.

[...]

2020-12-03 21:10:22

by Andrii Nakryiko

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 11/14] tools build: Implement feature check for BPF atomics in Clang

On Thu, Dec 3, 2020 at 8:08 AM Brendan Jackman <[email protected]> wrote:
>
> Change-Id: Ia15bb76f7152fff2974e38242d7430ce2987a71e
>

See recent discussion on KP's patch set. There needs to be a commit
message, even if it's just a copy/paste of subject line. But see also
my other reply, I'm not sure it's worth it to do it this way for
selftests.

> Cc: Arnaldo Carvalho de Melo <[email protected]>
> Cc: Jiri Olsa <[email protected]>
> Cc: Quentin Monnet <[email protected]>
> Cc: "Frank Ch. Eigler" <[email protected]>
> Cc: Stephane Eranian <[email protected]>
> Cc: Namhyung Kim <[email protected]>
> Cc: Thomas Hebb <[email protected]>
> Change-Id: Ie2c3832eaf050d627764071d1927c7546e7c4b4b
> Signed-off-by: Brendan Jackman <[email protected]>
> ---
> tools/build/feature/Makefile | 4 ++++
> tools/build/feature/test-clang-bpf-atomics.c | 9 +++++++++
> 2 files changed, 13 insertions(+)
> create mode 100644 tools/build/feature/test-clang-bpf-atomics.c
>
> diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
> index cdde783f3018..81370d7fa193 100644
> --- a/tools/build/feature/Makefile
> +++ b/tools/build/feature/Makefile
> @@ -70,6 +70,7 @@ FILES= \
> test-libaio.bin \
> test-libzstd.bin \
> test-clang-bpf-co-re.bin \
> + test-clang-bpf-atomics.bin \
> test-file-handle.bin \
> test-libpfm4.bin
>
> @@ -331,6 +332,9 @@ $(OUTPUT)test-clang-bpf-co-re.bin:
> $(CLANG) -S -g -target bpf -o - $(patsubst %.bin,%.c,$(@F)) | \
> grep BTF_KIND_VAR
>
> +$(OUTPUT)test-clang-bpf-atomics.bin:
> + $(CLANG) -S -g -target bpf -mcpu=v3 -Werror=implicit-function-declaration -o - $(patsubst %.bin,%.c,$(@F)) 2>&1
> +
> $(OUTPUT)test-file-handle.bin:
> $(BUILD)
>
> diff --git a/tools/build/feature/test-clang-bpf-atomics.c b/tools/build/feature/test-clang-bpf-atomics.c
> new file mode 100644
> index 000000000000..8b5fcdd4ba6f
> --- /dev/null
> +++ b/tools/build/feature/test-clang-bpf-atomics.c
> @@ -0,0 +1,9 @@
> +// SPDX-License-Identifier: GPL-2.0
> +// Copyright (c) 2020 Google
> +
> +int x = 0;
> +
> +int foo(void)
> +{
> + return __sync_val_compare_and_swap(&x, 1, 2);
> +}
> --
> 2.29.2.454.gaff20da3a2-goog
>

2020-12-04 04:50:39

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 00/14] Atomics for eBPF



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> Status of the patches
> =====================
>
> Thanks for the reviews! Differences from v2->v3 [1]:
>
> * More minor fixes and naming/comment changes
>
> * Dropped atomic subtract: compilers can implement this by preceding
> an atomic add with a NEG instruction (which is what the x86 JIT did
> under the hood anyway).
>
> * Dropped the use of -mcpu=v4 in the Clang BPF command-line; there is
> no longer an architecture version bump. Instead a feature test is
> added to Kbuild - it builds a source file to check if Clang
> supports BPF atomics.
>
> * Fixed the prog_test so it no longer breaks
> test_progs-no_alu32. This requires some ifdef acrobatics to avoid
> complicating the prog_tests model where the same userspace code
> exercises both the normal and no_alu32 BPF test objects, using the
> same skeleton header.
>
> Differences from v1->v2 [1]:
>
> * Fixed mistakes in the netronome driver
>
> * Addd sub, add, or, xor operations
>
> * The above led to some refactors to keep things readable. (Maybe I
> should have just waited until I'd implemented these before starting
> the review...)
>
> * Replaced BPF_[CMP]SET | BPF_FETCH with just BPF_[CMP]XCHG, which
> include the BPF_FETCH flag
>
> * Added a bit of documentation. Suggestions welcome for more places
> to dump this info...
>
> The prog_test that's added depends on Clang/LLVM features added by
> Yonghong in https://reviews.llvm.org/D72184

Just to let you know, the above patch has been merged into llvm-project
trunk, so you do not need to manually apply it any more.

>
> This only includes a JIT implementation for x86_64 - I don't plan to
> implement JIT support myself for other architectures.
>
> Operations
> ==========
>
> This patchset adds atomic operations to the eBPF instruction set. The
> use-case that motivated this work was a trivial and efficient way to
> generate globally-unique cookies in BPF progs, but I think it's
> obvious that these features are pretty widely applicable. The
> instructions that are added here can be summarised with this list of
> kernel operations:
>
> * atomic[64]_[fetch_]add
> * atomic[64]_[fetch_]and
> * atomic[64]_[fetch_]or
> * atomic[64]_xchg
> * atomic[64]_cmpxchg
>
> The following are left out of scope for this effort:
>
> * 16 and 8 bit operations
> * Explicit memory barriers
>
> Encoding
> ========
>
> I originally planned to add new values for bpf_insn.opcode. This was
> rather unpleasant: the opcode space has holes in it but no entire
> instruction classes[2]. Yonghong Song had a better idea: use the
> immediate field of the existing STX XADD instruction to encode the
> operation. This works nicely, without breaking existing programs,
> because the immediate field is currently reserved-must-be-zero, and
> extra-nicely because BPF_ADD happens to be zero.
>
> Note that this of course makes immediate-source atomic operations
> impossible. It's hard to imagine a measurable speedup from such
> instructions, and if it existed it would certainly not benefit x86,
> which has no support for them.
>
> The BPF_OP opcode fields are re-used in the immediate, and an
> additional flag BPF_FETCH is used to mark instructions that should
> fetch a pre-modification value from memory.
>
> So, BPF_XADD is now called BPF_ATOMIC (the old name is kept to avoid
> breaking userspace builds), and where we previously had .imm = 0, we
> now have .imm = BPF_ADD (which is 0).
>
> Operands
> ========
>
> Reg-source eBPF instructions only have two operands, while these
> atomic operations have up to four. To avoid needing to encode
> additional operands, then:
>
> - One of the input registers is re-used as an output register
> (e.g. atomic_fetch_add both reads from and writes to the source
> register).
>
> - Where necessary (i.e. for cmpxchg) , R0 is "hard-coded" as one of
> the operands.
>
> This approach also allows the new eBPF instructions to map directly
> to single x86 instructions.
>
> [1] Previous patchset:
> https://lore.kernel.org/bpf/[email protected]/
>
> [2] Visualisation of eBPF opcode space:
> https://gist.github.com/bjackman/00fdad2d5dfff601c1918bc29b16e778
>
[...]
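The encoding described in the quoted cover letter can be sanity-checked numerically. The sketch below is an assumption-laden illustration: the constant values match include/uapi/linux/bpf.h, but the struct layout is simplified (the real struct bpf_insn packs dst_reg/src_reg into 4-bit fields):

```c
#include <assert.h>
#include <stdint.h>

/* Opcode constants, values as in include/uapi/linux/bpf.h */
#define BPF_STX		0x03
#define BPF_DW		0x18
#define BPF_ATOMIC	0xc0	/* formerly BPF_XADD */
#define BPF_ADD		0x00
#define BPF_FETCH	0x01

/* Simplified instruction layout, just enough to show the encoding */
struct insn {
	uint8_t  code;
	uint8_t  dst_reg, src_reg;
	int16_t  off;
	int32_t  imm;
};

/* atomic64_fetch_add: the opcode reuses the old XADD slot, and the
 * operation lives in the previously reserved-must-be-zero immediate.
 */
static inline struct insn fetch_add64(uint8_t dst, uint8_t src, int16_t off)
{
	return (struct insn){
		.code = BPF_STX | BPF_DW | BPF_ATOMIC,
		.dst_reg = dst, .src_reg = src, .off = off,
		.imm = BPF_ADD | BPF_FETCH,
	};
}
```

Because BPF_ADD is zero, a legacy XADD instruction (``.imm = 0``) decodes as an atomic add with no fetch, which is exactly the backward-compatibility property the cover letter relies on.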

2020-12-04 04:54:49

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 06/14] bpf: Move BPF_STX reserved field check into BPF_STX verifier code



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> I can't find a reason why this code is in resolve_pseudo_ldimm64;
> since I'll be modifying it in a subsequent commit, tidy it up.
>
> Change-Id: I3410469270f4889a3af67612bd6c2e7979ab4da1
> Signed-off-by: Brendan Jackman <[email protected]>

Acked-by: Yonghong Song <[email protected]>

2020-12-04 04:55:02

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 05/14] bpf: Rename BPF_XADD and prepare to encode other atomics in .imm



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> A subsequent patch will add additional atomic operations. These new
> operations will use the same opcode field as the existing XADD, with
> the immediate discriminating different operations.
>
> In preparation, rename the instruction mode BPF_ATOMIC and start
> calling the zero immediate BPF_ADD.
>
> This is possible (doesn't break existing valid BPF progs) because the
> immediate field is currently reserved MBZ and BPF_ADD is zero.
>
> All uses are removed from the tree but the BPF_XADD definition is
> kept around to avoid breaking builds for people including kernel
> headers.
>
> Signed-off-by: Brendan Jackman <[email protected]>

Acked-by: Yonghong Song <[email protected]>

> Change-Id: Ib78f54acba37f7196cbf6c35ffa1c40805cb0d87

As pointed out by Andrii earlier, this 'Change-Id' is weird. I didn't
see it in other submitted patches.

2020-12-04 05:07:33

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 07/14] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> This value can be set in bpf_insn.imm, for BPF_ATOMIC instructions,

it is not clear what "this value" means here.
Maybe be more specific, using "The BPF_FETCH field"?

> in order to have the previous value of the atomically-modified memory
> location loaded into the src register after an atomic op is carried
> out.
>
> Suggested-by: Yonghong Song <[email protected]>
> Signed-off-by: Brendan Jackman <[email protected]>

Ack with the above nit.

Acked-by: Yonghong Song <[email protected]>

2020-12-04 05:33:19

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 07/14] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> This value can be set in bpf_insn.imm, for BPF_ATOMIC instructions,
> in order to have the previous value of the atomically-modified memory
> location loaded into the src register after an atomic op is carried
> out.
>
> Suggested-by: Yonghong Song <[email protected]>
> Signed-off-by: Brendan Jackman <[email protected]>
> Change-Id: I649ad48edb565a32ccdf72924ffe96a8c8da57ad
> ---
> arch/x86/net/bpf_jit_comp.c | 4 ++++
> include/linux/filter.h | 9 +++++++++
> include/uapi/linux/bpf.h | 3 +++
> kernel/bpf/core.c | 13 +++++++++++++
> kernel/bpf/disasm.c | 7 +++++++
> kernel/bpf/verifier.c | 35 ++++++++++++++++++++++++----------
> tools/include/linux/filter.h | 10 ++++++++++
> tools/include/uapi/linux/bpf.h | 3 +++
> 8 files changed, 74 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 5e5a132b3d52..88cb09fa3bfb 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -827,6 +827,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
> /* lock *(u32/u64*)(dst_reg + off) <op>= src_reg */
> EMIT1(simple_alu_opcodes[atomic_op]);
> break;
> + case BPF_ADD | BPF_FETCH:
> + /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
> + EMIT2(0x0F, 0xC1);
> + break;
> default:
> pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
> return -EFAULT;
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index ce19988fb312..4e04d0fc454f 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -270,6 +270,15 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> .imm = BPF_ADD })
> #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
>
> +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
> +
> +#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_ADD | BPF_FETCH })
>
> /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
>
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index d0adc48db43c..025e377e7229 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -44,6 +44,9 @@
> #define BPF_CALL 0x80 /* function call */
> #define BPF_EXIT 0x90 /* function return */
>
> +/* atomic op type fields (stored in immediate) */
> +#define BPF_FETCH 0x01 /* fetch previous value into src reg */
> +
> /* Register numbers */
> enum {
> BPF_REG_0 = 0,
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 3abc6b250b18..61e93eb7d363 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1624,16 +1624,29 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
> atomic_add((u32) SRC, (atomic_t *)(unsigned long)
> (DST + insn->off));
> + break;
> + case BPF_ADD | BPF_FETCH:
> + SRC = (u32) atomic_fetch_add(
> + (u32) SRC,
> + (atomic_t *)(unsigned long) (DST + insn->off));
> + break;
> default:
> goto default_label;
> }
> CONT;
> +
> STX_ATOMIC_DW:
> switch (IMM) {
> case BPF_ADD:
> /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
> atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
> (DST + insn->off));
> + break;
> + case BPF_ADD | BPF_FETCH:
> + SRC = (u64) atomic64_fetch_add(
> + (u64) SRC,
> + (atomic64_t *)(s64) (DST + insn->off));
> + break;
> default:
> goto default_label;
> }
> diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
> index 37c8d6e9b4cc..3ee2246a52ef 100644
> --- a/kernel/bpf/disasm.c
> +++ b/kernel/bpf/disasm.c
> @@ -160,6 +160,13 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
> bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> insn->dst_reg, insn->off,
> insn->src_reg);
> + } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> + insn->imm == (BPF_ADD | BPF_FETCH)) {
> + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n",

We should not dereference here (no leading *), right? Since the
input is actually an address. Something like below?
r2 = atomic[64]_fetch_add((u64/u32 *)(r3 +40), r2)

> + insn->code, insn->src_reg,
> + BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> + bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> + insn->dst_reg, insn->off, insn->src_reg);
> } else {
> verbose(cbs->private_data, "BUG_%02x\n", insn->code);
> }
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index e8b41ccdfb90..a68adbcee370 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3602,7 +3602,11 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> {
> int err;
>
> - if (insn->imm != BPF_ADD) {
> + switch (insn->imm) {
> + case BPF_ADD:
> + case BPF_ADD | BPF_FETCH:
> + break;
> + default:
> verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
> return -EINVAL;
> }
> @@ -3631,7 +3635,7 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> is_pkt_reg(env, insn->dst_reg) ||
> is_flow_key_reg(env, insn->dst_reg) ||
> is_sk_reg(env, insn->dst_reg)) {
> - verbose(env, "atomic stores into R%d %s is not allowed\n",
> + verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
> insn->dst_reg,
> reg_type_str[reg_state(env, insn->dst_reg)->type]);
> return -EACCES;
> @@ -3644,8 +3648,20 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> return err;
>
> /* check whether we can write into the same memory */
> - return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
> - BPF_SIZE(insn->code), BPF_WRITE, -1, true);
> + err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
> + BPF_SIZE(insn->code), BPF_WRITE, -1, true);
> + if (err)
> + return err;
> +
> + if (!(insn->imm & BPF_FETCH))
> + return 0;
> +
> + /* check and record load of old value into src reg */
> + err = check_reg_arg(env, insn->src_reg, DST_OP);
> + if (err)
> + return err;
> +
> + return 0;
> }
>
> static int __check_stack_boundary(struct bpf_verifier_env *env, u32 regno,
> @@ -9501,12 +9517,6 @@ static int do_check(struct bpf_verifier_env *env)
> } else if (class == BPF_STX) {
> enum bpf_reg_type *prev_dst_type, dst_reg_type;
>
> - if (((BPF_MODE(insn->code) != BPF_MEM &&
> - BPF_MODE(insn->code) != BPF_ATOMIC) || insn->imm != 0)) {
> - verbose(env, "BPF_STX uses reserved fields\n");
> - return -EINVAL;
> - }
> -
> if (BPF_MODE(insn->code) == BPF_ATOMIC) {
> err = check_atomic(env, env->insn_idx, insn);
> if (err)
> @@ -9515,6 +9525,11 @@ static int do_check(struct bpf_verifier_env *env)
> continue;
> }
>
> + if (BPF_MODE(insn->code) != BPF_MEM || insn->imm != 0) {
> + verbose(env, "BPF_STX uses reserved fields\n");
> + return -EINVAL;
> + }
> +
> /* check src1 operand */
> err = check_reg_arg(env, insn->src_reg, SRC_OP);
> if (err)
> diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
> index 95ff51d97f25..ac7701678e1a 100644
> --- a/tools/include/linux/filter.h
> +++ b/tools/include/linux/filter.h
> @@ -180,6 +180,16 @@
> .imm = BPF_ADD })
> #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
>
> +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */

Maybe src_reg = atomic_fetch_add(dst_reg + off, src_reg)?

> +
> +#define BPF_ATOMIC_FETCH_ADD(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_ADD | BPF_FETCH })
> +
> /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
>
> #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index d0adc48db43c..025e377e7229 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -44,6 +44,9 @@
> #define BPF_CALL 0x80 /* function call */
> #define BPF_EXIT 0x90 /* function return */
>
> +/* atomic op type fields (stored in immediate) */
> +#define BPF_FETCH 0x01 /* fetch previous value into src reg */
> +
> /* Register numbers */
> enum {
> BPF_REG_0 = 0,
>

2020-12-04 05:38:37

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 08/14] bpf: Add instructions for atomic_[cmp]xchg



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> This adds two atomic opcodes, both of which include the BPF_FETCH
> flag. XCHG without the BPF_FETCh flag would naturally encode

BPF_FETCh => BPF_FETCH

> atomic_set. This is not supported because it would be of limited
> value to userspace (it doesn't imply any barriers). CMPXCHG without
> BPF_FETCH would be an atomic compare-and-write. We don't have such
> an operation in the kernel so it isn't provided to BPF either.
>
> There are two significant design decisions made for the CMPXCHG
> instruction:
>
> - To solve the issue that this operation fundamentally has 3
> operands, but we only have two register fields. Therefore the
> operand we compare against (the kernel's API calls it 'old') is
> hard-coded to be R0. x86 has similar design (and A64 doesn't
> have this problem).
>
> A potential alternative might be to encode the other operand's
> register number in the immediate field.
>
> - The kernel's atomic_cmpxchg returns the old value, while the C11
> userspace APIs return a boolean indicating the comparison
> result. Which should BPF do? A64 returns the old value. x86 returns
> the old value in the hard-coded register (and also sets a
> flag). That means return-old-value is easier to JIT.
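The return-convention trade-off described above can be sketched in C11 (illustrative code only; the function name is made up): the kernel's atomic_cmpxchg() always returns the old value, whereas the C11 API returns a bool and reports the old value through a side channel.

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Models the kernel-style (and BPF) convention: always return the
 * pre-operation value. The caller compares the result against 'old'
 * to learn whether the swap happened. C11's
 * atomic_compare_exchange_strong() instead returns a bool directly,
 * writing the current value into 'expected' on failure.
 */
static uint64_t kernel_style_cmpxchg(_Atomic uint64_t *addr,
				     uint64_t old, uint64_t new)
{
	uint64_t expected = old;

	atomic_compare_exchange_strong(addr, &expected, new);
	return expected; /* pre-operation value, success or not */
}
```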
>
> Signed-off-by: Brendan Jackman <[email protected]>

Ack with minor comments in the above and below.

Acked-by: Yonghong Song <[email protected]>

> Change-Id: I3f19ad867dfd08515eecf72674e6fdefe28424bb
> ---
> arch/x86/net/bpf_jit_comp.c | 8 ++++++++
> include/linux/filter.h | 20 ++++++++++++++++++++
> include/uapi/linux/bpf.h | 4 +++-
> kernel/bpf/core.c | 20 ++++++++++++++++++++
> kernel/bpf/disasm.c | 15 +++++++++++++++
> kernel/bpf/verifier.c | 19 +++++++++++++++++--
> tools/include/linux/filter.h | 20 ++++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 4 +++-
> 8 files changed, 106 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 88cb09fa3bfb..7d29bc3bb4ff 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -831,6 +831,14 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
> /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
> EMIT2(0x0F, 0xC1);
> break;
> + case BPF_XCHG:
> + /* src_reg = atomic_xchg(*(u32/u64*)(dst_reg + off), src_reg); */

src_reg = atomic_xchg((u32/u64*)(dst_reg + off), src_reg)?

> + EMIT1(0x87);
> + break;
> + case BPF_CMPXCHG:
> + /* r0 = atomic_cmpxchg(*(u32/u64*)(dst_reg + off), r0, src_reg); */

r0 = atomic_cmpxchg((u32/u64*)(dst_reg + off), r0, src_reg)?

> + EMIT2(0x0F, 0xB1);
> + break;
> default:
> pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
> return -EFAULT;
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 4e04d0fc454f..6186280715ed 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> .off = OFF, \
> .imm = BPF_ADD | BPF_FETCH })
>
> +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */

src_reg = atomic_xchg(dst_reg + off, src_reg)?

> +
> +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_XCHG })
> +
> +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */

r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg)?

> +
> +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_CMPXCHG })
> +
> /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
>
> #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 025e377e7229..53334530cc81 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -45,7 +45,9 @@
> #define BPF_EXIT 0x90 /* function return */
>
> /* atomic op type fields (stored in immediate) */
> -#define BPF_FETCH 0x01 /* fetch previous value into src reg */
> +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
> +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
> +#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */
>
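For reference, the immediate encodings that fall out of the defines above (values copied from the patch; the comments are mine): both exchange opcodes carry the fetch bit, since they always load the old value into a register.

```c
/*
 * Sanity sketch of the atomic imm encodings. With BPF_FETCH = 0x01,
 * BPF_XCHG encodes as 0xe1 and BPF_CMPXCHG as 0xf1, so a test for
 * (imm & BPF_FETCH) covers both exchange ops as well as the
 * BPF_<op> | BPF_FETCH forms.
 */
#define BPF_FETCH	0x01
#define BPF_XCHG	(0xe0 | BPF_FETCH)
#define BPF_CMPXCHG	(0xf0 | BPF_FETCH)
```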
> /* Register numbers */
> enum {
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 61e93eb7d363..28f960bc2e30 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1630,6 +1630,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> (u32) SRC,
> (atomic_t *)(unsigned long) (DST + insn->off));
> break;
> + case BPF_XCHG:
> + SRC = (u32) atomic_xchg(
> + (atomic_t *)(unsigned long) (DST + insn->off),
> + (u32) SRC);
> + break;
> + case BPF_CMPXCHG:
> + BPF_R0 = (u32) atomic_cmpxchg(
> + (atomic_t *)(unsigned long) (DST + insn->off),
> + (u32) BPF_R0, (u32) SRC);
> + break;
> default:
> goto default_label;
> }
> @@ -1647,6 +1657,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> (u64) SRC,
> (atomic64_t *)(s64) (DST + insn->off));
> break;
> + case BPF_XCHG:
> + SRC = (u64) atomic64_xchg(
> + (atomic64_t *)(u64) (DST + insn->off),
> + (u64) SRC);
> + break;
> + case BPF_CMPXCHG:
> + BPF_R0 = (u64) atomic64_cmpxchg(
> + (atomic64_t *)(u64) (DST + insn->off),
> + (u64) BPF_R0, (u64) SRC);
> + break;
> default:
> goto default_label;
> }
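A small C11 model of the BPF_XCHG interpreter case above (illustrative names, not kernel API): the new value comes from SRC, and the old value is stored back into SRC, mirroring SRC = atomic64_xchg(...).

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Hypothetical model of the interpreter's BPF_XCHG case: atomically
 * swap *addr with *src, leaving the old memory value in *src.
 */
static void bpf_xchg64(_Atomic uint64_t *addr, uint64_t *src)
{
	*src = atomic_exchange(addr, *src);
}
```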
> diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
> index 3ee2246a52ef..18357ea9a17d 100644
> --- a/kernel/bpf/disasm.c
> +++ b/kernel/bpf/disasm.c
> @@ -167,6 +167,21 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
> BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> insn->dst_reg, insn->off, insn->src_reg);
> + } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> + insn->imm == BPF_CMPXCHG) {
> + verbose(cbs->private_data, "(%02x) r0 = atomic%s_cmpxchg(*(%s *)(r%d %+d), r0, r%d)\n",

(%02x) r0 = atomic%s_cmpxchg((%s *)(r%d %+d), r0, r%d)?

> + insn->code,
> + BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> + bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> + insn->dst_reg, insn->off,
> + insn->src_reg);
> + } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> + insn->imm == BPF_XCHG) {
> + verbose(cbs->private_data, "(%02x) r%d = atomic%s_xchg(*(%s *)(r%d %+d), r%d)\n",

(%02x) r%d = atomic%s_xchg((%s *)(r%d %+d), r%d)?

> + insn->code, insn->src_reg,
> + BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> + bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> + insn->dst_reg, insn->off, insn->src_reg);
> } else {
> verbose(cbs->private_data, "BUG_%02x\n", insn->code);
> }
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a68adbcee370..ccf4315e54e7 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3601,10 +3601,13 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
> static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
> {
> int err;
> + int load_reg;

nit: not a big deal but maybe put this definition before 'int err' to
maintain reverse christmas tree coding style.

>
> switch (insn->imm) {
> case BPF_ADD:
> case BPF_ADD | BPF_FETCH:
> + case BPF_XCHG:
> + case BPF_CMPXCHG:
> break;
> default:
> verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
> @@ -3626,6 +3629,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> if (err)
> return err;
>
> + if (insn->imm == BPF_CMPXCHG) {
> + /* Check comparison of R0 with memory location */
> + err = check_reg_arg(env, BPF_REG_0, SRC_OP);
> + if (err)
> + return err;
> + }
> +
> if (is_pointer_value(env, insn->src_reg)) {
> verbose(env, "R%d leaks addr into mem\n", insn->src_reg);
> return -EACCES;
> @@ -3656,8 +3666,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> if (!(insn->imm & BPF_FETCH))
> return 0;
>
> - /* check and record load of old value into src reg */
> - err = check_reg_arg(env, insn->src_reg, DST_OP);
> + if (insn->imm == BPF_CMPXCHG)
> + load_reg = BPF_REG_0;
> + else
> + load_reg = insn->src_reg;
> +
> + /* check and record load of old value */
> + err = check_reg_arg(env, load_reg, DST_OP);
> if (err)
> return err;
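The register-selection rule implemented above reduces to a tiny pure function (a sketch with stand-in constants, not the verifier's actual code): for CMPXCHG the fetched old value lands in R0, for every other BPF_FETCH operation it lands in the instruction's src_reg.

```c
/*
 * Illustrative stand-ins; the real values come from uapi/linux/bpf.h
 * (BPF_CMPXCHG == 0xf1, BPF_REG_0 == 0).
 */
enum { BPF_REG_0 = 0 };
#define BPF_CMPXCHG 0xf1

/* Which register receives the old value for a BPF_FETCH operation? */
static int fetch_dest_reg(int imm, int src_reg)
{
	return imm == BPF_CMPXCHG ? BPF_REG_0 : src_reg;
}
```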
>
> diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
> index ac7701678e1a..ea99bd17d003 100644
> --- a/tools/include/linux/filter.h
> +++ b/tools/include/linux/filter.h
> @@ -190,6 +190,26 @@
> .off = OFF, \
> .imm = BPF_ADD | BPF_FETCH })
>
> +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */

src_reg = atomic_xchg(dst_reg + off, src_reg)?

> +
> +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_XCHG })
> +
> +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */

r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg)?

> +
> +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_CMPXCHG })
> +
> /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
>
> #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 025e377e7229..53334530cc81 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -45,7 +45,9 @@
> #define BPF_EXIT 0x90 /* function return */
>
> /* atomic op type fields (stored in immediate) */
> -#define BPF_FETCH 0x01 /* fetch previous value into src reg */
> +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
> +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
> +#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */
>
> /* Register numbers */
> enum {
>

2020-12-04 06:33:16

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 09/14] bpf: Pull out a macro for interpreting atomic ALU operations



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> Since the atomic operations that are added in subsequent commits are
> all isomorphic with BPF_ADD, pull out a macro to avoid the
> interpreter becoming dominated by lines of atomic-related code.
>
> Note that this sacrifices interpreter performance (combining
> STX_ATOMIC_W and STX_ATOMIC_DW into single switch case means that we
> need an extra conditional branch to differentiate them) in favour of
> compact and (relatively!) simple C code.
>
> Change-Id: I8cae5b66e75f34393de6063b91c05a8006fdd9e6
> Signed-off-by: Brendan Jackman <[email protected]>

Ack with a minor suggestion below.

Acked-by: Yonghong Song <[email protected]>

> ---
> kernel/bpf/core.c | 79 +++++++++++++++++++++++------------------------
> 1 file changed, 38 insertions(+), 41 deletions(-)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 28f960bc2e30..498d3f067be7 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1618,55 +1618,52 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> LDX_PROBE(DW, 8)
> #undef LDX_PROBE
>
> - STX_ATOMIC_W:
> - switch (IMM) {
> - case BPF_ADD:
> - /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
> - atomic_add((u32) SRC, (atomic_t *)(unsigned long)
> - (DST + insn->off));
> - break;
> - case BPF_ADD | BPF_FETCH:
> - SRC = (u32) atomic_fetch_add(
> - (u32) SRC,
> - (atomic_t *)(unsigned long) (DST + insn->off));
> - break;
> - case BPF_XCHG:
> - SRC = (u32) atomic_xchg(
> - (atomic_t *)(unsigned long) (DST + insn->off),
> - (u32) SRC);
> - break;
> - case BPF_CMPXCHG:
> - BPF_R0 = (u32) atomic_cmpxchg(
> - (atomic_t *)(unsigned long) (DST + insn->off),
> - (u32) BPF_R0, (u32) SRC);
> +#define ATOMIC(BOP, KOP) \

ATOMIC is a little bit generic. Maybe ATOMIC_FETCH_BOP?

> + case BOP: \
> + if (BPF_SIZE(insn->code) == BPF_W) \
> + atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \
> + (DST + insn->off)); \
> + else \
> + atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \
> + (DST + insn->off)); \
> + break; \
> + case BOP | BPF_FETCH: \
> + if (BPF_SIZE(insn->code) == BPF_W) \
> + SRC = (u32) atomic_fetch_##KOP( \
> + (u32) SRC, \
> + (atomic_t *)(unsigned long) (DST + insn->off)); \
> + else \
> + SRC = (u64) atomic64_fetch_##KOP( \
> + (u64) SRC, \
> + (atomic64_t *)(s64) (DST + insn->off)); \
> break;
> - default:
> - goto default_label;
> - }
> - CONT;
>
> STX_ATOMIC_DW:
> + STX_ATOMIC_W:
> switch (IMM) {
> - case BPF_ADD:
> - /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
> - atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
> - (DST + insn->off));
> - break;
> - case BPF_ADD | BPF_FETCH:
> - SRC = (u64) atomic64_fetch_add(
> - (u64) SRC,
> - (atomic64_t *)(s64) (DST + insn->off));
> - break;
> + ATOMIC(BPF_ADD, add)
> +
> case BPF_XCHG:
> - SRC = (u64) atomic64_xchg(
> - (atomic64_t *)(u64) (DST + insn->off),
> - (u64) SRC);
> + if (BPF_SIZE(insn->code) == BPF_W)
> + SRC = (u32) atomic_xchg(
> + (atomic_t *)(unsigned long) (DST + insn->off),
> + (u32) SRC);
> + else
> + SRC = (u64) atomic64_xchg(
> + (atomic64_t *)(u64) (DST + insn->off),
> + (u64) SRC);
> break;
> case BPF_CMPXCHG:
> - BPF_R0 = (u64) atomic64_cmpxchg(
> - (atomic64_t *)(u64) (DST + insn->off),
> - (u64) BPF_R0, (u64) SRC);
> + if (BPF_SIZE(insn->code) == BPF_W)
> + BPF_R0 = (u32) atomic_cmpxchg(
> + (atomic_t *)(unsigned long) (DST + insn->off),
> + (u32) BPF_R0, (u32) SRC);
> + else
> + BPF_R0 = (u64) atomic64_cmpxchg(
> + (atomic64_t *)(u64) (DST + insn->off),
> + (u64) BPF_R0, (u64) SRC);
> break;
> +
> default:
> goto default_label;
> }
>

2020-12-04 06:47:19

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> This adds instructions for
>
> atomic[64]_[fetch_]and
> atomic[64]_[fetch_]or
> atomic[64]_[fetch_]xor
>
> All these operations are isomorphic enough to implement with the same
> verifier, interpreter, and x86 JIT code, hence being a single commit.
>
> The main interesting thing here is that x86 doesn't directly support
> the fetch_ version these operations, so we need to generate a CMPXCHG
> loop in the JIT. This requires the use of two temporary registers,
> IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
>
> Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
> Signed-off-by: Brendan Jackman <[email protected]>
> ---
> arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
> include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> kernel/bpf/core.c | 5 ++-
> kernel/bpf/disasm.c | 21 ++++++++++---
> kernel/bpf/verifier.c | 6 ++++
> tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> 6 files changed, 196 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 7d29bc3bb4ff..4ab0f821326c 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -824,6 +824,10 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
> /* emit opcode */
> switch (atomic_op) {
> case BPF_ADD:
> + case BPF_SUB:
> + case BPF_AND:
> + case BPF_OR:
> + case BPF_XOR:
> /* lock *(u32/u64*)(dst_reg + off) <op>= src_reg */
> EMIT1(simple_alu_opcodes[atomic_op]);
> break;
> @@ -1306,8 +1310,52 @@ st: if (is_imm8(insn->off))
>
> case BPF_STX | BPF_ATOMIC | BPF_W:
> case BPF_STX | BPF_ATOMIC | BPF_DW:
> + if (insn->imm == (BPF_AND | BPF_FETCH) ||
> + insn->imm == (BPF_OR | BPF_FETCH) ||
> + insn->imm == (BPF_XOR | BPF_FETCH)) {
> + u8 *branch_target;
> + bool is64 = BPF_SIZE(insn->code) == BPF_DW;
> +
> + /*
> + * Can't be implemented with a single x86 insn.
> + * Need to do a CMPXCHG loop.
> + */
> +
> + /* Will need RAX as a CMPXCHG operand so save R0 */
> + emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
> + branch_target = prog;
> + /* Load old value */
> + emit_ldx(&prog, BPF_SIZE(insn->code),
> + BPF_REG_0, dst_reg, insn->off);
> + /*
> + * Perform the (commutative) operation locally,
> + * put the result in the AUX_REG.
> + */
> + emit_mov_reg(&prog, is64, AUX_REG, BPF_REG_0);
> + maybe_emit_mod(&prog, AUX_REG, src_reg, is64);
> + EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)],
> + add_2reg(0xC0, AUX_REG, src_reg));
> + /* Attempt to swap in new value */
> + err = emit_atomic(&prog, BPF_CMPXCHG,
> + dst_reg, AUX_REG, insn->off,
> + BPF_SIZE(insn->code));
> + if (WARN_ON(err))
> + return err;
> + /*
> + * ZF tells us whether we won the race. If it's
> + * cleared we need to try again.
> + */
> + EMIT2(X86_JNE, -(prog - branch_target) - 2);
> + /* Return the pre-modification value */
> + emit_mov_reg(&prog, is64, src_reg, BPF_REG_0);
> + /* Restore R0 after clobbering RAX */
> + emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
> + break;
> +
> + }
> +
> err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
> - insn->off, BPF_SIZE(insn->code));
> + insn->off, BPF_SIZE(insn->code));
> if (err)
> return err;
> break;
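The CMPXCHG loop the JIT emits above corresponds to this generic pattern, sketched here in C11 for atomic_fetch_or (illustrative code, not the JIT's output): load the old value, compute the result locally, and retry the compare-exchange until it wins the race, returning the pre-modification value as BPF_FETCH requires.

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Emulate atomic_fetch_or() with a compare-exchange loop, as done by
 * the x86 JIT for the fetch_ variants of AND/OR/XOR. On CAS failure,
 * 'old' is refreshed with the current memory value, so each retry
 * recomputes old | src from fresh data.
 */
static uint64_t fetch_or64(_Atomic uint64_t *addr, uint64_t src)
{
	uint64_t old = atomic_load(addr);

	while (!atomic_compare_exchange_weak(addr, &old, old | src))
		;
	return old; /* pre-modification value */
}
```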
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 6186280715ed..698f82897b0d 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> .off = OFF, \
> .imm = BPF_ADD | BPF_FETCH })
>
> +/* Atomic memory and, *(uint *)(dst_reg + off16) &= src_reg */
> +
> +#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_AND })
> +
> +/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */

src_reg = atomic_fetch_and(dst_reg + off, src_reg)?

> +
> +#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_AND | BPF_FETCH })
> +
> +/* Atomic memory or, *(uint *)(dst_reg + off16) |= src_reg */
> +
> +#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_OR })
> +
> +/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */

src_reg = atomic_fetch_or(dst_reg + off, src_reg)?

> +
> +#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_OR | BPF_FETCH })
> +
> +/* Atomic memory xor, *(uint *)(dst_reg + off16) ^= src_reg */
> +
> +#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_XOR })
> +
> +/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */

src_reg = atomic_fetch_xor(dst_reg + off, src_reg)?

> +
> +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_XOR | BPF_FETCH })
> +
> /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>

Looks like BPF_ATOMIC_XOR/OR/AND/... are all similar to each other.
The same is true for BPF_ATOMIC_FETCH_XOR/OR/AND/...

I am wondering whether it makes sense to have just two macros,
BPF_ATOMIC_BOP(BOP, SIZE, DST, SRC, OFF) and
BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF), so we
end up with fewer macros?
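A rough sketch of what such a parameterized macro could look like. The struct and opcode constants below are simplified stand-ins (the real struct bpf_insn uses 4-bit register bitfields, and the constants live in (tools/)include/uapi/linux/bpf.h); only the shape of the suggestion is shown.

```c
#include <stdint.h>

/* Simplified stand-in for the uapi struct bpf_insn */
struct bpf_insn {
	uint8_t code;
	uint8_t dst_reg;
	uint8_t src_reg;
	int16_t off;
	int32_t imm;
};

#define BPF_STX		0x03
#define BPF_ATOMIC	0xc0
#define BPF_SIZE(x)	((x) & 0x18)
#define BPF_FETCH	0x01

/* One macro covering all fetch_ variants of the bitwise/add ops */
#define BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF)		\
	((struct bpf_insn) {					\
		.code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC,	\
		.dst_reg = DST,					\
		.src_reg = SRC,					\
		.off = OFF,					\
		.imm = (BOP) | BPF_FETCH })
```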

> #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 498d3f067be7..27eac4d5724c 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1642,7 +1642,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> STX_ATOMIC_W:
> switch (IMM) {
> ATOMIC(BPF_ADD, add)
> -
> + ATOMIC(BPF_AND, and)
> + ATOMIC(BPF_OR, or)
> + ATOMIC(BPF_XOR, xor)
> +#undef ATOMIC
> case BPF_XCHG:
> if (BPF_SIZE(insn->code) == BPF_W)
> SRC = (u32) atomic_xchg(
> diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
> index 18357ea9a17d..0c7c1c31a57b 100644
> --- a/kernel/bpf/disasm.c
> +++ b/kernel/bpf/disasm.c
> @@ -80,6 +80,13 @@ const char *const bpf_alu_string[16] = {
> [BPF_END >> 4] = "endian",
> };
>
> +static const char *const bpf_atomic_alu_string[16] = {
> + [BPF_ADD >> 4] = "add",
> + [BPF_AND >> 4] = "and",
> + [BPF_OR >> 4] = "or",
> + [BPF_XOR >> 4] = "xor",
> +};
> +
> static const char *const bpf_ldst_string[] = {
> [BPF_W >> 3] = "u32",
> [BPF_H >> 3] = "u16",
> @@ -154,17 +161,23 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
> insn->dst_reg,
> insn->off, insn->src_reg);
> else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> - insn->imm == BPF_ADD) {
> - verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) += r%d\n",
> + (insn->imm == BPF_ADD || insn->imm == BPF_AND ||
> + insn->imm == BPF_OR || insn->imm == BPF_XOR)) {
> + verbose(cbs->private_data, "(%02x) lock *(%s *)(r%d %+d) %s r%d\n",
> insn->code,
> bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> insn->dst_reg, insn->off,
> + bpf_alu_string[BPF_OP(insn->imm) >> 4],
> insn->src_reg);
> } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> - insn->imm == (BPF_ADD | BPF_FETCH)) {
> - verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n",

(%02x) r%d = atomic%s_fetch_add((%s *)(r%d %+d), r%d)?

> + (insn->imm == (BPF_ADD | BPF_FETCH) ||
> + insn->imm == (BPF_AND | BPF_FETCH) ||
> + insn->imm == (BPF_OR | BPF_FETCH) ||
> + insn->imm == (BPF_XOR | BPF_FETCH))) {
> + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_%s(*(%s *)(r%d %+d), r%d)\n",

(%02x) r%d = atomic%s_fetch_%s((%s *)(r%d %+d), r%d)?

> insn->code, insn->src_reg,
> BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> + bpf_atomic_alu_string[BPF_OP(insn->imm) >> 4],
> bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> insn->dst_reg, insn->off, insn->src_reg);
> } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ccf4315e54e7..dd30eb9a6c1b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3606,6 +3606,12 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> switch (insn->imm) {
> case BPF_ADD:
> case BPF_ADD | BPF_FETCH:
> + case BPF_AND:
> + case BPF_AND | BPF_FETCH:
> + case BPF_OR:
> + case BPF_OR | BPF_FETCH:
> + case BPF_XOR:
> + case BPF_XOR | BPF_FETCH:
> case BPF_XCHG:
> case BPF_CMPXCHG:
> break;
> diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
> index ea99bd17d003..b74febf83eb1 100644
> --- a/tools/include/linux/filter.h
> +++ b/tools/include/linux/filter.h
> @@ -190,6 +190,66 @@
> .off = OFF, \
> .imm = BPF_ADD | BPF_FETCH })
>
> +/* Atomic memory and, *(uint *)(dst_reg + off16) &= src_reg */
> +
> +#define BPF_ATOMIC_AND(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_AND })
> +
> +/* Atomic memory and with fetch, src_reg = atomic_fetch_and(*(dst_reg + off), src_reg); */
> +
> +#define BPF_ATOMIC_FETCH_AND(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_AND | BPF_FETCH })
> +
> +/* Atomic memory or, *(uint *)(dst_reg + off16) |= src_reg */
> +
> +#define BPF_ATOMIC_OR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_OR })
> +
> +/* Atomic memory or with fetch, src_reg = atomic_fetch_or(*(dst_reg + off), src_reg); */
> +
> +#define BPF_ATOMIC_FETCH_OR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_OR | BPF_FETCH })
> +
> +/* Atomic memory xor, *(uint *)(dst_reg + off16) ^= src_reg */
> +
> +#define BPF_ATOMIC_XOR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_XOR })
> +
> +/* Atomic memory xor with fetch, src_reg = atomic_fetch_xor(*(dst_reg + off), src_reg); */
> +
> +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
> + ((struct bpf_insn) { \
> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> + .dst_reg = DST, \
> + .src_reg = SRC, \
> + .off = OFF, \
> + .imm = BPF_XOR | BPF_FETCH })
> +
> /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>
> #define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
>

2020-12-04 07:12:15

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations



On 12/3/20 8:02 AM, Brendan Jackman wrote:
> This relies on the work done by Yonghong Song in
> https://reviews.llvm.org/D72184
>
> Note the use of a define called ENABLE_ATOMICS_TESTS: this is used
> to:
>
> - Avoid breaking the build for people on old versions of Clang
> - Avoid needing separate lists of test objects for no_alu32, where
> atomics are not supported even if Clang has the feature.
>
> The atomics_test.o BPF object is built unconditionally both for
> test_progs and test_progs-no_alu32. For test_progs, if Clang supports
> atomics, ENABLE_ATOMICS_TESTS is defined, so it includes the proper
> test code. Otherwise, progs and global vars are defined anyway, as
> stubs; this means that the skeleton user code still builds.
>
> The atomics_test.o userspace object is built once and used for both
> test_progs and test_progs-no_alu32. A variable called skip_tests is
> defined in the BPF object's data section, which tells the userspace
> object whether to skip the atomics test.
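The stubbing scheme described above can be sketched like this (variable names are illustrative, not the actual selftest symbols): the same source always defines skip_tests and the result globals, so the generated skeleton header is identical either way; only the flag's value and the prog bodies change.

```c
/*
 * If Clang supports BPF atomics, ENABLE_ATOMICS_TESTS is defined on
 * the command line and the real test progs are compiled in. Otherwise
 * the globals still exist (so the skeleton user code builds), but
 * skip_tests tells userspace to skip.
 */
#ifdef ENABLE_ATOMICS_TESTS
_Bool skip_tests = 0;
long add64_result = 0;	/* updated by the real BPF prog */
#else
_Bool skip_tests = 1;
long add64_result = 0;	/* stub: prog body compiled out */
#endif
```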
>
> Change-Id: Iecc12f35f0ded4a1dd805cce1be576e7b27917ef
> Signed-off-by: Brendan Jackman <[email protected]>
> ---
> tools/testing/selftests/bpf/Makefile | 4 +
> .../selftests/bpf/prog_tests/atomics_test.c | 262 ++++++++++++++++++
> .../selftests/bpf/progs/atomics_test.c | 154 ++++++++++
> .../selftests/bpf/verifier/atomic_and.c | 77 +++++
> .../selftests/bpf/verifier/atomic_cmpxchg.c | 96 +++++++
> .../selftests/bpf/verifier/atomic_fetch_add.c | 106 +++++++
> .../selftests/bpf/verifier/atomic_or.c | 77 +++++
> .../selftests/bpf/verifier/atomic_xchg.c | 46 +++
> .../selftests/bpf/verifier/atomic_xor.c | 77 +++++
> 9 files changed, 899 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/atomics_test.c
> create mode 100644 tools/testing/selftests/bpf/progs/atomics_test.c
> create mode 100644 tools/testing/selftests/bpf/verifier/atomic_and.c
> create mode 100644 tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
> create mode 100644 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
> create mode 100644 tools/testing/selftests/bpf/verifier/atomic_or.c
> create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xchg.c
> create mode 100644 tools/testing/selftests/bpf/verifier/atomic_xor.c
>
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index f21c4841a612..448a9eb1a56c 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -431,11 +431,15 @@ TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
> $(wildcard progs/btf_dump_test_case_*.c)
> TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
> TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
> +ifeq ($(feature-clang-bpf-atomics),1)
> + TRUNNER_BPF_CFLAGS += -DENABLE_ATOMICS_TESTS
> +endif
> TRUNNER_BPF_LDFLAGS := -mattr=+alu32
> $(eval $(call DEFINE_TEST_RUNNER,test_progs))
>
> # Define test_progs-no_alu32 test runner.
> TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
> +TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
> TRUNNER_BPF_LDFLAGS :=
> $(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> new file mode 100644
> index 000000000000..66f0ccf4f4ec
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> @@ -0,0 +1,262 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <test_progs.h>
> +
> +
> +#include "atomics_test.skel.h"
> +
> +static struct atomics_test *setup(void)
> +{
> + struct atomics_test *atomics_skel;
> + __u32 duration = 0, err;
> +
> + atomics_skel = atomics_test__open_and_load();
> + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> + return NULL;
> +
> + if (atomics_skel->data->skip_tests) {
> + printf("%s:SKIP:no ENABLE_ATOMICS_TESTS (missing Clang BPF atomics support)",
> + __func__);
> + test__skip();
> + goto err;
> + }
> +
> + err = atomics_test__attach(atomics_skel);
> + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> + goto err;
> +
> + return atomics_skel;
> +
> +err:
> + atomics_test__destroy(atomics_skel);
> + return NULL;
> +}
> +
> +static void test_add(void)
> +{
> + struct atomics_test *atomics_skel;
> + int err, prog_fd;
> + __u32 duration = 0, retval;
> +
> + atomics_skel = setup();

When running the test, I observed a noticeable delay between skel load
and skel attach. The reason is that the BPF program object file contains
multiple programs, and the above setup() tries to attach ALL of them,
while only the "add" program is actually tested below. This
unnecessarily increases test_progs' running time.

It would be best for setup() here to load and attach only the "add"
program. The libbpf API bpf_program__set_autoload() can mark a
particular program as not autoloaded, and you can then call the attach
function explicitly for that one program. This should reduce test
running time.

> + if (!atomics_skel)
> + return;
> +
> + prog_fd = bpf_program__fd(atomics_skel->progs.add);
> + err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> + NULL, NULL, &retval, &duration);
> + if (CHECK(err || retval, "test_run add",
> + "err %d errno %d retval %d duration %d\n",
> + err, errno, retval, duration))
> + goto cleanup;
> +
> + ASSERT_EQ(atomics_skel->data->add64_value, 3, "add64_value");
> + ASSERT_EQ(atomics_skel->bss->add64_result, 1, "add64_result");
> +
> + ASSERT_EQ(atomics_skel->data->add32_value, 3, "add32_value");
> + ASSERT_EQ(atomics_skel->bss->add32_result, 1, "add32_result");
> +
> + ASSERT_EQ(atomics_skel->bss->add_stack_value_copy, 3, "add_stack_value");
> + ASSERT_EQ(atomics_skel->bss->add_stack_result, 1, "add_stack_result");
> +
> + ASSERT_EQ(atomics_skel->data->add_noreturn_value, 3, "add_noreturn_value");
> +
> +cleanup:
> + atomics_test__destroy(atomics_skel);
> +}
> +
> +static void test_sub(void)
> +{
> + struct atomics_test *atomics_skel;
> + int err, prog_fd;
> + __u32 duration = 0, retval;
> +
> + atomics_skel = setup();
> + if (!atomics_skel)
> + return;
> +
[...]

2020-12-04 09:17:45

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 07/14] bpf: Add BPF_FETCH field / create atomic_fetch_add instruction

On Thu, Dec 03, 2020 at 09:27:04PM -0800, Yonghong Song wrote:
> On 12/3/20 8:02 AM, Brendan Jackman wrote:
[...]
> > diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
> > index 37c8d6e9b4cc..3ee2246a52ef 100644
> > --- a/kernel/bpf/disasm.c
> > +++ b/kernel/bpf/disasm.c
> > @@ -160,6 +160,13 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
> > bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> > insn->dst_reg, insn->off,
> > insn->src_reg);
> > + } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> > + insn->imm == (BPF_ADD | BPF_FETCH)) {
> > + verbose(cbs->private_data, "(%02x) r%d = atomic%s_fetch_add(*(%s *)(r%d %+d), r%d)\n",
>
> We should not do a dereference here (without the first *), right? Since the
> input is actually an address, something like below?
> r2 = atomic[64]_fetch_add((u64/u32 *)(r3 +40), r2)

Ah yep - thanks!

[...]
> > diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
> > index 95ff51d97f25..ac7701678e1a 100644
> > --- a/tools/include/linux/filter.h
> > +++ b/tools/include/linux/filter.h
> > @@ -180,6 +180,16 @@
> > .imm = BPF_ADD })
> > #define BPF_STX_XADD BPF_ATOMIC_ADD /* alias */
> > +/* Atomic memory add with fetch, src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
>
> Maybe src_reg = atomic_fetch_add(dst_reg + off, src_reg)?

Yep - and the same for the bitwise ops in the later patch.

2020-12-04 09:30:07

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 08/14] bpf: Add instructions for atomic_[cmp]xchg

On Thu, Dec 03, 2020 at 09:34:23PM -0800, Yonghong Song wrote:
> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > This adds two atomic opcodes, both of which include the BPF_FETCH
> > flag. XCHG without the BPF_FETCh flag would naturally encode
>
> BPF_FETCh => BPF_FETCH

Thanks, sorry I think you've already pointed that one out and I didn't fix it!

> > atomic_set. This is not supported because it would be of limited
> > value to userspace (it doesn't imply any barriers). CMPXCHG without
> > BPF_FETCH would be an atomic compare-and-write. We don't have such
> > an operation in the kernel so it isn't provided to BPF either.
> >
> > There are two significant design decisions made for the CMPXCHG
> > instruction:
> >
> > - To solve the issue that this operation fundamentally has 3
> > operands, but we only have two register fields. Therefore the
> > operand we compare against (the kernel's API calls it 'old') is
> > hard-coded to be R0. x86 has similar design (and A64 doesn't
> > have this problem).
> >
> > A potential alternative might be to encode the other operand's
> > register number in the immediate field.
> >
> > - The kernel's atomic_cmpxchg returns the old value, while the C11
> > userspace APIs return a boolean indicating the comparison
> > result. Which should BPF do? A64 returns the old value. x86 returns
> > the old value in the hard-coded register (and also sets a
> > flag). That means return-old-value is easier to JIT.
> >
> > Signed-off-by: Brendan Jackman <[email protected]>
>
> Ack with minor comments in the above and below.

Thanks, ack to all the comments.

I've run `grep -r "atomic_.*(\*" *.patch` - hopefully we're now free
of this mistake where the first arg is dereferenced in the
comments/disasm...

> Acked-by: Yonghong Song <[email protected]>
>
> > Change-Id: I3f19ad867dfd08515eecf72674e6fdefe28424bb
> > ---
> > arch/x86/net/bpf_jit_comp.c | 8 ++++++++
> > include/linux/filter.h | 20 ++++++++++++++++++++
> > include/uapi/linux/bpf.h | 4 +++-
> > kernel/bpf/core.c | 20 ++++++++++++++++++++
> > kernel/bpf/disasm.c | 15 +++++++++++++++
> > kernel/bpf/verifier.c | 19 +++++++++++++++++--
> > tools/include/linux/filter.h | 20 ++++++++++++++++++++
> > tools/include/uapi/linux/bpf.h | 4 +++-
> > 8 files changed, 106 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index 88cb09fa3bfb..7d29bc3bb4ff 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -831,6 +831,14 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
> > /* src_reg = atomic_fetch_add(*(dst_reg + off), src_reg); */
> > EMIT2(0x0F, 0xC1);
> > break;
> > + case BPF_XCHG:
> > + /* src_reg = atomic_xchg(*(u32/u64*)(dst_reg + off), src_reg); */
>
> src_reg = atomic_xchg((u32/u64*)(dst_reg + off), src_reg)?
>
> > + EMIT1(0x87);
> > + break;
> > + case BPF_CMPXCHG:
> > + /* r0 = atomic_cmpxchg(*(u32/u64*)(dst_reg + off), r0, src_reg); */
>
> r0 = atomic_cmpxchg((u32/u64*)(dst_reg + off), r0, src_reg)?
>
> > + EMIT2(0x0F, 0xB1);
> > + break;
> > default:
> > pr_err("bpf_jit: unknown atomic opcode %02x\n", atomic_op);
> > return -EFAULT;
> > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > index 4e04d0fc454f..6186280715ed 100644
> > --- a/include/linux/filter.h
> > +++ b/include/linux/filter.h
> > @@ -280,6 +280,26 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> > .off = OFF, \
> > .imm = BPF_ADD | BPF_FETCH })
> > +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>
> src_reg = atomic_xchg(dst_reg + off, src_reg)?
>
> > +
> > +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = BPF_XCHG })
> > +
> > +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */
>
> r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg)?
>
> > +
> > +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = BPF_CMPXCHG })
> > +
> > /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
> > #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 025e377e7229..53334530cc81 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -45,7 +45,9 @@
> > #define BPF_EXIT 0x90 /* function return */
> > /* atomic op type fields (stored in immediate) */
> > -#define BPF_FETCH 0x01 /* fetch previous value into src reg */
> > +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
> > +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
> > +#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */
> > /* Register numbers */
> > enum {
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 61e93eb7d363..28f960bc2e30 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -1630,6 +1630,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> > (u32) SRC,
> > (atomic_t *)(unsigned long) (DST + insn->off));
> > break;
> > + case BPF_XCHG:
> > + SRC = (u32) atomic_xchg(
> > + (atomic_t *)(unsigned long) (DST + insn->off),
> > + (u32) SRC);
> > + break;
> > + case BPF_CMPXCHG:
> > + BPF_R0 = (u32) atomic_cmpxchg(
> > + (atomic_t *)(unsigned long) (DST + insn->off),
> > + (u32) BPF_R0, (u32) SRC);
> > + break;
> > default:
> > goto default_label;
> > }
> > @@ -1647,6 +1657,16 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> > (u64) SRC,
> > (atomic64_t *)(s64) (DST + insn->off));
> > break;
> > + case BPF_XCHG:
> > + SRC = (u64) atomic64_xchg(
> > + (atomic64_t *)(u64) (DST + insn->off),
> > + (u64) SRC);
> > + break;
> > + case BPF_CMPXCHG:
> > + BPF_R0 = (u64) atomic64_cmpxchg(
> > + (atomic64_t *)(u64) (DST + insn->off),
> > + (u64) BPF_R0, (u64) SRC);
> > + break;
> > default:
> > goto default_label;
> > }
> > diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
> > index 3ee2246a52ef..18357ea9a17d 100644
> > --- a/kernel/bpf/disasm.c
> > +++ b/kernel/bpf/disasm.c
> > @@ -167,6 +167,21 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
> > BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> > bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> > insn->dst_reg, insn->off, insn->src_reg);
> > + } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> > + insn->imm == BPF_CMPXCHG) {
> > + verbose(cbs->private_data, "(%02x) r0 = atomic%s_cmpxchg(*(%s *)(r%d %+d), r0, r%d)\n",
>
> (%02x) r0 = atomic%s_cmpxchg((%s *)(r%d %+d), r0, r%d)?
>
> > + insn->code,
> > + BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> > + bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> > + insn->dst_reg, insn->off,
> > + insn->src_reg);
> > + } else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
> > + insn->imm == BPF_XCHG) {
> > + verbose(cbs->private_data, "(%02x) r%d = atomic%s_xchg(*(%s *)(r%d %+d), r%d)\n",
>
> (%02x) r%d = atomic%s_xchg((%s *)(r%d %+d), r%d)?
>
> > + insn->code, insn->src_reg,
> > + BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
> > + bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
> > + insn->dst_reg, insn->off, insn->src_reg);
> > } else {
> > verbose(cbs->private_data, "BUG_%02x\n", insn->code);
> > }
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index a68adbcee370..ccf4315e54e7 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -3601,10 +3601,13 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
> > static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
> > {
> > int err;
> > + int load_reg;
>
> nit: not a big deal but maybe put this definition before 'int err' to
> maintain reverse christmas tree coding style.
>
> > switch (insn->imm) {
> > case BPF_ADD:
> > case BPF_ADD | BPF_FETCH:
> > + case BPF_XCHG:
> > + case BPF_CMPXCHG:
> > break;
> > default:
> > verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
> > @@ -3626,6 +3629,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> > if (err)
> > return err;
> > + if (insn->imm == BPF_CMPXCHG) {
> > + /* Check comparison of R0 with memory location */
> > + err = check_reg_arg(env, BPF_REG_0, SRC_OP);
> > + if (err)
> > + return err;
> > + }
> > +
> > if (is_pointer_value(env, insn->src_reg)) {
> > verbose(env, "R%d leaks addr into mem\n", insn->src_reg);
> > return -EACCES;
> > @@ -3656,8 +3666,13 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
> > if (!(insn->imm & BPF_FETCH))
> > return 0;
> > - /* check and record load of old value into src reg */
> > - err = check_reg_arg(env, insn->src_reg, DST_OP);
> > + if (insn->imm == BPF_CMPXCHG)
> > + load_reg = BPF_REG_0;
> > + else
> > + load_reg = insn->src_reg;
> > +
> > + /* check and record load of old value */
> > + err = check_reg_arg(env, load_reg, DST_OP);
> > if (err)
> > return err;
> > diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
> > index ac7701678e1a..ea99bd17d003 100644
> > --- a/tools/include/linux/filter.h
> > +++ b/tools/include/linux/filter.h
> > @@ -190,6 +190,26 @@
> > .off = OFF, \
> > .imm = BPF_ADD | BPF_FETCH })
> > +/* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>
> src_reg = atomic_xchg(dst_reg + off, src_reg)?
>
> > +
> > +#define BPF_ATOMIC_XCHG(SIZE, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = BPF_XCHG })
> > +
> > +/* Atomic compare-exchange, r0 = atomic_cmpxchg((dst_reg + off), r0, src_reg) */
>
> r0 = atomic_cmpxchg(dst_reg + off, r0, src_reg)?
>
> > +
> > +#define BPF_ATOMIC_CMPXCHG(SIZE, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = BPF_CMPXCHG })
> > +
> > /* Memory store, *(uint *) (dst_reg + off16) = imm32 */
> > #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
> > diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> > index 025e377e7229..53334530cc81 100644
> > --- a/tools/include/uapi/linux/bpf.h
> > +++ b/tools/include/uapi/linux/bpf.h
> > @@ -45,7 +45,9 @@
> > #define BPF_EXIT 0x90 /* function return */
> > /* atomic op type fields (stored in immediate) */
> > -#define BPF_FETCH 0x01 /* fetch previous value into src reg */
> > +#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
> > +#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
> > +#define BPF_FETCH 0x01 /* not an opcode on its own, used to build others */
> > /* Register numbers */
> > enum {
> >

2020-12-04 09:32:01

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 09/14] bpf: Pull out a macro for interpreting atomic ALU operations

On Thu, Dec 03, 2020 at 10:30:18PM -0800, Yonghong Song wrote:
>
>
> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > Since the atomic operations that are added in subsequent commits are
> > all isomorphic with BPF_ADD, pull out a macro to avoid the
> > interpreter becoming dominated by lines of atomic-related code.
> >
> > Note that this sacrifices interpreter performance (combining
> > STX_ATOMIC_W and STX_ATOMIC_DW into a single switch case means that we
> > need an extra conditional branch to differentiate them) in favour of
> > compact and (relatively!) simple C code.
> >
> > Change-Id: I8cae5b66e75f34393de6063b91c05a8006fdd9e6
> > Signed-off-by: Brendan Jackman <[email protected]>
>
> Ack with a minor suggestion below.
>
> Acked-by: Yonghong Song <[email protected]>
>
> > ---
> > kernel/bpf/core.c | 79 +++++++++++++++++++++++------------------------
> > 1 file changed, 38 insertions(+), 41 deletions(-)
> >
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 28f960bc2e30..498d3f067be7 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -1618,55 +1618,52 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> > LDX_PROBE(DW, 8)
> > #undef LDX_PROBE
> > - STX_ATOMIC_W:
> > - switch (IMM) {
> > - case BPF_ADD:
> > - /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
> > - atomic_add((u32) SRC, (atomic_t *)(unsigned long)
> > - (DST + insn->off));
> > - break;
> > - case BPF_ADD | BPF_FETCH:
> > - SRC = (u32) atomic_fetch_add(
> > - (u32) SRC,
> > - (atomic_t *)(unsigned long) (DST + insn->off));
> > - break;
> > - case BPF_XCHG:
> > - SRC = (u32) atomic_xchg(
> > - (atomic_t *)(unsigned long) (DST + insn->off),
> > - (u32) SRC);
> > - break;
> > - case BPF_CMPXCHG:
> > - BPF_R0 = (u32) atomic_cmpxchg(
> > - (atomic_t *)(unsigned long) (DST + insn->off),
> > - (u32) BPF_R0, (u32) SRC);
> > +#define ATOMIC(BOP, KOP) \
>
> ATOMIC a little bit generic. Maybe ATOMIC_FETCH_BOP?

Well, it doesn't fetch in all cases, and "BOP" is intended to
differentiate it from KOP, i.e. BOP = BPF operation, KOP = kernel operation.

Could go with ATOMIC_ALU_OP?

> > + case BOP: \
> > + if (BPF_SIZE(insn->code) == BPF_W) \
> > + atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \
> > + (DST + insn->off)); \
> > + else \
> > + atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \
> > + (DST + insn->off)); \
> > + break; \
> > + case BOP | BPF_FETCH: \
> > + if (BPF_SIZE(insn->code) == BPF_W) \
> > + SRC = (u32) atomic_fetch_##KOP( \
> > + (u32) SRC, \
> > + (atomic_t *)(unsigned long) (DST + insn->off)); \
> > + else \
> > + SRC = (u64) atomic64_fetch_##KOP( \
> > + (u64) SRC, \
> > + (atomic64_t *)(s64) (DST + insn->off)); \
> > break;
> > - default:
> > - goto default_label;
> > - }
> > - CONT;
> > STX_ATOMIC_DW:
> > + STX_ATOMIC_W:
> > switch (IMM) {
> > - case BPF_ADD:
> > - /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
> > - atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
> > - (DST + insn->off));
> > - break;
> > - case BPF_ADD | BPF_FETCH:
> > - SRC = (u64) atomic64_fetch_add(
> > - (u64) SRC,
> > - (atomic64_t *)(s64) (DST + insn->off));
> > - break;
> > + ATOMIC(BPF_ADD, add)
> > +
> > case BPF_XCHG:
> > - SRC = (u64) atomic64_xchg(
> > - (atomic64_t *)(u64) (DST + insn->off),
> > - (u64) SRC);
> > + if (BPF_SIZE(insn->code) == BPF_W)
> > + SRC = (u32) atomic_xchg(
> > + (atomic_t *)(unsigned long) (DST + insn->off),
> > + (u32) SRC);
> > + else
> > + SRC = (u64) atomic64_xchg(
> > + (atomic64_t *)(u64) (DST + insn->off),
> > + (u64) SRC);
> > break;
> > case BPF_CMPXCHG:
> > - BPF_R0 = (u64) atomic64_cmpxchg(
> > - (atomic64_t *)(u64) (DST + insn->off),
> > - (u64) BPF_R0, (u64) SRC);
> > + if (BPF_SIZE(insn->code) == BPF_W)
> > + BPF_R0 = (u32) atomic_cmpxchg(
> > + (atomic_t *)(unsigned long) (DST + insn->off),
> > + (u32) BPF_R0, (u32) SRC);
> > + else
> > + BPF_R0 = (u64) atomic64_cmpxchg(
> > + (atomic64_t *)(u64) (DST + insn->off),
> > + (u64) BPF_R0, (u64) SRC);
> > break;
> > +
> > default:
> > goto default_label;
> > }
> >

2020-12-04 09:39:39

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions

On Thu, Dec 03, 2020 at 10:42:19PM -0800, Yonghong Song wrote:
>
>
> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > This adds instructions for
> >
> > atomic[64]_[fetch_]and
> > atomic[64]_[fetch_]or
> > atomic[64]_[fetch_]xor
> >
> > All these operations are isomorphic enough to implement with the same
> > verifier, interpreter, and x86 JIT code, hence being a single commit.
> >
> > The main interesting thing here is that x86 doesn't directly support
> > the fetch_ versions of these operations, so we need to generate a CMPXCHG
> > loop in the JIT. This requires the use of two temporary registers,
> > IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
> >
> > Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
> > Signed-off-by: Brendan Jackman <[email protected]>
> > ---
> > arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
> > include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> > kernel/bpf/core.c | 5 ++-
> > kernel/bpf/disasm.c | 21 ++++++++++---
> > kernel/bpf/verifier.c | 6 ++++
> > tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> > 6 files changed, 196 insertions(+), 6 deletions(-)
> >
[...]
> > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > index 6186280715ed..698f82897b0d 100644
> > --- a/include/linux/filter.h
> > +++ b/include/linux/filter.h
> > @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
[...]
> > +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
> > + ((struct bpf_insn) { \
> > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > + .dst_reg = DST, \
> > + .src_reg = SRC, \
> > + .off = OFF, \
> > + .imm = BPF_XOR | BPF_FETCH })
> > +
> > /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>
> Looks like BPF_ATOMIC_XOR/OR/AND/... all similar to each other.
> The same is for BPF_ATOMIC_FETCH_XOR/OR/AND/...
>
> I am wondering whether it makes sense to have just
> BPF_ATOMIC_BOP(BOP, SIZE, DST, SRC, OFF) and
> BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF),
> so we can have a smaller number of macros?

Hmm yeah I think that's probably a good idea, it would be consistent
with the macros for non-atomic ALU ops.

I don't think 'BOP' would be very clear though, 'ALU' might be more
obvious.

2020-12-04 09:44:22

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Thu, Dec 03, 2020 at 01:01:27PM -0800, Andrii Nakryiko wrote:
> On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
> >
> > This is somewhat cargo-culted from the libbpf build. It will be used
> > in a subsequent patch to query for Clang BPF atomics support.
> >
> > Change-Id: I9318a1702170eb752acced35acbb33f45126c44c
>
> Haven't seen this before. What's this Change-Id business?

Argh, apologies. Looks like it's time for me to adopt a less error-prone
workflow for sending patches.

(This is noise from Gerrit, which we sometimes use for internal reviews)

> > Signed-off-by: Brendan Jackman <[email protected]>
> > ---
> > tools/testing/selftests/bpf/.gitignore | 1 +
> > tools/testing/selftests/bpf/Makefile | 38 ++++++++++++++++++++++++++
> > 2 files changed, 39 insertions(+)
>
> All this just to detect the support for clang atomics?... Let's not
> pull in the entire feature-detection framework unnecessarily,
> selftests Makefile is complicated enough without that.

Then the test build would break for people who haven't updated Clang.
Is that acceptable?

I'm aware of cases where you need to be on a pretty fresh Clang for
tests to _pass_, so maybe it's fine.

2020-12-04 09:49:53

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations

On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
> On 12/3/20 8:02 AM, Brendan Jackman wrote:
[...]
> > diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > new file mode 100644
> > index 000000000000..66f0ccf4f4ec
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > @@ -0,0 +1,262 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#include <test_progs.h>
> > +
> > +
> > +#include "atomics_test.skel.h"
> > +
> > +static struct atomics_test *setup(void)
> > +{
> > + struct atomics_test *atomics_skel;
> > + __u32 duration = 0, err;
> > +
> > + atomics_skel = atomics_test__open_and_load();
> > + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> > + return NULL;
> > +
> > + if (atomics_skel->data->skip_tests) {
> > + printf("%s:SKIP:no ENABLE_ATOMICS_TESTS (missing Clang BPF atomics support)",
> > + __func__);
> > + test__skip();
> > + goto err;
> > + }
> > +
> > + err = atomics_test__attach(atomics_skel);
> > + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> > + goto err;
> > +
> > + return atomics_skel;
> > +
> > +err:
> > + atomics_test__destroy(atomics_skel);
> > + return NULL;
> > +}
> > +
> > +static void test_add(void)
> > +{
> > + struct atomics_test *atomics_skel;
> > + int err, prog_fd;
> > + __u32 duration = 0, retval;
> > +
> > + atomics_skel = setup();
>
> When running the test, I observed a noticeable delay between skel load
> and skel attach. The reason is that the BPF program object file contains
> multiple programs, and the above setup() tries to attach ALL of them,
> while only the "add" program is actually tested below. This
> unnecessarily increases test_progs' running time.
>
> It would be best for setup() here to load and attach only the "add"
> program. The libbpf API bpf_program__set_autoload() can mark a
> particular program as not autoloaded, and you can then call the attach
> function explicitly for that one program. This should reduce test
> running time.

Interesting, thanks a lot - I'll try this out next week. Maybe we can
actually load all the progs once at the beginning (i.e. in
test_atomics_test) then attach/detach each prog individually as needed...
Sorry, I haven't got much of a grip on libbpf yet.
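For reference, the suggested shape might look roughly like this - an untested sketch assuming the skeleton names used in the patch (atomics_test, a program named "add"), with error handling elided:

```c
/* Hypothetical setup_one(): load and attach only the named program. */
static struct atomics_test *setup_one(const char *name)
{
	struct atomics_test *skel;
	struct bpf_program *prog;

	skel = atomics_test__open();
	if (!skel)
		return NULL;

	/* Disable autoload for every program in the object... */
	bpf_object__for_each_program(prog, skel->obj)
		bpf_program__set_autoload(prog, false);

	/* ...then re-enable just the program under test. */
	prog = bpf_object__find_program_by_name(skel->obj, name);
	bpf_program__set_autoload(prog, true);

	if (atomics_test__load(skel))
		goto err;

	/* Attach this one program rather than the whole skeleton. */
	if (!bpf_program__attach(prog))
		goto err;

	return skel;
err:
	atomics_test__destroy(skel);
	return NULL;
}
```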

2020-12-04 15:24:43

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 09/14] bpf: Pull out a macro for interpreting atomic ALU operations



On 12/4/20 1:29 AM, Brendan Jackman wrote:
> On Thu, Dec 03, 2020 at 10:30:18PM -0800, Yonghong Song wrote:
>>
>>
>> On 12/3/20 8:02 AM, Brendan Jackman wrote:
>>> Since the atomic operations that are added in subsequent commits are
>>> all isomorphic with BPF_ADD, pull out a macro to avoid the
>>> interpreter becoming dominated by lines of atomic-related code.
>>>
>>> Note that this sacrifices interpreter performance (combining
>>> STX_ATOMIC_W and STX_ATOMIC_DW into a single switch case means that we
>>> need an extra conditional branch to differentiate them) in favour of
>>> compact and (relatively!) simple C code.
>>>
>>> Change-Id: I8cae5b66e75f34393de6063b91c05a8006fdd9e6
>>> Signed-off-by: Brendan Jackman <[email protected]>
>>
>> Ack with a minor suggestion below.
>>
>> Acked-by: Yonghong Song <[email protected]>
>>
>>> ---
>>> kernel/bpf/core.c | 79 +++++++++++++++++++++++------------------------
>>> 1 file changed, 38 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>>> index 28f960bc2e30..498d3f067be7 100644
>>> --- a/kernel/bpf/core.c
>>> +++ b/kernel/bpf/core.c
>>> @@ -1618,55 +1618,52 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
>>> LDX_PROBE(DW, 8)
>>> #undef LDX_PROBE
>>> - STX_ATOMIC_W:
>>> - switch (IMM) {
>>> - case BPF_ADD:
>>> - /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */
>>> - atomic_add((u32) SRC, (atomic_t *)(unsigned long)
>>> - (DST + insn->off));
>>> - break;
>>> - case BPF_ADD | BPF_FETCH:
>>> - SRC = (u32) atomic_fetch_add(
>>> - (u32) SRC,
>>> - (atomic_t *)(unsigned long) (DST + insn->off));
>>> - break;
>>> - case BPF_XCHG:
>>> - SRC = (u32) atomic_xchg(
>>> - (atomic_t *)(unsigned long) (DST + insn->off),
>>> - (u32) SRC);
>>> - break;
>>> - case BPF_CMPXCHG:
>>> - BPF_R0 = (u32) atomic_cmpxchg(
>>> - (atomic_t *)(unsigned long) (DST + insn->off),
>>> - (u32) BPF_R0, (u32) SRC);
>>> +#define ATOMIC(BOP, KOP) \
>>
>> ATOMIC a little bit generic. Maybe ATOMIC_FETCH_BOP?
>
> Well, it doesn't fetch in all cases, and "BOP" is intended to
> differentiate it from KOP, i.e. BOP = BPF operation, KOP = kernel operation.
>
> Could go with ATOMIC_ALU_OP?

ATOMIC_ALU_OP sounds good.

>
>>> + case BOP: \
>>> + if (BPF_SIZE(insn->code) == BPF_W) \
>>> + atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \
>>> + (DST + insn->off)); \
>>> + else \
>>> + atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \
>>> + (DST + insn->off)); \
>>> + break; \
>>> + case BOP | BPF_FETCH: \
>>> + if (BPF_SIZE(insn->code) == BPF_W) \
>>> + SRC = (u32) atomic_fetch_##KOP( \
>>> + (u32) SRC, \
>>> + (atomic_t *)(unsigned long) (DST + insn->off)); \
>>> + else \
>>> + SRC = (u64) atomic64_fetch_##KOP( \
>>> + (u64) SRC, \
>>> + (atomic64_t *)(s64) (DST + insn->off)); \
>>> break;
>>> - default:
>>> - goto default_label;
>>> - }
>>> - CONT;
>>> STX_ATOMIC_DW:
>>> + STX_ATOMIC_W:
>>> switch (IMM) {
>>> - case BPF_ADD:
>>> - /* lock xadd *(u64 *)(dst_reg + off16) += src_reg */
>>> - atomic64_add((u64) SRC, (atomic64_t *)(unsigned long)
>>> - (DST + insn->off));
>>> - break;
>>> - case BPF_ADD | BPF_FETCH:
>>> - SRC = (u64) atomic64_fetch_add(
>>> - (u64) SRC,
>>> - (atomic64_t *)(s64) (DST + insn->off));
>>> - break;
>>> + ATOMIC(BPF_ADD, add)
>>> +
>>> case BPF_XCHG:
>>> - SRC = (u64) atomic64_xchg(
>>> - (atomic64_t *)(u64) (DST + insn->off),
>>> - (u64) SRC);
>>> + if (BPF_SIZE(insn->code) == BPF_W)
>>> + SRC = (u32) atomic_xchg(
>>> + (atomic_t *)(unsigned long) (DST + insn->off),
>>> + (u32) SRC);
>>> + else
>>> + SRC = (u64) atomic64_xchg(
>>> + (atomic64_t *)(u64) (DST + insn->off),
>>> + (u64) SRC);
>>> break;
>>> case BPF_CMPXCHG:
>>> - BPF_R0 = (u64) atomic64_cmpxchg(
>>> - (atomic64_t *)(u64) (DST + insn->off),
>>> - (u64) BPF_R0, (u64) SRC);
>>> + if (BPF_SIZE(insn->code) == BPF_W)
>>> + BPF_R0 = (u32) atomic_cmpxchg(
>>> + (atomic_t *)(unsigned long) (DST + insn->off),
>>> + (u32) BPF_R0, (u32) SRC);
>>> + else
>>> + BPF_R0 = (u64) atomic64_cmpxchg(
>>> + (atomic64_t *)(u64) (DST + insn->off),
>>> + (u64) BPF_R0, (u64) SRC);
>>> break;
>>> +
>>> default:
>>> goto default_label;
>>> }
>>>

2020-12-04 15:25:54

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions



On 12/4/20 1:36 AM, Brendan Jackman wrote:
> On Thu, Dec 03, 2020 at 10:42:19PM -0800, Yonghong Song wrote:
>>
>>
>> On 12/3/20 8:02 AM, Brendan Jackman wrote:
>>> This adds instructions for
>>>
>>> atomic[64]_[fetch_]and
>>> atomic[64]_[fetch_]or
>>> atomic[64]_[fetch_]xor
>>>
>>> All these operations are isomorphic enough to implement with the same
>>> verifier, interpreter, and x86 JIT code, hence being a single commit.
>>>
>>> The main interesting thing here is that x86 doesn't directly support
>>> the fetch_ versions of these operations, so we need to generate a CMPXCHG
>>> loop in the JIT. This requires the use of two temporary registers,
>>> IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
>>>
>>> Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
>>> Signed-off-by: Brendan Jackman <[email protected]>
>>> ---
>>> arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
>>> include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
>>> kernel/bpf/core.c | 5 ++-
>>> kernel/bpf/disasm.c | 21 ++++++++++---
>>> kernel/bpf/verifier.c | 6 ++++
>>> tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
>>> 6 files changed, 196 insertions(+), 6 deletions(-)
>>>
> [...]
>>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>>> index 6186280715ed..698f82897b0d 100644
>>> --- a/include/linux/filter.h
>>> +++ b/include/linux/filter.h
>>> @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> [...]
>>> +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
>>> + ((struct bpf_insn) { \
>>> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
>>> + .dst_reg = DST, \
>>> + .src_reg = SRC, \
>>> + .off = OFF, \
>>> + .imm = BPF_XOR | BPF_FETCH })
>>> +
>>> /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>>
>> Looks like BPF_ATOMIC_XOR/OR/AND/... all similar to each other.
>> The same is for BPF_ATOMIC_FETCH_XOR/OR/AND/...
>>
>> I am wondering whether it makes sense to have just
>> BPF_ATOMIC_BOP(BOP, SIZE, DST, SRC, OFF) and
>> BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF),
>> so we can have a smaller number of macros?
>
> Hmm yeah I think that's probably a good idea, it would be consistent
> with the macros for non-atomic ALU ops.
>
> I don't think 'BOP' would be very clear though, 'ALU' might be more
> obvious.

BPF_ATOMIC_ALU and BPF_ATOMIC_FETCH_ALU are indeed better.

>

2020-12-04 15:33:23

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations



On 12/4/20 1:45 AM, Brendan Jackman wrote:
> On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
>> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> [...]
>>> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
>>> new file mode 100644
>>> index 000000000000..66f0ccf4f4ec
>>> --- /dev/null
>>> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
>>> @@ -0,0 +1,262 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +#include <test_progs.h>
>>> +
>>> +
>>> +#include "atomics_test.skel.h"
>>> +
>>> +static struct atomics_test *setup(void)
>>> +{
>>> + struct atomics_test *atomics_skel;
>>> + __u32 duration = 0, err;
>>> +
>>> + atomics_skel = atomics_test__open_and_load();
>>> + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
>>> + return NULL;
>>> +
>>> + if (atomics_skel->data->skip_tests) {
>>> + printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
>>> + __func__);
>>> + test__skip();
>>> + goto err;
>>> + }
>>> +
>>> + err = atomics_test__attach(atomics_skel);
>>> + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
>>> + goto err;
>>> +
>>> + return atomics_skel;
>>> +
>>> +err:
>>> + atomics_test__destroy(atomics_skel);
>>> + return NULL;
>>> +}
>>> +
>>> +static void test_add(void)
>>> +{
>>> + struct atomics_test *atomics_skel;
>>> + int err, prog_fd;
>>> + __u32 duration = 0, retval;
>>> +
>>> + atomics_skel = setup();
>>
>> When running the test, I observed a noticeable delay between skel load and
>> skel attach. The reason is the bpf program object file contains
>> multiple programs and the above setup() tries to do attachment
>> for ALL programs but actually below only "add" program is tested.
>> This will unnecessarily increase test_progs running time.
>>
>> The best is for setup() here only load and attach program "add".
>> The libbpf API bpf_program__set_autoload() can set a particular
>> program not autoload. You can call attach function explicitly
>> for one specific program. This should be able to reduce test
>> running time.
>
> Interesting, thanks a lot - I'll try this out next week. Maybe we can
> actually load all the progs once at the beginning (i.e. in

If you have subtests, people expect each subtest to be individually runnable.
This will complicate your logic.

> test_atomics_test) then attach/detach each prog individually as needed...
> Sorry, I haven't got much of a grip on libbpf yet.

One alternative is not to do subtests. There is nothing wrong with having
just one bpf program instead of many. This way, you load all and attach
once, then do all the test verification.

2020-12-04 19:03:58

by Andrii Nakryiko

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Fri, Dec 4, 2020 at 1:41 AM Brendan Jackman <[email protected]> wrote:
>
> On Thu, Dec 03, 2020 at 01:01:27PM -0800, Andrii Nakryiko wrote:
> > On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
> > >
> > > This is somewhat cargo-culted from the libbpf build. It will be used
> > > in a subsequent patch to query for Clang BPF atomics support.
> > >
> > > Change-Id: I9318a1702170eb752acced35acbb33f45126c44c
> >
> > Haven't seen this before. What's this Change-Id business?
>
> Argh, apologies. Looks like it's time for me to adopt a less error-prone
> workflow for sending patches.
>
> (This is noise from Gerrit, which we sometimes use for internal reviews)
>
> > > Signed-off-by: Brendan Jackman <[email protected]>
> > > ---
> > > tools/testing/selftests/bpf/.gitignore | 1 +
> > > tools/testing/selftests/bpf/Makefile | 38 ++++++++++++++++++++++++++
> > > 2 files changed, 39 insertions(+)
> >
> > All this just to detect the support for clang atomics?... Let's not
> > pull in the entire feature-detection framework unnecessarily,
> > selftests Makefile is complicated enough without that.
>
> Then the test build would break for people who haven't updated Clang.
> Is that acceptable?
>
> I'm aware of cases where you need to be on a pretty fresh Clang for
> tests to _pass_ so maybe it's fine.

I didn't mean to drop any detection of this new feature. I just didn't
want a new dependency on tools' feature probing framework. See
IS_LITTLE_ENDIAN and get_sys_includes, we already have various feature
detection-like stuff in there. So we can do this with a one-liner. I
just want to keep it simple. Thanks.

2020-12-04 19:52:35

by Andrii Nakryiko

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations

On Fri, Dec 4, 2020 at 7:29 AM Yonghong Song <[email protected]> wrote:
>
>
>
> On 12/4/20 1:45 AM, Brendan Jackman wrote:
> > On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
> >> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > [...]
> >>> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> >>> new file mode 100644
> >>> index 000000000000..66f0ccf4f4ec
> >>> --- /dev/null
> >>> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> >>> @@ -0,0 +1,262 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +
> >>> +#include <test_progs.h>
> >>> +
> >>> +
> >>> +#include "atomics_test.skel.h"
> >>> +
> >>> +static struct atomics_test *setup(void)
> >>> +{
> >>> + struct atomics_test *atomics_skel;
> >>> + __u32 duration = 0, err;
> >>> +
> >>> + atomics_skel = atomics_test__open_and_load();
> >>> + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> >>> + return NULL;
> >>> +
> >>> + if (atomics_skel->data->skip_tests) {
> >>> + printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
> >>> + __func__);
> >>> + test__skip();
> >>> + goto err;
> >>> + }
> >>> +
> >>> + err = atomics_test__attach(atomics_skel);
> >>> + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> >>> + goto err;
> >>> +
> >>> + return atomics_skel;
> >>> +
> >>> +err:
> >>> + atomics_test__destroy(atomics_skel);
> >>> + return NULL;
> >>> +}
> >>> +
> >>> +static void test_add(void)
> >>> +{
> >>> + struct atomics_test *atomics_skel;
> >>> + int err, prog_fd;
> >>> + __u32 duration = 0, retval;
> >>> +
> >>> + atomics_skel = setup();
> >>
> >> When running the test, I observed a noticeable delay between skel load and
> >> skel attach. The reason is the bpf program object file contains
> >> multiple programs and the above setup() tries to do attachment
> >> for ALL programs but actually below only "add" program is tested.
> >> This will unnecessarily increase test_progs running time.
> >>
> >> The best is for setup() here only load and attach program "add".
> >> The libbpf API bpf_program__set_autoload() can set a particular
> >> program not autoload. You can call attach function explicitly
> >> for one specific program. This should be able to reduce test
> >> running time.
> >
> > Interesting, thanks a lot - I'll try this out next week. Maybe we can
> > actually load all the progs once at the beginning (i.e. in
>
> If you have subtests, people expect each subtest to be individually runnable.
> This will complicate your logic.
>
> > test_atomics_test) then attach/detach each prog individually as needed...
> > Sorry, I haven't got much of a grip on libbpf yet.
>
> One alternative is not to do subtests. There is nothing wrong with having
> just one bpf program instead of many. This way, you load all and attach
> once, then do all the test verification.

I think subtests are good for debuggability, at least. But in this
case it's very easy to achieve everything you've discussed:

1. do open() right there in test_atomics_test() (btw, consider naming
the test just "atomics" or "atomic_insns" or something, no need for
test-test tautology)
2. check if needs skipping, skip entire test
3. if not skipping, load
4. then pass the same instance of the skeleton to each subtest
5. each subtest will
5a. bpf_prog__attach(skel->prog.my_specific_subtest_prog);
5b. trigger and do checks
5c. bpf_link__destroy(<link from 5a step>);
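The flow above might look roughly like this (pseudocode only: it depends on a libbpf-generated skeleton header and the test_progs helpers, so names like atomics__open() and skel->progs.add are illustrative, not a real compilable unit):

```c
/* Pseudocode sketch of steps 1-5; skeleton names are hypothetical. */
static void subtest_add(struct atomics *skel)
{
	/* 5a: attach only the program this subtest exercises */
	struct bpf_link *link = bpf_program__attach(skel->progs.add);

	if (!link)
		return;
	/* 5b: trigger the prog and check results (elided) */
	/* 5c: detach so the next subtest starts clean */
	bpf_link__destroy(link);
}

void test_atomics(void)
{
	struct atomics *skel = atomics__open();          /* step 1 */

	if (!skel)
		return;
	if (skel->data->skip_tests) {                    /* step 2 */
		test__skip();
		goto out;
	}
	if (atomics__load(skel))                         /* step 3 */
		goto out;
	if (test__start_subtest("add"))                  /* step 4 */
		subtest_add(skel);                       /* step 5 */
out:
	atomics__destroy(skel);
}
```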

2020-12-07 11:04:04

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Fri, Dec 04, 2020 at 11:00:24AM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 4, 2020 at 1:41 AM Brendan Jackman <[email protected]> wrote:
> >
> > On Thu, Dec 03, 2020 at 01:01:27PM -0800, Andrii Nakryiko wrote:
> > > On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
> > > >
> > > > This is somewhat cargo-culted from the libbpf build. It will be used
> > > > in a subsequent patch to query for Clang BPF atomics support.
> > > >
> > > > Change-Id: I9318a1702170eb752acced35acbb33f45126c44c
> > >
> > > Haven't seen this before. What's this Change-Id business?
> >
> > Argh, apologies. Looks like it's time for me to adopt a less error-prone
> > workflow for sending patches.
> >
> > (This is noise from Gerrit, which we sometimes use for internal reviews)
> >
> > > > Signed-off-by: Brendan Jackman <[email protected]>
> > > > ---
> > > > tools/testing/selftests/bpf/.gitignore | 1 +
> > > > tools/testing/selftests/bpf/Makefile | 38 ++++++++++++++++++++++++++
> > > > 2 files changed, 39 insertions(+)
> > >
> > > All this just to detect the support for clang atomics?... Let's not
> > > pull in the entire feature-detection framework unnecessarily,
> > > selftests Makefile is complicated enough without that.
> >
> > Then the test build would break for people who haven't updated Clang.
> > Is that acceptable?
> >
> > I'm aware of cases where you need to be on a pretty fresh Clang for
> > tests to _pass_ so maybe it's fine.
>
> I didn't mean to drop any detection of this new feature. I just didn't
> want a new dependency on tools' feature probing framework. See
> IS_LITTLE_ENDIAN and get_sys_includes, we already have various feature
> detection-like stuff in there. So we can do this with a one-liner. I
> just want to keep it simple. Thanks.

Ah right gotcha. Then yeah I think we can do this:

BPF_ATOMICS_SUPPORTED = $(shell \
echo "int x = 0; int foo(void) { return __sync_val_compare_and_swap(&x, 1, 2); }" \
| $(CLANG) -x cpp-output -S -target bpf -mcpu=v3 - -o /dev/null && echo 1 || echo 0)

2020-12-07 11:31:13

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions

On Fri, Dec 04, 2020 at 07:21:22AM -0800, Yonghong Song wrote:
>
>
> On 12/4/20 1:36 AM, Brendan Jackman wrote:
> > On Thu, Dec 03, 2020 at 10:42:19PM -0800, Yonghong Song wrote:
> > >
> > >
> > > On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > > > This adds instructions for
> > > >
> > > > atomic[64]_[fetch_]and
> > > > atomic[64]_[fetch_]or
> > > > atomic[64]_[fetch_]xor
> > > >
> > > > All these operations are isomorphic enough to implement with the same
> > > > verifier, interpreter, and x86 JIT code, hence being a single commit.
> > > >
> > > > The main interesting thing here is that x86 doesn't directly support
> > > > the fetch_ versions of these operations, so we need to generate a CMPXCHG
> > > > loop in the JIT. This requires the use of two temporary registers,
> > > > IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
> > > >
> > > > Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
> > > > Signed-off-by: Brendan Jackman <[email protected]>
> > > > ---
> > > > arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
> > > > include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> > > > kernel/bpf/core.c | 5 ++-
> > > > kernel/bpf/disasm.c | 21 ++++++++++---
> > > > kernel/bpf/verifier.c | 6 ++++
> > > > tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> > > > 6 files changed, 196 insertions(+), 6 deletions(-)
> > > >
> > [...]
> > > > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > > > index 6186280715ed..698f82897b0d 100644
> > > > --- a/include/linux/filter.h
> > > > +++ b/include/linux/filter.h
> > > > @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> > [...]
> > > > +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
> > > > + ((struct bpf_insn) { \
> > > > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > > > + .dst_reg = DST, \
> > > > + .src_reg = SRC, \
> > > > + .off = OFF, \
> > > > + .imm = BPF_XOR | BPF_FETCH })
> > > > +
> > > > /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
> > >
> > > Looks like BPF_ATOMIC_XOR/OR/AND/... all similar to each other.
> > > The same is for BPF_ATOMIC_FETCH_XOR/OR/AND/...
> > >
> > > I am wondering whether it makes sense to have just
> > > BPF_ATOMIC_BOP(BOP, SIZE, DST, SRC, OFF) and
> > > BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF),
> > > so we can have a smaller number of macros?
> >
> > Hmm yeah I think that's probably a good idea, it would be consistent
> > with the macros for non-atomic ALU ops.
> >
> > I don't think 'BOP' would be very clear though, 'ALU' might be more
> > obvious.
>
> BPF_ATOMIC_ALU and BPF_ATOMIC_FETCH_ALU are indeed better.

On second thoughts I think it feels right (i.e. it would be roughly
consistent with the level of abstraction of the rest of this macro API)
to go further and just have two macros BPF_ATOMIC64 and BPF_ATOMIC32:

/*
* Atomic ALU ops:
*
* BPF_ADD *(uint *) (dst_reg + off16) += src_reg
* BPF_AND *(uint *) (dst_reg + off16) &= src_reg
* BPF_OR *(uint *) (dst_reg + off16) |= src_reg
* BPF_XOR *(uint *) (dst_reg + off16) ^= src_reg
* BPF_ADD | BPF_FETCH src_reg = atomic_fetch_add(dst_reg + off16, src_reg);
* BPF_AND | BPF_FETCH src_reg = atomic_fetch_and(dst_reg + off16, src_reg);
* BPF_OR | BPF_FETCH src_reg = atomic_fetch_or(dst_reg + off16, src_reg);
* BPF_XOR | BPF_FETCH src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
* BPF_XCHG src_reg = atomic_xchg(dst_reg + off16, src_reg)
* BPF_CMPXCHG r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
*/

#define BPF_ATOMIC64(OP, DST, SRC, OFF) \
((struct bpf_insn) { \
.code = BPF_STX | BPF_DW | BPF_ATOMIC, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
.imm = OP })

#define BPF_ATOMIC32(OP, DST, SRC, OFF) \
((struct bpf_insn) { \
.code = BPF_STX | BPF_W | BPF_ATOMIC, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
.imm = OP })

The downside compared to what's currently in the patchset is that the
user can write e.g. BPF_ATOMIC64(BPF_SUB, BPF_REG_1, BPF_REG_2, 0) and
it will compile. On the other hand they'll get a pretty clear
"BPF_ATOMIC uses invalid atomic opcode 10" when they try to load the
prog, and the valid atomic ops are clearly listed in Documentation as
well as the comments here.

2020-12-07 15:51:46

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 13/14] bpf: Add tests for new BPF atomic operations

On Fri, Dec 04, 2020 at 11:49:22AM -0800, Andrii Nakryiko wrote:
> On Fri, Dec 4, 2020 at 7:29 AM Yonghong Song <[email protected]> wrote:
> >
> >
> >
> > On 12/4/20 1:45 AM, Brendan Jackman wrote:
> > > On Thu, Dec 03, 2020 at 11:06:31PM -0800, Yonghong Song wrote:
> > >> On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > > [...]
> > >>> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics_test.c b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > >>> new file mode 100644
> > >>> index 000000000000..66f0ccf4f4ec
> > >>> --- /dev/null
> > >>> +++ b/tools/testing/selftests/bpf/prog_tests/atomics_test.c
> > >>> @@ -0,0 +1,262 @@
> > >>> +// SPDX-License-Identifier: GPL-2.0
> > >>> +
> > >>> +#include <test_progs.h>
> > >>> +
> > >>> +
> > >>> +#include "atomics_test.skel.h"
> > >>> +
> > >>> +static struct atomics_test *setup(void)
> > >>> +{
> > >>> + struct atomics_test *atomics_skel;
> > >>> + __u32 duration = 0, err;
> > >>> +
> > >>> + atomics_skel = atomics_test__open_and_load();
> > >>> + if (CHECK(!atomics_skel, "atomics_skel_load", "atomics skeleton failed\n"))
> > >>> + return NULL;
> > >>> +
> > >>> + if (atomics_skel->data->skip_tests) {
> > >>> + printf("%s:SKIP:no ENABLE_ATOMICS_TEST (missing Clang BPF atomics support)",
> > >>> + __func__);
> > >>> + test__skip();
> > >>> + goto err;
> > >>> + }
> > >>> +
> > >>> + err = atomics_test__attach(atomics_skel);
> > >>> + if (CHECK(err, "atomics_attach", "atomics attach failed: %d\n", err))
> > >>> + goto err;
> > >>> +
> > >>> + return atomics_skel;
> > >>> +
> > >>> +err:
> > >>> + atomics_test__destroy(atomics_skel);
> > >>> + return NULL;
> > >>> +}
> > >>> +
> > >>> +static void test_add(void)
> > >>> +{
> > >>> + struct atomics_test *atomics_skel;
> > >>> + int err, prog_fd;
> > >>> + __u32 duration = 0, retval;
> > >>> +
> > >>> + atomics_skel = setup();
> > >>
> > >> When running the test, I observed a noticeable delay between skel load and
> > >> skel attach. The reason is the bpf program object file contains
> > >> multiple programs and the above setup() tries to do attachment
> > >> for ALL programs but actually below only "add" program is tested.
> > >> This will unnecessarily increase test_progs running time.
> > >>
> > >> The best is for setup() here only load and attach program "add".
> > >> The libbpf API bpf_program__set_autoload() can set a particular
> > >> program not autoload. You can call attach function explicitly
> > >> for one specific program. This should be able to reduce test
> > >> running time.
> > >
> > > Interesting, thanks a lot - I'll try this out next week. Maybe we can
> > > actually load all the progs once at the beginning (i.e. in
> >
> > If you have subtests, people expect each subtest to be individually runnable.
> > This will complicate your logic.
> >
> > > test_atomics_test) then attach/detach each prog individually as needed...
> > > Sorry, I haven't got much of a grip on libbpf yet.
> >
> > One alternative is not to do subtests. There is nothing wrong with having
> > just one bpf program instead of many. This way, you load all and attach
> > once, then do all the test verification.
>
> I think subtests are good for debuggability, at least. But in this
> case it's very easy to achieve everything you've discussed:
>
> 1. do open() right there in test_atomics_test() (btw, consider naming
> the test just "atomics" or "atomic_insns" or something, no need for
> test-test tautology)
> 2. check if needs skipping, skip entire test
> 3. if not skipping, load
> 4. then pass the same instance of the skeleton to each subtest
> 5. each subtest will
> 5a. bpf_prog__attach(skel->prog.my_specific_subtest_prog);
> 5b. trigger and do checks
> 5c. bpf_link__destroy(<link from 5a step>);

Thanks, this seems like the way forward to me.

2020-12-07 16:04:04

by Yonghong Song

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions



On 12/7/20 3:28 AM, Brendan Jackman wrote:
> On Fri, Dec 04, 2020 at 07:21:22AM -0800, Yonghong Song wrote:
>>
>>
>> On 12/4/20 1:36 AM, Brendan Jackman wrote:
>>> On Thu, Dec 03, 2020 at 10:42:19PM -0800, Yonghong Song wrote:
>>>>
>>>>
>>>> On 12/3/20 8:02 AM, Brendan Jackman wrote:
>>>>> This adds instructions for
>>>>>
>>>>> atomic[64]_[fetch_]and
>>>>> atomic[64]_[fetch_]or
>>>>> atomic[64]_[fetch_]xor
>>>>>
>>>>> All these operations are isomorphic enough to implement with the same
>>>>> verifier, interpreter, and x86 JIT code, hence being a single commit.
>>>>>
>>>>> The main interesting thing here is that x86 doesn't directly support
>>>>> the fetch_ versions of these operations, so we need to generate a CMPXCHG
>>>>> loop in the JIT. This requires the use of two temporary registers,
>>>>> IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
>>>>>
>>>>> Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
>>>>> Signed-off-by: Brendan Jackman <[email protected]>
>>>>> ---
>>>>> arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
>>>>> include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
>>>>> kernel/bpf/core.c | 5 ++-
>>>>> kernel/bpf/disasm.c | 21 ++++++++++---
>>>>> kernel/bpf/verifier.c | 6 ++++
>>>>> tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
>>>>> 6 files changed, 196 insertions(+), 6 deletions(-)
>>>>>
>>> [...]
>>>>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>>>>> index 6186280715ed..698f82897b0d 100644
>>>>> --- a/include/linux/filter.h
>>>>> +++ b/include/linux/filter.h
>>>>> @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
>>> [...]
>>>>> +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
>>>>> + ((struct bpf_insn) { \
>>>>> + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
>>>>> + .dst_reg = DST, \
>>>>> + .src_reg = SRC, \
>>>>> + .off = OFF, \
>>>>> + .imm = BPF_XOR | BPF_FETCH })
>>>>> +
>>>>> /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
>>>>
>>>> Looks like BPF_ATOMIC_XOR/OR/AND/... all similar to each other.
>>>> The same is for BPF_ATOMIC_FETCH_XOR/OR/AND/...
>>>>
>>>> I am wondering whether it makes sense to have just
>>>> BPF_ATOMIC_BOP(BOP, SIZE, DST, SRC, OFF) and
>>>> BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF),
>>>> so we can have a smaller number of macros?
>>>
>>> Hmm yeah I think that's probably a good idea, it would be consistent
>>> with the macros for non-atomic ALU ops.
>>>
>>> I don't think 'BOP' would be very clear though, 'ALU' might be more
>>> obvious.
>>
>> BPF_ATOMIC_ALU and BPF_ATOMIC_FETCH_ALU are indeed better.
>
> On second thoughts I think it feels right (i.e. it would be roughly
> consistent with the level of abstraction of the rest of this macro API)
> to go further and just have two macros BPF_ATOMIC64 and BPF_ATOMIC32:
>
> /*
> * Atomic ALU ops:
> *
> * BPF_ADD *(uint *) (dst_reg + off16) += src_reg
> * BPF_AND *(uint *) (dst_reg + off16) &= src_reg
> * BPF_OR *(uint *) (dst_reg + off16) |= src_reg
> * BPF_XOR *(uint *) (dst_reg + off16) ^= src_reg

"uint *" => "size_type *"?
and give an explanation that "size_type" is either "u32" or "u64"?

> * BPF_ADD | BPF_FETCH src_reg = atomic_fetch_add(dst_reg + off16, src_reg);
> * BPF_AND | BPF_FETCH src_reg = atomic_fetch_and(dst_reg + off16, src_reg);
> * BPF_OR | BPF_FETCH src_reg = atomic_fetch_or(dst_reg + off16, src_reg);
> * BPF_XOR | BPF_FETCH src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
> * BPF_XCHG src_reg = atomic_xchg(dst_reg + off16, src_reg)
> * BPF_CMPXCHG r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
> */
>
> #define BPF_ATOMIC64(OP, DST, SRC, OFF) \
> ((struct bpf_insn) { \
> .code = BPF_STX | BPF_DW | BPF_ATOMIC, \
> .dst_reg = DST, \
> .src_reg = SRC, \
> .off = OFF, \
> .imm = OP })
>
> #define BPF_ATOMIC32(OP, DST, SRC, OFF) \
> ((struct bpf_insn) { \
> .code = BPF_STX | BPF_W | BPF_ATOMIC, \
> .dst_reg = DST, \
> .src_reg = SRC, \
> .off = OFF, \
> .imm = OP })

You could have
BPF_ATOMIC(OP, SIZE, DST, SRC, OFF)
where SIZE is BPF_DW or BPF_W.

>
> The downside compared to what's currently in the patchset is that the
> user can write e.g. BPF_ATOMIC64(BPF_SUB, BPF_REG_1, BPF_REG_2, 0) and
> it will compile. On the other hand they'll get a pretty clear
> "BPF_ATOMIC uses invalid atomic opcode 10" when they try to load the
> prog, and the valid atomic ops are clearly listed in Documentation as
> well as the comments here.

This should be fine. As you mentioned, documentation has mentioned
what is supported and what is not...

2020-12-07 16:16:55

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 10/14] bpf: Add bitwise atomic instructions

On Mon, Dec 07, 2020 at 07:58:09AM -0800, Yonghong Song wrote:
>
>
> On 12/7/20 3:28 AM, Brendan Jackman wrote:
> > On Fri, Dec 04, 2020 at 07:21:22AM -0800, Yonghong Song wrote:
> > >
> > >
> > > On 12/4/20 1:36 AM, Brendan Jackman wrote:
> > > > On Thu, Dec 03, 2020 at 10:42:19PM -0800, Yonghong Song wrote:
> > > > >
> > > > >
> > > > > On 12/3/20 8:02 AM, Brendan Jackman wrote:
> > > > > > This adds instructions for
> > > > > >
> > > > > > atomic[64]_[fetch_]and
> > > > > > atomic[64]_[fetch_]or
> > > > > > atomic[64]_[fetch_]xor
> > > > > >
> > > > > > All these operations are isomorphic enough to implement with the same
> > > > > > verifier, interpreter, and x86 JIT code, hence being a single commit.
> > > > > >
> > > > > > The main interesting thing here is that x86 doesn't directly support
> > > > > > the fetch_ versions of these operations, so we need to generate a CMPXCHG
> > > > > > loop in the JIT. This requires the use of two temporary registers,
> > > > > > IIUC it's safe to use BPF_REG_AX and x86's AUX_REG for this purpose.
> > > > > >
> > > > > > Change-Id: I340b10cecebea8cb8a52e3606010cde547a10ed4
> > > > > > Signed-off-by: Brendan Jackman <[email protected]>
> > > > > > ---
> > > > > > arch/x86/net/bpf_jit_comp.c | 50 +++++++++++++++++++++++++++++-
> > > > > > include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> > > > > > kernel/bpf/core.c | 5 ++-
> > > > > > kernel/bpf/disasm.c | 21 ++++++++++---
> > > > > > kernel/bpf/verifier.c | 6 ++++
> > > > > > tools/include/linux/filter.h | 60 ++++++++++++++++++++++++++++++++++++
> > > > > > 6 files changed, 196 insertions(+), 6 deletions(-)
> > > > > >
> > > > [...]
> > > > > > diff --git a/include/linux/filter.h b/include/linux/filter.h
> > > > > > index 6186280715ed..698f82897b0d 100644
> > > > > > --- a/include/linux/filter.h
> > > > > > +++ b/include/linux/filter.h
> > > > > > @@ -280,6 +280,66 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
> > > > [...]
> > > > > > +#define BPF_ATOMIC_FETCH_XOR(SIZE, DST, SRC, OFF) \
> > > > > > + ((struct bpf_insn) { \
> > > > > > + .code = BPF_STX | BPF_SIZE(SIZE) | BPF_ATOMIC, \
> > > > > > + .dst_reg = DST, \
> > > > > > + .src_reg = SRC, \
> > > > > > + .off = OFF, \
> > > > > > + .imm = BPF_XOR | BPF_FETCH })
> > > > > > +
> > > > > > /* Atomic exchange, src_reg = atomic_xchg((dst_reg + off), src_reg) */
> > > > >
> > > > > Looks like BPF_ATOMIC_XOR/OR/AND/... all similar to each other.
> > > > > The same is for BPF_ATOMIC_FETCH_XOR/OR/AND/...
> > > > >
> > > > > I am wondering whether it makes sense to have just
> > > > > BPF_ATOMIC_BOP(BOP, SIZE, DST, SRC, OFF) and
> > > > > BPF_ATOMIC_FETCH_BOP(BOP, SIZE, DST, SRC, OFF),
> > > > > so we can have a smaller number of macros?
> > > >
> > > > Hmm yeah I think that's probably a good idea, it would be consistent
> > > > with the macros for non-atomic ALU ops.
> > > >
> > > > I don't think 'BOP' would be very clear though, 'ALU' might be more
> > > > obvious.
> > >
> > > BPF_ATOMIC_ALU and BPF_ATOMIC_FETCH_ALU are indeed better.
> >
> > On second thoughts I think it feels right (i.e. it would be roughly
> > consistent with the level of abstraction of the rest of this macro API)
> > to go further and just have two macros BPF_ATOMIC64 and BPF_ATOMIC32:
> >
> > /*
> > * Atomic ALU ops:
> > *
> > * BPF_ADD *(uint *) (dst_reg + off16) += src_reg
> > * BPF_AND *(uint *) (dst_reg + off16) &= src_reg
> > * BPF_OR *(uint *) (dst_reg + off16) |= src_reg
> > * BPF_XOR *(uint *) (dst_reg + off16) ^= src_reg
>
> "uint *" => "size_type *"?
> and give an explanation that "size_type" is either "u32" or "u64"?

"uint *" is already used in the file so I'll follow the precedent there.

>
> > * BPF_ADD | BPF_FETCH src_reg = atomic_fetch_add(dst_reg + off16, src_reg);
> > * BPF_AND | BPF_FETCH src_reg = atomic_fetch_and(dst_reg + off16, src_reg);
> > * BPF_OR | BPF_FETCH src_reg = atomic_fetch_or(dst_reg + off16, src_reg);
> > * BPF_XOR | BPF_FETCH src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
> > * BPF_XCHG src_reg = atomic_xchg(dst_reg + off16, src_reg)
> > * BPF_CMPXCHG r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
> > */
> >
> > #define BPF_ATOMIC64(OP, DST, SRC, OFF) \
> > ((struct bpf_insn) { \
> > .code = BPF_STX | BPF_DW | BPF_ATOMIC, \
> > .dst_reg = DST, \
> > .src_reg = SRC, \
> > .off = OFF, \
> > .imm = OP })
> >
> > #define BPF_ATOMIC32(OP, DST, SRC, OFF) \
> > ((struct bpf_insn) { \
> > .code = BPF_STX | BPF_W | BPF_ATOMIC, \
> > .dst_reg = DST, \
> > .src_reg = SRC, \
> > .off = OFF, \
> > .imm = OP })
>
> You could have
> BPF_ATOMIC(OP, SIZE, DST, SRC, OFF)
> where SIZE is BPF_DW or BPF_W.

Ah sorry, I didn't see this mail and have just posted v4 with the 2
separate macros. Let's see if anyone else has an opinion on
this point.

> >
> > The downside compared to what's currently in the patchset is that the
> > user can write e.g. BPF_ATOMIC64(BPF_SUB, BPF_REG_1, BPF_REG_2, 0) and
> > it will compile. On the other hand they'll get a pretty clear
> > "BPF_ATOMIC uses invalid atomic opcode 10" when they try to load the
> > prog, and the valid atomic ops are clearly listed in Documentation as
> > well as the comments here.
>
> This should be fine. As you mentioned, documentation has mentioned
> what is supported and what is not...

2020-12-08 02:25:02

by Andrii Nakryiko

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Mon, Dec 7, 2020 at 3:00 AM Brendan Jackman <[email protected]> wrote:
>
> On Fri, Dec 04, 2020 at 11:00:24AM -0800, Andrii Nakryiko wrote:
> > On Fri, Dec 4, 2020 at 1:41 AM Brendan Jackman <[email protected]> wrote:
> > >
> > > On Thu, Dec 03, 2020 at 01:01:27PM -0800, Andrii Nakryiko wrote:
> > > > On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
> > > > >
> > > > > This is somewhat cargo-culted from the libbpf build. It will be used
> > > > > in a subsequent patch to query for Clang BPF atomics support.
> > > > >
> > > > > Change-Id: I9318a1702170eb752acced35acbb33f45126c44c
> > > >
> > > > Haven't seen this before. What's this Change-Id business?
> > >
> > > Argh, apologies. Looks like it's time for me to adopt a less error-prone
> > > workflow for sending patches.
> > >
> > > (This is noise from Gerrit, which we sometimes use for internal reviews)
> > >
> > > > > Signed-off-by: Brendan Jackman <[email protected]>
> > > > > ---
> > > > > tools/testing/selftests/bpf/.gitignore | 1 +
> > > > > tools/testing/selftests/bpf/Makefile | 38 ++++++++++++++++++++++++++
> > > > > 2 files changed, 39 insertions(+)
> > > >
> > > > All this just to detect the support for clang atomics?... Let's not
> > > > pull in the entire feature-detection framework unnecessarily,
> > > > selftests Makefile is complicated enough without that.
> > >
> > > Then the test build would break for people who haven't updated Clang.
> > > Is that acceptable?
> > >
> > > I'm aware of cases where you need to be on a pretty fresh Clang for
> > > tests to _pass_ so maybe it's fine.
> >
> > I didn't mean to drop any detection of this new feature. I just didn't
> > want a new dependency on tools' feature probing framework. See
> > IS_LITTLE_ENDIAN and get_sys_includes, we already have various feature
> > detection-like stuff in there. So we can do this with a one-liner. I
> > just want to keep it simple. Thanks.
>
> Ah right gotcha. Then yeah I think we can do this:
>
> BPF_ATOMICS_SUPPORTED = $(shell \
> echo "int x = 0; int foo(void) { return __sync_val_compare_and_swap(&x, 1, 2); }" \
> | $(CLANG) -x cpp-output -S -target bpf -mcpu=v3 - -o /dev/null && echo 1 || echo 0)

Looks like it would work, yes. Curious what "-x cpp-output" does?
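[For reference, the probe quoted above can be exercised outside the
Makefile as a plain shell script. This is a sketch, not part of the
patch: it assumes clang (or $CLANG) is on PATH, and it prints 0 both
when clang is missing entirely and when it lacks BPF atomics support,
since the pipeline then fails either way.]

```shell
# Probe: try to compile a __sync_val_compare_and_swap() call for the
# BPF target. Prints 1 if the compiler accepts it, 0 otherwise
# (including when clang is not installed at all).
CLANG="${CLANG:-clang}"
supported=$(echo 'int x = 0; int foo(void) { return __sync_val_compare_and_swap(&x, 1, 2); }' \
    | "$CLANG" -x cpp-output -S -target bpf -mcpu=v3 - -o /dev/null 2>/dev/null \
    && echo 1 || echo 0)
echo "$supported"
```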

2020-12-08 17:11:10

by Brendan Jackman

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Mon, Dec 07, 2020 at 06:19:12PM -0800, Andrii Nakryiko wrote:
> On Mon, Dec 7, 2020 at 3:00 AM Brendan Jackman <[email protected]> wrote:
> >
> > On Fri, Dec 04, 2020 at 11:00:24AM -0800, Andrii Nakryiko wrote:
> > > On Fri, Dec 4, 2020 at 1:41 AM Brendan Jackman <[email protected]> wrote:
> > > >
> > > > On Thu, Dec 03, 2020 at 01:01:27PM -0800, Andrii Nakryiko wrote:
> > > > > On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
> > > > > >
[...]
> >
> > Ah right gotcha. Then yeah I think we can do this:
> >
> > BPF_ATOMICS_SUPPORTED = $(shell \
> > echo "int x = 0; int foo(void) { return __sync_val_compare_and_swap(&x, 1, 2); }" \
> > | $(CLANG) -x cpp-output -S -target bpf -mcpu=v3 - -o /dev/null && echo 1 || echo 0)
>
> Looks like it would work, yes.
>
> Curious what "-x cpp-output" does?

That's just to tell Clang what language to expect, since it can't infer
it from a file extension:

$ echo foo | clang -S -
clang-10: error: -E or -x required when input is from standard input

Yonghong pointed out that we can actually just use `-x c`.
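[A sketch of the simplified probe with `-x c`, again assuming clang (or
$CLANG) is on PATH and printing 0 when it is absent or too old. For
this snippet the result matches `-x cpp-output`, since the test program
contains nothing for the preprocessor to do.]

```shell
# Same probe, but telling clang the stdin input is ordinary C rather
# than already-preprocessed source.
CLANG="${CLANG:-clang}"
result=$(echo 'int x = 0; int foo(void) { return __sync_val_compare_and_swap(&x, 1, 2); }' \
    | "$CLANG" -x c -S -target bpf -mcpu=v3 - -o /dev/null 2>/dev/null \
    && echo 1 || echo 0)
echo "$result"
```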

2020-12-08 18:35:15

by Andrii Nakryiko

[permalink] [raw]
Subject: Re: [PATCH bpf-next v3 12/14] bpf: Pull tools/build/feature biz into selftests Makefile

On Tue, Dec 8, 2020 at 9:04 AM Brendan Jackman <[email protected]> wrote:
>
> On Mon, Dec 07, 2020 at 06:19:12PM -0800, Andrii Nakryiko wrote:
> > On Mon, Dec 7, 2020 at 3:00 AM Brendan Jackman <[email protected]> wrote:
> > >
> > > On Fri, Dec 04, 2020 at 11:00:24AM -0800, Andrii Nakryiko wrote:
> > > > On Fri, Dec 4, 2020 at 1:41 AM Brendan Jackman <[email protected]> wrote:
> > > > >
> > > > > On Thu, Dec 03, 2020 at 01:01:27PM -0800, Andrii Nakryiko wrote:
> > > > > > On Thu, Dec 3, 2020 at 8:07 AM Brendan Jackman <[email protected]> wrote:
> > > > > > >
> [...]
> > >
> > > Ah right gotcha. Then yeah I think we can do this:
> > >
> > > BPF_ATOMICS_SUPPORTED = $(shell \
> > > echo "int x = 0; int foo(void) { return __sync_val_compare_and_swap(&x, 1, 2); }" \
> > > | $(CLANG) -x cpp-output -S -target bpf -mcpu=v3 - -o /dev/null && echo 1 || echo 0)
> >
> > Looks like it would work, yes.
> >
> > Curious what "-x cpp-output" does?
>
> That's just to tell Clang what language to expect, since it can't infer
> it from a file extension:
>
> $ echo foo | clang -S -
> clang-10: error: -E or -x required when input is from standard input
>
> Yonghong pointed out that we can actually just use `-x c`.

yeah, that's what confused me, as we don't really write C++ for BPF
code :) All good.