2024-04-26 12:14:11

by Puranjay Mohan

Subject: [PATCH bpf-next v3 0/2] bpf, arm64: Support per-cpu instruction

Changes in v2 -> v3:
v2: https://lore.kernel.org/all/[email protected]/
- Fixed the xlated dump of percpu mov to "r0 = &(void __percpu *)(r0)"
- Made ARM64 and x86-64 use the same code for inlining. The only difference
that remains is the per-cpu address of the cpu_number.

Changes in v1 -> v2:
v1: https://lore.kernel.org/all/[email protected]/
- Add a patch to inline bpf_get_smp_processor_id()
- Fix an issue in MRS instruction encoding as pointed out by Will
- Remove CONFIG_SMP check because arm64 kernel always compiles with CONFIG_SMP

This series adds support for internal-only per-CPU instructions and inlines
the bpf_get_smp_processor_id() helper call for the ARM64 BPF JIT.

Here is an example of calls to bpf_get_smp_processor_id() and
percpu_array_map_lookup_elem() before and after this series.

BPF
=====

int cpu = bpf_get_smp_processor_id();

  BEFORE:
  (85) call bpf_get_smp_processor_id#229032

  AFTER:
  (18) r0 = 0xffff800082072008
  (bf) r0 = &(void __percpu *)(r0)
  (61) r0 = *(u32 *)(r0 +0)

p = bpf_map_lookup_elem(map, &zero);

  BEFORE:
  (18) r1 = map[id:78]
  (18) r2 = map[id:82][0]+65536
  (85) call percpu_array_map_lookup_elem#313512

  AFTER:
  (18) r1 = map[id:153]
  (18) r2 = map[id:157][0]+65536
  (07) r1 += 496
  (61) r0 = *(u32 *)(r2 +0)
  (35) if r0 >= 0x1 goto pc+5
  (67) r0 <<= 3
  (0f) r0 += r1
  (79) r0 = *(u64 *)(r0 +0)
  (bf) r0 = &(void __percpu *)(r0)
  (05) goto pc+1
  (b7) r0 = 0


ARM64 JIT
===========

int cpu = bpf_get_smp_processor_id();

  BEFORE:
  mov x10, #0xfffffffffffff4d0
  movk x10, #0x802b, lsl #16
  movk x10, #0x8000, lsl #32
  blr x10
  add x7, x0, #0x0

  AFTER:
  mov x7, #0xffff8000ffffffff
  movk x7, #0x8207, lsl #16
  movk x7, #0x2008
  mrs x10, tpidr_el1
  add x7, x7, x10
  ldr w7, [x7]

p = bpf_map_lookup_elem(map, &zero);

  BEFORE:
  mov x0, #0xffff0003ffffffff
  movk x0, #0xce5c, lsl #16
  movk x0, #0xca00
  mov x1, #0xffff8000ffffffff
  movk x1, #0x8bdb, lsl #16
  movk x1, #0x6000
  mov x10, #0xffffffffffff3ed0
  movk x10, #0x802d, lsl #16
  movk x10, #0x8000, lsl #32
  blr x10
  add x7, x0, #0x0

  AFTER:
  mov x0, #0xffff0003ffffffff
  movk x0, #0xe0f3, lsl #16
  movk x0, #0x7c00
  mov x1, #0xffff8000ffffffff
  movk x1, #0xb0c7, lsl #16
  movk x1, #0xe000
  add x0, x0, #0x1f0
  ldr w7, [x1]
  cmp x7, #0x1
  b.cs 0x0000000000000090
  lsl x7, x7, #3
  add x7, x7, x0
  ldr x7, [x7]
  mrs x10, tpidr_el1
  add x7, x7, x10
  b 0x0000000000000094
  mov x7, #0x0

Performance improvement found using the benchmark in [1]:

Before:
glob-arr-inc : 23.817 ± 0.019M/s
arr-inc      : 23.253 ± 0.019M/s
hash-inc     : 12.258 ± 0.010M/s

After:
glob-arr-inc : 24.631 ± 0.027M/s
arr-inc      : 23.742 ± 0.023M/s
hash-inc     : 12.625 ± 0.004M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Puranjay Mohan (2):
arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs
bpf, arm64: inline bpf_get_smp_processor_id() helper

arch/arm64/include/asm/insn.h | 7 +++++++
arch/arm64/lib/insn.c | 11 +++++++++++
arch/arm64/net/bpf_jit.h | 6 ++++++
arch/arm64/net/bpf_jit_comp.c | 14 ++++++++++++++
kernel/bpf/verifier.c | 24 +++++++++++++++++-------
5 files changed, 55 insertions(+), 7 deletions(-)

--
2.40.1



2024-04-26 12:14:25

by Puranjay Mohan

Subject: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs

From: Puranjay Mohan <[email protected]>

Support an instruction for resolving absolute addresses of per-CPU
data from their per-CPU offsets. This instruction is internal-only and
users are not allowed to use it directly. For now, it will only be used
for internal inlining optimizations between the BPF verifier and BPF
JITs.

Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
access using tpidr_el1"), the per-cpu offset for the CPU is stored in
the tpidr_el1/2 register of that CPU.
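
For context, this is essentially how the kernel's own arm64 per-CPU
accessors obtain the offset. A simplified sketch of the idea only (the real
code in arch/arm64/include/asm/percpu.h also handles the VHE/tpidr_el2 case
via alternatives):

  static inline unsigned long my_cpu_offset(void)
  {
          unsigned long off;

          /* this CPU's per-CPU offset lives in tpidr_el1 (tpidr_el2 on VHE) */
          asm("mrs %0, tpidr_el1" : "=r" (off));
          return off;
  }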

To support this BPF instruction in the ARM64 JIT, the following ARM64
instructions are emitted:

mov dst, src         // Move src to dst, if src != dst
mrs tmp, tpidr_el1/2 // Read the per-CPU offset of the current CPU into tmp.
add dst, dst, tmp    // Add the per-CPU offset to dst.

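In plain C, the net effect of the new instruction is roughly the following
(a conceptual sketch only; the JIT emits the mrs/add sequence above and does
not call any helper):

  /* BPF_MOV64_PERCPU_REG(dst, src): convert the per-CPU address in src
   * into the absolute address of that object on the current CPU.
   */
  dst = (u64)this_cpu_ptr((void __percpu *)src);
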
To measure the performance improvement provided by this change, the
benchmark in [1] was used:

Before:
glob-arr-inc : 23.597 ± 0.012M/s
arr-inc : 23.173 ± 0.019M/s
hash-inc : 12.186 ± 0.028M/s

After:
glob-arr-inc : 23.819 ± 0.034M/s
arr-inc : 23.285 ± 0.017M/s
hash-inc : 12.419 ± 0.011M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Puranjay Mohan <[email protected]>
---
arch/arm64/include/asm/insn.h | 7 +++++++
arch/arm64/lib/insn.c | 11 +++++++++++
arch/arm64/net/bpf_jit.h | 6 ++++++
arch/arm64/net/bpf_jit_comp.c | 14 ++++++++++++++
4 files changed, 38 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index db1aeacd4cd9..8de0e39b29f3 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -135,6 +135,11 @@ enum aarch64_insn_special_register {
AARCH64_INSN_SPCLREG_SP_EL2 = 0xF210
};

+enum aarch64_insn_system_register {
+ AARCH64_INSN_SYSREG_TPIDR_EL1 = 0x4684,
+ AARCH64_INSN_SYSREG_TPIDR_EL2 = 0x6682,
+};
+
enum aarch64_insn_variant {
AARCH64_INSN_VARIANT_32BIT,
AARCH64_INSN_VARIANT_64BIT
@@ -686,6 +691,8 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
}
#endif
u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
+u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+ enum aarch64_insn_system_register sysreg);

s32 aarch64_get_branch_offset(u32 insn);
u32 aarch64_set_branch_offset(u32 insn, s32 offset);
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index a635ab83fee3..b008a9b46a7f 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -1515,3 +1515,14 @@ u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)

return insn;
}
+
+u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,
+ enum aarch64_insn_system_register sysreg)
+{
+ u32 insn = aarch64_insn_get_mrs_value();
+
+ insn &= ~GENMASK(19, 0);
+ insn |= sysreg << 5;
+ return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT,
+ insn, result);
+}
diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index 23b1b34db088..b627ef7188c7 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -297,4 +297,10 @@
#define A64_ADR(Rd, offset) \
aarch64_insn_gen_adr(0, offset, Rd, AARCH64_INSN_ADR_TYPE_ADR)

+/* MRS */
+#define A64_MRS_TPIDR_EL1(Rt) \
+ aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_TPIDR_EL1)
+#define A64_MRS_TPIDR_EL2(Rt) \
+ aarch64_insn_gen_mrs(Rt, AARCH64_INSN_SYSREG_TPIDR_EL2)
+
#endif /* _BPF_JIT_H */
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 76b91f36c729..ed8f9716d9d5 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -877,6 +877,15 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
emit(A64_ORR(1, tmp, dst, tmp), ctx);
emit(A64_MOV(1, dst, tmp), ctx);
break;
+ } else if (insn_is_mov_percpu_addr(insn)) {
+ if (dst != src)
+ emit(A64_MOV(1, dst, src), ctx);
+ if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+ emit(A64_MRS_TPIDR_EL2(tmp), ctx);
+ else
+ emit(A64_MRS_TPIDR_EL1(tmp), ctx);
+ emit(A64_ADD(1, dst, dst, tmp), ctx);
+ break;
}
switch (insn->off) {
case 0:
@@ -2527,6 +2536,11 @@ bool bpf_jit_supports_arena(void)
return true;
}

+bool bpf_jit_supports_percpu_insn(void)
+{
+ return true;
+}
+
void bpf_jit_free(struct bpf_prog *prog)
{
if (prog->jited) {
--
2.40.1


2024-04-26 12:14:40

by Puranjay Mohan

Subject: [PATCH bpf-next v3 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper

As the ARM64 JIT now implements the BPF_MOV64_PERCPU_REG instruction, inline
bpf_get_smp_processor_id().

ARM64 uses the per-CPU variable cpu_number to store the CPU id.

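In kernel C terms, the inlined sequence computes the CPU id the same way
arm64's raw_smp_processor_id() does. A rough sketch of the idea, based on
the cpu_number declaration in arch/arm64/include/asm/smp.h:

  /* &cpu_number is a per-CPU address; add this CPU's offset (tpidr_el1/2)
   * and do a 32-bit load to get the CPU id.
   */
  int cpu = *raw_cpu_ptr(&cpu_number);
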
Here is how the BPF and ARM64 JITed assembly changes after this commit:

BPF
=====

int cpu = bpf_get_smp_processor_id();

  BEFORE:
  (85) call bpf_get_smp_processor_id#229032

  AFTER:
  (18) r0 = 0xffff800082072008
  (bf) r0 = &(void __percpu *)(r0)
  (61) r0 = *(u32 *)(r0 +0)

ARM64 JIT
===========

int cpu = bpf_get_smp_processor_id();

  BEFORE:
  mov x10, #0xfffffffffffff4d0
  movk x10, #0x802b, lsl #16
  movk x10, #0x8000, lsl #32
  blr x10
  add x7, x0, #0x0

  AFTER:
  mov x7, #0xffff8000ffffffff
  movk x7, #0x8207, lsl #16
  movk x7, #0x2008
  mrs x10, tpidr_el1
  add x7, x7, x10
  ldr w7, [x7]

Performance improvement using the benchmark in [1]:

Before:
glob-arr-inc : 23.817 ± 0.019M/s
arr-inc      : 23.253 ± 0.019M/s
hash-inc     : 12.258 ± 0.010M/s

After:
glob-arr-inc : 24.631 ± 0.027M/s
arr-inc      : 23.742 ± 0.023M/s
hash-inc     : 12.625 ± 0.004M/s

[1] https://github.com/anakryiko/linux/commit/8dec900975ef

Signed-off-by: Puranjay Mohan <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
---
kernel/bpf/verifier.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4e474ef44e9c..6ff4e63b2ef2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20273,20 +20273,31 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
goto next_insn;
}

-#ifdef CONFIG_X86_64
/* Implement bpf_get_smp_processor_id() inline. */
if (insn->imm == BPF_FUNC_get_smp_processor_id &&
prog->jit_requested && bpf_jit_supports_percpu_insn()) {
/* BPF_FUNC_get_smp_processor_id inlining is an
- * optimization, so if pcpu_hot.cpu_number is ever
+ * optimization, so if cpu_number_addr is ever
* changed in some incompatible and hard to support
* way, it's fine to back out this inlining logic
*/
- insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
- insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
- insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
- cnt = 3;
+ u64 cpu_number_addr;

+#if defined(CONFIG_X86_64)
+ cpu_number_addr = (u64)&pcpu_hot.cpu_number;
+#elif defined(CONFIG_ARM64)
+ cpu_number_addr = (u64)&cpu_number;
+#else
+ goto next_insn;
+#endif
+ struct bpf_insn ld_cpu_number_addr[2] = {
+ BPF_LD_IMM64(BPF_REG_0, cpu_number_addr)
+ };
+ insn_buf[0] = ld_cpu_number_addr[0];
+ insn_buf[1] = ld_cpu_number_addr[1];
+ insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+ insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
+ cnt = 4;
new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
if (!new_prog)
return -ENOMEM;
@@ -20296,7 +20307,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn = new_prog->insnsi + i + delta;
goto next_insn;
}
-#endif
/* Implement bpf_get_func_arg inline. */
if (prog_type == BPF_PROG_TYPE_TRACING &&
insn->imm == BPF_FUNC_get_func_arg) {
--
2.40.1


2024-04-26 16:19:52

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs

On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <[email protected]> wrote:
>
> From: Puranjay Mohan <[email protected]>
>
> Support an instruction for resolving absolute addresses of per-CPU
> data from their per-CPU offsets. This instruction is internal-only and
> users are not allowed to use them directly. They will only be used for
> internal inlining optimizations for now between BPF verifier and BPF
> JITs.
>
> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
> the tpidr_el1/2 register of that CPU.
>
> To support this BPF instruction in the ARM64 JIT, the following ARM64
> instructions are emitted:
>
> mov dst, src // Move src to dst, if src != dst
> mrs tmp, tpidr_el1/2 // Move per-cpu offset of the current cpu in tmp.
> add dst, dst, tmp // Add the per cpu offset to the dst.
>
> To measure the performance improvement provided by this change, the
> benchmark in [1] was used:
>
> Before:
> glob-arr-inc : 23.597 ± 0.012M/s
> arr-inc : 23.173 ± 0.019M/s
> hash-inc : 12.186 ± 0.028M/s
>
> After:
> glob-arr-inc : 23.819 ± 0.034M/s
> arr-inc : 23.285 ± 0.017M/s

I still expected a better improvement (global-arr-inc's results
improved more than arr-inc, which is completely different from
x86-64), but it's still a good thing to support this for arm64, of
course.

ack for generic parts I can understand:

Acked-by: Andrii Nakryiko <[email protected]>

> hash-inc : 12.419 ± 0.011M/s
>
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>
> Signed-off-by: Puranjay Mohan <[email protected]>
> ---
> arch/arm64/include/asm/insn.h | 7 +++++++
> arch/arm64/lib/insn.c | 11 +++++++++++
> arch/arm64/net/bpf_jit.h | 6 ++++++
> arch/arm64/net/bpf_jit_comp.c | 14 ++++++++++++++
> 4 files changed, 38 insertions(+)
>

[...]

2024-04-26 16:27:28

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next v3 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper

On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <[email protected]> wrote:
>
> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
> bpf_get_smp_processor_id().
>
> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
>
> Here is how the BPF and ARM64 JITed assembly changes after this commit:
>
> BPF
> =====
> BEFORE AFTER
> -------- -------
>
> int cpu = bpf_get_smp_processor_id(); int cpu = bpf_get_smp_processor_id();
> (85) call bpf_get_smp_processor_id#229032 (18) r0 = 0xffff800082072008
> (bf) r0 = &(void __percpu *)(r0)
> (61) r0 = *(u32 *)(r0 +0)
>
> ARM64 JIT
> ===========
>
> BEFORE AFTER
> -------- -------
>
> int cpu = bpf_get_smp_processor_id(); int cpu = bpf_get_smp_processor_id();
> mov x10, #0xfffffffffffff4d0 mov x7, #0xffff8000ffffffff
> movk x10, #0x802b, lsl #16 movk x7, #0x8207, lsl #16
> movk x10, #0x8000, lsl #32 movk x7, #0x2008
> blr x10 mrs x10, tpidr_el1
> add x7, x0, #0x0 add x7, x7, x10
> ldr w7, [x7]
>
> Performance improvement using benchmark[1]
>
> BEFORE AFTER
> -------- -------
>
> glob-arr-inc : 23.817 ± 0.019M/s glob-arr-inc : 24.631 ± 0.027M/s
> arr-inc : 23.253 ± 0.019M/s arr-inc : 23.742 ± 0.023M/s
> hash-inc : 12.258 ± 0.010M/s hash-inc : 12.625 ± 0.004M/s
>
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>
> Signed-off-by: Puranjay Mohan <[email protected]>
> Acked-by: Andrii Nakryiko <[email protected]>
> ---
> kernel/bpf/verifier.c | 24 +++++++++++++++++-------
> 1 file changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 4e474ef44e9c..6ff4e63b2ef2 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -20273,20 +20273,31 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> goto next_insn;
> }
>
> -#ifdef CONFIG_X86_64
> /* Implement bpf_get_smp_processor_id() inline. */
> if (insn->imm == BPF_FUNC_get_smp_processor_id &&
> prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> /* BPF_FUNC_get_smp_processor_id inlining is an
> - * optimization, so if pcpu_hot.cpu_number is ever
> + * optimization, so if cpu_number_addr is ever
> * changed in some incompatible and hard to support
> * way, it's fine to back out this inlining logic
> */
> - insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
> - insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> - insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> - cnt = 3;
> + u64 cpu_number_addr;
>
> +#if defined(CONFIG_X86_64)
> + cpu_number_addr = (u64)&pcpu_hot.cpu_number;
> +#elif defined(CONFIG_ARM64)
> + cpu_number_addr = (u64)&cpu_number;
> +#else
> + goto next_insn;
> +#endif
> + struct bpf_insn ld_cpu_number_addr[2] = {
> + BPF_LD_IMM64(BPF_REG_0, cpu_number_addr)
> + };

here we are violating the C89 requirement to have a single block of
variable declarations at the top by mixing declarations and statements.
I'm surprised this is not triggering any build errors on !arm64 &&
!x86_64.

I think we can declare this BPF_LD_IMM64 instruction with zero "addr".
And then update

ld_cpu_number_addr[0].imm = (u32)cpu_number_addr;
ld_cpu_number_addr[1].imm = (u32)(cpu_number_addr >> 32);

WDYT?

nit: I'd rename ld_cpu_number_addr to ld_insn or something short like that

> + insn_buf[0] = ld_cpu_number_addr[0];
> + insn_buf[1] = ld_cpu_number_addr[1];
> + insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> + insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> + cnt = 4;

nit: we normally have an empty line here to separate setting up
replacement instructions from actual patching

> new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> if (!new_prog)
> return -ENOMEM;
> @@ -20296,7 +20307,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> insn = new_prog->insnsi + i + delta;
> goto next_insn;
> }
> -#endif
> /* Implement bpf_get_func_arg inline. */
> if (prog_type == BPF_PROG_TYPE_TRACING &&
> insn->imm == BPF_FUNC_get_func_arg) {
> --
> 2.40.1
>

2024-04-26 16:59:59

by Puranjay Mohan

Subject: Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs

Andrii Nakryiko <[email protected]> writes:

> On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <[email protected]> wrote:
>>
>> From: Puranjay Mohan <[email protected]>
>>
>> Support an instruction for resolving absolute addresses of per-CPU
>> data from their per-CPU offsets. This instruction is internal-only and
>> users are not allowed to use them directly. They will only be used for
>> internal inlining optimizations for now between BPF verifier and BPF
>> JITs.
>>
>> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
>> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
>> the tpidr_el1/2 register of that CPU.
>>
>> To support this BPF instruction in the ARM64 JIT, the following ARM64
>> instructions are emitted:
>>
>> mov dst, src // Move src to dst, if src != dst
>> mrs tmp, tpidr_el1/2 // Move per-cpu offset of the current cpu in tmp.
>> add dst, dst, tmp // Add the per cpu offset to the dst.
>>
>> To measure the performance improvement provided by this change, the
>> benchmark in [1] was used:
>>
>> Before:
>> glob-arr-inc : 23.597 ± 0.012M/s
>> arr-inc : 23.173 ± 0.019M/s
>> hash-inc : 12.186 ± 0.028M/s
>>
>> After:
>> glob-arr-inc : 23.819 ± 0.034M/s
>> arr-inc : 23.285 ± 0.017M/s
>
> I still expected a better improvement (global-arr-inc's results
> improved more than arr-inc, which is completely different from
> x86-64), but it's still a good thing to support this for arm64, of
> course.
>
> ack for generic parts I can understand:
>
> Acked-by: Andrii Nakryiko <[email protected]>
>

I will have to do more research to find out why we don't see a bigger
improvement.

But this is what is happening here:

This was the complete picture before inlining:

int cpu = bpf_get_smp_processor_id();
mov x10, #0xffffffffffffd4a8
movk x10, #0x802c, lsl #16
movk x10, #0x8000, lsl #32
blr x10 ---------------------------------------> nop
nop
adrp x0, 0xffff800082128000
mrs x1, tpidr_el1
add x0, x0, #0x8
ldrsw x0, [x0, x1]
<----------------------------------------ret
add x7, x0, #0x0


Now we have:

int cpu = bpf_get_smp_processor_id();
mov x7, #0xffff8000ffffffff
movk x7, #0x8212, lsl #16
movk x7, #0x8008
mrs x10, tpidr_el1
add x7, x7, x10
ldr w7, [x7]


So, we have removed multiple instructions, including a branch and a
return. I was expecting to see more improvement. This benchmark was taken
in a KVM-based virtual machine; maybe if I run it on bare metal I would
see more improvement?

Thanks,
Puranjay

2024-04-26 17:07:19

by Puranjay Mohan

Subject: Re: [PATCH bpf-next v3 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper

Andrii Nakryiko <[email protected]> writes:

> On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <[email protected]> wrote:
>>
>> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
>> bpf_get_smp_processor_id().
>>
>> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
>>
>> Here is how the BPF and ARM64 JITed assembly changes after this commit:
>>
>> BPF
>> =====
>> BEFORE AFTER
>> -------- -------
>>
>> int cpu = bpf_get_smp_processor_id(); int cpu = bpf_get_smp_processor_id();
>> (85) call bpf_get_smp_processor_id#229032 (18) r0 = 0xffff800082072008
>> (bf) r0 = &(void __percpu *)(r0)
>> (61) r0 = *(u32 *)(r0 +0)
>>
>> ARM64 JIT
>> ===========
>>
>> BEFORE AFTER
>> -------- -------
>>
>> int cpu = bpf_get_smp_processor_id(); int cpu = bpf_get_smp_processor_id();
>> mov x10, #0xfffffffffffff4d0 mov x7, #0xffff8000ffffffff
>> movk x10, #0x802b, lsl #16 movk x7, #0x8207, lsl #16
>> movk x10, #0x8000, lsl #32 movk x7, #0x2008
>> blr x10 mrs x10, tpidr_el1
>> add x7, x0, #0x0 add x7, x7, x10
>> ldr w7, [x7]
>>
>> Performance improvement using benchmark[1]
>>
>> BEFORE AFTER
>> -------- -------
>>
>> glob-arr-inc : 23.817 ± 0.019M/s glob-arr-inc : 24.631 ± 0.027M/s
>> arr-inc : 23.253 ± 0.019M/s arr-inc : 23.742 ± 0.023M/s
>> hash-inc : 12.258 ± 0.010M/s hash-inc : 12.625 ± 0.004M/s
>>
>> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>>
>> Signed-off-by: Puranjay Mohan <[email protected]>
>> Acked-by: Andrii Nakryiko <[email protected]>
>> ---
>> kernel/bpf/verifier.c | 24 +++++++++++++++++-------
>> 1 file changed, 17 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 4e474ef44e9c..6ff4e63b2ef2 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -20273,20 +20273,31 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>> goto next_insn;
>> }
>>
>> -#ifdef CONFIG_X86_64
>> /* Implement bpf_get_smp_processor_id() inline. */
>> if (insn->imm == BPF_FUNC_get_smp_processor_id &&
>> prog->jit_requested && bpf_jit_supports_percpu_insn()) {
>> /* BPF_FUNC_get_smp_processor_id inlining is an
>> - * optimization, so if pcpu_hot.cpu_number is ever
>> + * optimization, so if cpu_number_addr is ever
>> * changed in some incompatible and hard to support
>> * way, it's fine to back out this inlining logic
>> */
>> - insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
>> - insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
>> - insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
>> - cnt = 3;
>> + u64 cpu_number_addr;
>>
>> +#if defined(CONFIG_X86_64)
>> + cpu_number_addr = (u64)&pcpu_hot.cpu_number;
>> +#elif defined(CONFIG_ARM64)
>> + cpu_number_addr = (u64)&cpu_number;
>> +#else
>> + goto next_insn;
>> +#endif
>> + struct bpf_insn ld_cpu_number_addr[2] = {
>> + BPF_LD_IMM64(BPF_REG_0, cpu_number_addr)
>> + };
>
> here we are violating C89 requirement to have a single block of
> variable declarations by mixing variables and statements. I'm
> surprised this is not triggering any build errors on !arm64 &&
> !x86_64.
>
> I think we can declare this BPF_LD_IMM64 instruction with zero "addr".
> And then update
>
> ld_cpu_number_addr[0].imm = (u32)cpu_number_addr;
> ld_cpu_number_addr[1].imm = (u32)(cpu_number_addr >> 32);
>
> WDYT?
>
> nit: I'd rename ld_cpu_number_addr to ld_insn or something short like that

I agree with you,
What do you think about the following diff:

--- 8< ---

-#ifdef CONFIG_X86_64
/* Implement bpf_get_smp_processor_id() inline. */
if (insn->imm == BPF_FUNC_get_smp_processor_id &&
prog->jit_requested && bpf_jit_supports_percpu_insn()) {
/* BPF_FUNC_get_smp_processor_id inlining is an
- * optimization, so if pcpu_hot.cpu_number is ever
+ * optimization, so if cpu_number_addr is ever
* changed in some incompatible and hard to support
* way, it's fine to back out this inlining logic
*/
- insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
- insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
- insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
- cnt = 3;
+ u64 cpu_number_addr;
+ struct bpf_insn ld_insn[2] = {
+ BPF_LD_IMM64(BPF_REG_0, 0)
+ };
+
+#if defined(CONFIG_X86_64)
+ cpu_number_addr = (u64)&pcpu_hot.cpu_number;
+#elif defined(CONFIG_ARM64)
+ cpu_number_addr = (u64)&cpu_number;
+#else
+ goto next_insn;
+#endif
+ ld_insn[0].imm = (u32)cpu_number_addr;
+ ld_insn[1].imm = (u32)(cpu_number_addr >> 32);
+ insn_buf[0] = ld_insn[0];
+ insn_buf[1] = ld_insn[1];
+ insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+ insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
+ cnt = 4;

new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
if (!new_prog)
@@ -20296,7 +20310,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
insn = new_prog->insnsi + i + delta;
goto next_insn;
}
-#endif
/* Implement bpf_get_func_arg inline. */

--- >8---

Thanks,
Puranjay

2024-04-26 17:34:24

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next v3 2/2] bpf, arm64: inline bpf_get_smp_processor_id() helper

On Fri, Apr 26, 2024 at 10:06 AM Puranjay Mohan <[email protected]> wrote:
>
> Andrii Nakryiko <[email protected]> writes:
>
> > On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <puranjay@kernelorg> wrote:
> >>
> >> As ARM64 JIT now implements BPF_MOV64_PERCPU_REG instruction, inline
> >> bpf_get_smp_processor_id().
> >>
> >> ARM64 uses the per-cpu variable cpu_number to store the cpu id.
> >>
> >> Here is how the BPF and ARM64 JITed assembly changes after this commit:
> >>
> >> BPF
> >> =====
> >> BEFORE AFTER
> >> -------- -------
> >>
> >> int cpu = bpf_get_smp_processor_id(); int cpu = bpf_get_smp_processor_id();
> >> (85) call bpf_get_smp_processor_id#229032 (18) r0 = 0xffff800082072008
> >> (bf) r0 = &(void __percpu *)(r0)
> >> (61) r0 = *(u32 *)(r0 +0)
> >>
> >> ARM64 JIT
> >> ===========
> >>
> >> BEFORE AFTER
> >> -------- -------
> >>
> >> int cpu = bpf_get_smp_processor_id(); int cpu = bpf_get_smp_processor_id();
> >> mov x10, #0xfffffffffffff4d0 mov x7, #0xffff8000ffffffff
> >> movk x10, #0x802b, lsl #16 movk x7, #0x8207, lsl #16
> >> movk x10, #0x8000, lsl #32 movk x7, #0x2008
> >> blr x10 mrs x10, tpidr_el1
> >> add x7, x0, #0x0 add x7, x7, x10
> >> ldr w7, [x7]
> >>
> >> Performance improvement using benchmark[1]
> >>
> >> BEFORE AFTER
> >> -------- -------
> >>
> >> glob-arr-inc : 23.817 ± 0.019M/s glob-arr-inc : 24.631 ± 0.027M/s
> >> arr-inc : 23.253 ± 0.019M/s arr-inc : 23.742 ± 0.023M/s
> >> hash-inc : 12.258 ± 0.010M/s hash-inc : 12.625 ± 0.004M/s
> >>
> >> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
> >>
> >> Signed-off-by: Puranjay Mohan <[email protected]>
> >> Acked-by: Andrii Nakryiko <[email protected]>
> >> ---
> >> kernel/bpf/verifier.c | 24 +++++++++++++++++-------
> >> 1 file changed, 17 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >> index 4e474ef44e9c..6ff4e63b2ef2 100644
> >> --- a/kernel/bpf/verifier.c
> >> +++ b/kernel/bpf/verifier.c
> >> @@ -20273,20 +20273,31 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> >> goto next_insn;
> >> }
> >>
> >> -#ifdef CONFIG_X86_64
> >> /* Implement bpf_get_smp_processor_id() inline. */
> >> if (insn->imm == BPF_FUNC_get_smp_processor_id &&
> >> prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> >> /* BPF_FUNC_get_smp_processor_id inlining is an
> >> - * optimization, so if pcpu_hot.cpu_number is ever
> >> + * optimization, so if cpu_number_addr is ever
> >> * changed in some incompatible and hard to support
> >> * way, it's fine to back out this inlining logic
> >> */
> >> - insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
> >> - insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> >> - insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> >> - cnt = 3;
> >> + u64 cpu_number_addr;
> >>
> >> +#if defined(CONFIG_X86_64)
> >> + cpu_number_addr = (u64)&pcpu_hot.cpu_number;
> >> +#elif defined(CONFIG_ARM64)
> >> + cpu_number_addr = (u64)&cpu_number;
> >> +#else
> >> + goto next_insn;
> >> +#endif
> >> + struct bpf_insn ld_cpu_number_addr[2] = {
> >> + BPF_LD_IMM64(BPF_REG_0, cpu_number_addr)
> >> + };
> >
> > here we are violating C89 requirement to have a single block of
> > variable declarations by mixing variables and statements. I'm
> > surprised this is not triggering any build errors on !arm64 &&
> > !x86_64.
> >
> > I think we can declare this BPF_LD_IMM64 instruction with zero "addr".
> > And then update
> >
> > ld_cpu_number_addr[0].imm = (u32)cpu_number_addr;
> > ld_cpu_number_addr[1].imm = (u32)(cpu_number_addr >> 32);
> >
> > WDYT?
> >
> > nit: I'd rename ld_cpu_number_addr to ld_insn or something short like that
>
> I agree with you,
> What do you think about the following diff:

yep, that's what I had in mind, ack

>
> --- 8< ---
>
> -#ifdef CONFIG_X86_64
> /* Implement bpf_get_smp_processor_id() inline. */
> if (insn->imm == BPF_FUNC_get_smp_processor_id &&
> prog->jit_requested && bpf_jit_supports_percpu_insn()) {
> /* BPF_FUNC_get_smp_processor_id inlining is an
> - * optimization, so if pcpu_hot.cpu_number is ever
> + * optimization, so if cpu_number_addr is ever
> * changed in some incompatible and hard to support
> * way, it's fine to back out this inlining logic
> */
> - insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
> - insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> - insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> - cnt = 3;
> + u64 cpu_number_addr;
> + struct bpf_insn ld_insn[2] = {
> + BPF_LD_IMM64(BPF_REG_0, 0)
> + };
> +
> +#if defined(CONFIG_X86_64)
> + cpu_number_addr = (u64)&pcpu_hot.cpu_number;
> +#elif defined(CONFIG_ARM64)
> + cpu_number_addr = (u64)&cpu_number;
> +#else
> + goto next_insn;
> +#endif
> + ld_insn[0].imm = (u32)cpu_number_addr;
> + ld_insn[1].imm = (u32)(cpu_number_addr >> 32);
> + insn_buf[0] = ld_insn[0];
> + insn_buf[1] = ld_insn[1];
> + insn_buf[2] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
> + insn_buf[3] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
> + cnt = 4;
>
> new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> if (!new_prog)
> @@ -20296,7 +20310,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> insn = new_prog->insnsi + i + delta;
> goto next_insn;
> }
> -#endif
> /* Implement bpf_get_func_arg inline. */
>
> --- >8---
>
> Thanks,
> Puranjay

2024-04-26 17:36:04

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs

On Fri, Apr 26, 2024 at 9:55 AM Puranjay Mohan <[email protected]> wrote:
>
> Andrii Nakryiko <[email protected]> writes:
>
> > On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <puranjay@kernelorg> wrote:
> >>
> >> From: Puranjay Mohan <[email protected]>
> >>
> >> Support an instruction for resolving absolute addresses of per-CPU
> >> data from their per-CPU offsets. This instruction is internal-only and
> >> users are not allowed to use them directly. They will only be used for
> >> internal inlining optimizations for now between BPF verifier and BPF
> >> JITs.
> >>
> >> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
> >> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
> >> the tpidr_el1/2 register of that CPU.
> >>
> >> To support this BPF instruction in the ARM64 JIT, the following ARM64
> >> instructions are emitted:
> >>
> >> mov dst, src // Move src to dst, if src != dst
> >> mrs tmp, tpidr_el1/2 // Move per-cpu offset of the current cpu in tmp.
> >> add dst, dst, tmp // Add the per cpu offset to the dst.
> >>
> >> To measure the performance improvement provided by this change, the
> >> benchmark in [1] was used:
> >>
> >> Before:
> >> glob-arr-inc : 23.597 ± 0.012M/s
> >> arr-inc : 23.173 ± 0.019M/s
> >> hash-inc : 12.186 ± 0.028M/s
> >>
> >> After:
> >> glob-arr-inc : 23.819 ± 0.034M/s
> >> arr-inc : 23.285 ± 0.017M/s
> >
> > I still expected a better improvement (global-arr-inc's results
> > improved more than arr-inc, which is completely different from
> > x86-64), but it's still a good thing to support this for arm64, of
> > course.
> >
> > ack for generic parts I can understand:
> >
> > Acked-by: Andrii Nakryiko <[email protected]>
> >
>
> I will have to do more research to find why we don't see very high
> improvement.
>
> But this is what is happening here:
>
> This was the complete picture before inlining:
>
> int cpu = bpf_get_smp_processor_id();
> mov x10, #0xffffffffffffd4a8
> movk x10, #0x802c, lsl #16
> movk x10, #0x8000, lsl #32
> blr x10 ---------------------------------------> nop
> nop
> adrp x0, 0xffff800082128000
> mrs x1, tpidr_el1
> add x0, x0, #0x8
> ldrsw x0, [x0, x1]
> <----------------------------------------ret
> add x7, x0, #0x0
>
>
> Now we have:
>
> int cpu = bpf_get_smp_processor_id();
> mov x7, #0xffff8000ffffffff
> movk x7, #0x8212, lsl #16
> movk x7, #0x8008
> mrs x10, tpidr_el1
> add x7, x7, x10
> ldr w7, [x7]
>
>
> So, we have removed multiple instructions including a branch and a
> return. I was expecting to see more improvement. This benchmark is taken
> from a KVM based virtual machine, maybe if I do it on bare-metal I would
> see more improvement ?

I see, yeah, I think it might change significantly. I remember back
from times when I was benchmarking BPF ringbuf, I was getting
very-very different results from inside QEMU vs bare metal. And I
don't mean just in absolute numbers. QEMU/KVM seems to change a lot of
things when it comes to contentions, atomic instructions, etc, etc.
Anyways, for benchmarking, always try to do bare metal.

>
> Thanks,
> Puranjay

2024-04-30 18:30:42

by Puranjay Mohan

Subject: Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV instruction to resolve per-CPU addrs

Andrii Nakryiko <[email protected]> writes:

> On Fri, Apr 26, 2024 at 9:55 AM Puranjay Mohan <[email protected]> wrote:
>>
>> Andrii Nakryiko <[email protected]> writes:
>>
>> > On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <[email protected]> wrote:
>> >>
>> >> From: Puranjay Mohan <[email protected]>
>> >>
>> >> Support an instruction for resolving absolute addresses of per-CPU
>> >> data from their per-CPU offsets. This instruction is internal-only and
>> >> users are not allowed to use them directly. They will only be used for
>> >> internal inlining optimizations for now between BPF verifier and BPF
>> >> JITs.
>> >>
>> >> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
>> >> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
>> >> the tpidr_el1/2 register of that CPU.
>> >>
>> >> To support this BPF instruction in the ARM64 JIT, the following ARM64
>> >> instructions are emitted:
>> >>
>> >> mov dst, src // Move src to dst, if src != dst
>> >> mrs tmp, tpidr_el1/2 // Move per-cpu offset of the current cpu in tmp.
>> >> add dst, dst, tmp // Add the per cpu offset to the dst.
>> >>
>> >> To measure the performance improvement provided by this change, the
>> >> benchmark in [1] was used:
>> >>
>> >> Before:
>> >> glob-arr-inc : 23.597 ± 0.012M/s
>> >> arr-inc : 23.173 ± 0.019M/s
>> >> hash-inc : 12.186 ± 0.028M/s
>> >>
>> >> After:
>> >> glob-arr-inc : 23.819 ± 0.034M/s
>> >> arr-inc : 23.285 ± 0.017M/s
>> >
>> > I still expected a better improvement (global-arr-inc's results
>> > improved more than arr-inc, which is completely different from
>> > x86-64), but it's still a good thing to support this for arm64, of
>> > course.
>> >
>> > ack for generic parts I can understand:
>> >
>> > Acked-by: Andrii Nakryiko <[email protected]>
>> >
>>
>> I will have to do more research to find why we don't see very high
>> improvement.
>>
>> But this is what is happening here:
>>
>> This was the complete picture before inlining:
>>
>> int cpu = bpf_get_smp_processor_id();
>> mov x10, #0xffffffffffffd4a8
>> movk x10, #0x802c, lsl #16
>> movk x10, #0x8000, lsl #32
>> blr x10 ---------------------------------------> nop
>> nop
>> adrp x0, 0xffff800082128000
>> mrs x1, tpidr_el1
>> add x0, x0, #0x8
>> ldrsw x0, [x0, x1]
>> <----------------------------------------ret
>> add x7, x0, #0x0
>>
>>
>> Now we have:
>>
>> int cpu = bpf_get_smp_processor_id();
>> mov x7, #0xffff8000ffffffff
>> movk x7, #0x8212, lsl #16
>> movk x7, #0x8008
>> mrs x10, tpidr_el1
>> add x7, x7, x10
>> ldr w7, [x7]
>>
>>
>> So, we have removed multiple instructions including a branch and a
>> return. I was expecting to see more improvement. This benchmark is taken
>> from a KVM based virtual machine, maybe if I do it on bare-metal I would
>> see more improvement ?
>
> I see, yeah, I think it might change significantly. I remember back
> from times when I was benchmarking BPF ringbuf, I was getting
> very-very different results from inside QEMU vs bare metal. And I
> don't mean just in absolute numbers. QEMU/KVM seems to change a lot of
> things when it comes to contentions, atomic instructions, etc, etc.
> Anyways, for benchmarking, always try to do bare metal.
>

I found the solution to this. I am seeing much better performance when
implementing this inlining in the JIT through another method, similar to
what I did for riscv; see [1].

[1] https://lore.kernel.org/all/[email protected]/

Will do the same for ARM64 in V5 of this series.
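
For the archive, here is a rough sketch of what the JIT-side inlining could
look like in the arm64 call handling, reusing the macros from this series.
The hook placement and register choices are illustrative only; the final v5
code may well differ:

  /* call target resolved to bpf_get_smp_processor_id(): emit the
   * per-CPU read directly instead of a blr to the helper
   */
  emit_a64_mov_i64(r0, (u64)&cpu_number, ctx);  /* per-CPU base address */
  if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
          emit(A64_MRS_TPIDR_EL2(tmp), ctx);
  else
          emit(A64_MRS_TPIDR_EL1(tmp), ctx);
  emit(A64_ADD(1, r0, r0, tmp), ctx);           /* absolute address */
  emit(A64_LDR32(r0, r0, A64_ZR), ctx);         /* 32-bit load of the CPU id */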

Thanks,
Puranjay