2015-07-16 16:46:44

by Nicolas Schichan

Subject: [PATCH 0/6] BPF JIT fixes and features for ARM

Hello,

This series fixes issues with the ARM BPF JIT and adds support for more
instructions to the ARM BPF JIT.

The first three patches fix bugs in the ARM JIT and should
probably find their way to a stable kernel.

The last three patches add support to the ARM JIT for more BPF
instructions, namely skb netdevice type retrieval, skb payload offset
retrieval, and skb packet type retrieval.

With the first three patches, all 60 test_bpf tests in the Linux 4.1
release now pass (54 out of 60 passed before).

The last three patches allow 35 tests to use the JIT instead of 29
before.

Like previous ARM JIT patches this should go via the net tree.

Regards,

Nicolas Schichan (6):
ARM: net: fix condition for load_order > 0 when translating load
instructions.
ARM: net: handle negative offsets in BPF JIT.
ARM: net: fix vlan access instructions in ARM JIT.
ARM: net: add support for BPF_ANC | SKF_AD_PKTTYPE in ARM JIT.
ARM: net: add support for BPF_ANC | SKF_AD_PAY_OFFSET in ARM JIT.
ARM: net: add support for BPF_ANC | SKF_AD_HATYPE in ARM JIT.

arch/arm/net/bpf_jit_32.c | 98 +++++++++++++++++++++++++++++++++++++++--------
arch/arm/net/bpf_jit_32.h | 3 ++
2 files changed, 86 insertions(+), 15 deletions(-)

--
1.9.1


2015-07-16 16:48:49

by Nicolas Schichan

Subject: [PATCH 1/6] ARM: net: fix condition for load_order > 0 when translating load instructions.

To check whether the load should take the fast path or not, the code
would check that (r_skb_hlen - load_order) is greater than the offset
of the access, using an "Unsigned higher or same" condition. For
halfword accesses with an skb length of 1 at offset 0, that test
wrongly succeeds: we end up comparing 0xffffffff (-1) with 0, so the
fast path is taken and the filter allows the load to succeed when it
should not. A similar issue exists for word loads at offset 0 with an
skb length of less than 4.

Fix that by using a "Signed greater than or equal" condition in the
fast path code for load orders greater than 0.
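
For instance, with r_skb_hl = 1 and a halfword load (load_order = 1)
at offset 0, the check amounts to the following (a sketch of the
arithmetic only, not actual JIT output):

	u32 scratch = 1 - (1 << 1);	/* 0xffffffff */

	/*
	 * ARM_COND_HS (unsigned >=): 0xffffffff >= 0 holds, so the
	 * fast path is wrongly taken and the out-of-bounds load
	 * succeeds.
	 *
	 * ARM_COND_GE (signed >=): -1 >= 0 does not hold, so the
	 * access correctly falls back to the slow path helper.
	 */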

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 4550d24..21f5ace 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -547,7 +547,7 @@ load_common:
emit(ARM_SUB_I(r_scratch, r_skb_hl,
1 << load_order), ctx);
emit(ARM_CMP_R(r_scratch, r_off), ctx);
- condt = ARM_COND_HS;
+ condt = ARM_COND_GE;
} else {
emit(ARM_CMP_R(r_skb_hl, r_off), ctx);
condt = ARM_COND_HI;
--
1.9.1

2015-07-16 16:46:47

by Nicolas Schichan

Subject: [PATCH 2/6] ARM: net: handle negative offsets in BPF JIT.

Previously, the JIT would reject negative offsets known during code
generation and mishandle negative offsets provided at runtime.

Fix that by calling bpf_internal_load_pointer_neg_helper()
appropriately in the jit_get_skb_{b,h,w} slow path helpers and by forcing
the execution flow to the slow path helpers when the offset is
negative.
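
For reference, a classic BPF program that exercises this path loads
relative to one of the negative offset bases from <linux/filter.h>.
A minimal (hypothetical) example reading the IPv4 protocol byte via
SKF_NET_OFF:

	struct sock_filter prog[] = {
		/* A = byte at network header offset + 9 (IPv4 protocol) */
		BPF_STMT(BPF_LD | BPF_B | BPF_ABS, SKF_NET_OFF + 9),
		/* return A */
		BPF_STMT(BPF_RET | BPF_A, 0),
	};

Before this patch such a filter was either left to the interpreter
(negative offset known at code generation time) or mishandled (negative
offset computed at runtime); with it, the JIT branches to the slow path
helpers, which now handle negative offsets.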

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 47 ++++++++++++++++++++++++++++++++++++++---------
1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 21f5ace..d9b2524 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -74,32 +74,52 @@ struct jit_ctx {

int bpf_jit_enable __read_mostly;

-static u64 jit_get_skb_b(struct sk_buff *skb, unsigned offset)
+static inline int call_neg_helper(struct sk_buff *skb, int offset, void *ret,
+ unsigned int size)
+{
+ void *ptr = bpf_internal_load_pointer_neg_helper(skb, offset, size);
+
+ if (!ptr)
+ return -EFAULT;
+ memcpy(ret, ptr, size);
+ return 0;
+}
+
+static u64 jit_get_skb_b(struct sk_buff *skb, int offset)
{
u8 ret;
int err;

- err = skb_copy_bits(skb, offset, &ret, 1);
+ if (offset < 0)
+ err = call_neg_helper(skb, offset, &ret, 1);
+ else
+ err = skb_copy_bits(skb, offset, &ret, 1);

return (u64)err << 32 | ret;
}

-static u64 jit_get_skb_h(struct sk_buff *skb, unsigned offset)
+static u64 jit_get_skb_h(struct sk_buff *skb, int offset)
{
u16 ret;
int err;

- err = skb_copy_bits(skb, offset, &ret, 2);
+ if (offset < 0)
+ err = call_neg_helper(skb, offset, &ret, 2);
+ else
+ err = skb_copy_bits(skb, offset, &ret, 2);

return (u64)err << 32 | ntohs(ret);
}

-static u64 jit_get_skb_w(struct sk_buff *skb, unsigned offset)
+static u64 jit_get_skb_w(struct sk_buff *skb, int offset)
{
u32 ret;
int err;

- err = skb_copy_bits(skb, offset, &ret, 4);
+ if (offset < 0)
+ err = call_neg_helper(skb, offset, &ret, 4);
+ else
+ err = skb_copy_bits(skb, offset, &ret, 4);

return (u64)err << 32 | ntohl(ret);
}
@@ -536,9 +556,6 @@ static int build_body(struct jit_ctx *ctx)
case BPF_LD | BPF_B | BPF_ABS:
load_order = 0;
load:
- /* the interpreter will deal with the negative K */
- if ((int)k < 0)
- return -ENOTSUPP;
emit_mov_i(r_off, k, ctx);
load_common:
ctx->seen |= SEEN_DATA | SEEN_CALL;
@@ -553,6 +570,18 @@ load_common:
condt = ARM_COND_HI;
}

+ /*
+ * test for negative offset, only if we are
+ * currently scheduled to take the fast
+ * path. this will update the flags so that
+ * the slowpath instructions are ignored if the
+ * offset is negative.
+ *
+ * for load_order == 0 the HI condition will
+ * make loads at offset 0 take the slow path too.
+ */
+ _emit(condt, ARM_CMP_I(r_off, 0), ctx);
+
_emit(condt, ARM_ADD_R(r_scratch, r_off, r_skb_data),
ctx);

--
1.9.1

2015-07-16 16:47:56

by Nicolas Schichan

Subject: [PATCH 3/6] ARM: net: fix vlan access instructions in ARM JIT.

This makes BPF_ANC | SKF_AD_VLAN_TAG and BPF_ANC | SKF_AD_VLAN_TAG_PRESENT
have the same behaviour as the in-kernel VM and makes the test_bpf LD_VLAN_TAG
and LD_VLAN_TAG_PRESENT tests pass.
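
In C terms the corrected semantics are roughly (a sketch mirroring the
in-kernel VM; VLAN_TAG_PRESENT is bit 12 of vlan_tci):

	u16 tci = skb->vlan_tci;

	/* SKF_AD_VLAN_TAG: A = tci with the present bit cleared */
	A = tci & ~VLAN_TAG_PRESENT;

	/* SKF_AD_VLAN_TAG_PRESENT: A = 0 or 1 */
	A = (tci >> 12) & 1;

The previous code masked the tag with VLAN_VID_MASK (dropping the PCP
bits) and returned the raw present bit (0 or 0x1000) instead of 0 or 1.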

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index d9b2524..c011e22 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -889,9 +889,11 @@ b_epilogue:
off = offsetof(struct sk_buff, vlan_tci);
emit(ARM_LDRH_I(r_A, r_skb, off), ctx);
if (code == (BPF_ANC | SKF_AD_VLAN_TAG))
- OP_IMM3(ARM_AND, r_A, r_A, VLAN_VID_MASK, ctx);
- else
- OP_IMM3(ARM_AND, r_A, r_A, VLAN_TAG_PRESENT, ctx);
+ OP_IMM3(ARM_AND, r_A, r_A, ~VLAN_TAG_PRESENT, ctx);
+ else {
+ OP_IMM3(ARM_LSR, r_A, r_A, 12, ctx);
+ OP_IMM3(ARM_AND, r_A, r_A, 0x1, ctx);
+ }
break;
case BPF_ANC | SKF_AD_QUEUE:
ctx->seen |= SEEN_SKB;
--
1.9.1

2015-07-16 16:46:50

by Nicolas Schichan

Subject: [PATCH 4/6] ARM: net: add support for BPF_ANC | SKF_AD_PKTTYPE in ARM JIT.
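
This allows the JIT to handle the packet type ancillary load
(A = skb->pkt_type). Since pkt_type is a bitfield in struct sk_buff,
the emitted code loads the byte at PKT_TYPE_OFFSET(), masks it with
PKT_TYPE_MAX and, on big-endian bitfield layouts, shifts the result
down. Roughly, as a C sketch:

	A = *((u8 *)skb + PKT_TYPE_OFFSET()) & PKT_TYPE_MAX;
#ifdef __BIG_ENDIAN_BITFIELD
	A >>= 5;		/* PKT_TYPE_MAX is (7 << 5) here */
#endif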

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c011e22..6ff248c 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -895,6 +895,17 @@ b_epilogue:
OP_IMM3(ARM_AND, r_A, r_A, 0x1, ctx);
}
break;
+ case BPF_ANC | SKF_AD_PKTTYPE:
+ ctx->seen |= SEEN_SKB;
+ BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
+ __pkt_type_offset[0]) != 1);
+ off = PKT_TYPE_OFFSET();
+ emit(ARM_LDRB_I(r_A, r_skb, off), ctx);
+ emit(ARM_AND_I(r_A, r_A, PKT_TYPE_MAX), ctx);
+#ifdef __BIG_ENDIAN_BITFIELD
+ emit(ARM_LSR_I(r_A, r_A, 5), ctx);
+#endif
+ break;
case BPF_ANC | SKF_AD_QUEUE:
ctx->seen |= SEEN_SKB;
BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff,
--
1.9.1

2015-07-16 16:47:30

by Nicolas Schichan

Subject: [PATCH 5/6] ARM: net: add support for BPF_ANC | SKF_AD_PAY_OFFSET in ARM JIT.
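
The payload offset is computed by the existing skb_get_poff() helper,
so the JIT emits a call to it (hence SEEN_CALL) rather than open-coding
the flow dissection. Roughly, as a C sketch:

	/* A = offset of the payload start for this skb */
	A = skb_get_poff(skb);

r_skb is moved into r0 for the call and the result is moved from r0
back into r_A.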

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6ff248c..3c73caf 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -915,6 +915,14 @@ b_epilogue:
off = offsetof(struct sk_buff, queue_mapping);
emit(ARM_LDRH_I(r_A, r_skb, off), ctx);
break;
+ case BPF_ANC | SKF_AD_PAY_OFFSET:
+ ctx->seen |= SEEN_SKB | SEEN_CALL;
+
+ emit(ARM_MOV_R(ARM_R0, r_skb), ctx);
+ emit_mov_i(ARM_R3, (unsigned int)skb_get_poff, ctx);
+ emit_blx_r(ARM_R3, ctx);
+ emit(ARM_MOV_R(r_A, ARM_R0), ctx);
+ break;
case BPF_LDX | BPF_W | BPF_ABS:
/*
* load a 32bit word from struct seccomp_data.
--
1.9.1

2015-07-16 16:46:54

by Nicolas Schichan

Subject: [PATCH 6/6] ARM: net: add support for BPF_ANC | SKF_AD_HATYPE in ARM JIT.
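
This reuses the SKF_AD_IFINDEX code to load and null-check skb->dev,
then loads the halfword type field instead of ifindex. Roughly, as a
C sketch (as for IFINDEX, the filter returns 0 when skb->dev is NULL):

	if (!skb->dev)
		return 0;
	A = skb->dev->type;

Because offsetof(struct net_device, type) does not fit in the ldrh
immediate field, the offset is loaded into a register and a new
ARM_LDRH_R (register offset) encoding is used.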

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 22 ++++++++++++++++++++--
arch/arm/net/bpf_jit_32.h | 3 +++
2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 3c73caf..876060b 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -857,7 +857,9 @@ b_epilogue:
emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
break;
case BPF_ANC | SKF_AD_IFINDEX:
+ case BPF_ANC | SKF_AD_HATYPE:
/* A = skb->dev->ifindex */
+ /* A = skb->dev->type */
ctx->seen |= SEEN_SKB;
off = offsetof(struct sk_buff, dev);
emit(ARM_LDR_I(r_scratch, r_skb, off), ctx);
@@ -867,8 +869,24 @@ b_epilogue:

BUILD_BUG_ON(FIELD_SIZEOF(struct net_device,
ifindex) != 4);
- off = offsetof(struct net_device, ifindex);
- emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
+ BUILD_BUG_ON(FIELD_SIZEOF(struct net_device,
+ type) != 2);
+
+ if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
+ off = offsetof(struct net_device, ifindex);
+ emit(ARM_LDR_I(r_A, r_scratch, off), ctx);
+ } else {
+ /*
+ * offset of field "type" in "struct
+ * net_device" is above what can be
+ * used in the ldrh rd, [rn, #imm]
+ * instruction, so load the offset in
+ * a register and use ldrh rd, [rn, rm]
+ */
+ off = offsetof(struct net_device, type);
+ emit_mov_i(ARM_R3, off, ctx);
+ emit(ARM_LDRH_R(r_A, r_scratch, ARM_R3), ctx);
+ }
break;
case BPF_ANC | SKF_AD_MARK:
ctx->seen |= SEEN_SKB;
diff --git a/arch/arm/net/bpf_jit_32.h b/arch/arm/net/bpf_jit_32.h
index b2d7d92..4b17d5ab 100644
--- a/arch/arm/net/bpf_jit_32.h
+++ b/arch/arm/net/bpf_jit_32.h
@@ -74,6 +74,7 @@
#define ARM_INST_LDRB_I 0x05d00000
#define ARM_INST_LDRB_R 0x07d00000
#define ARM_INST_LDRH_I 0x01d000b0
+#define ARM_INST_LDRH_R 0x019000b0
#define ARM_INST_LDR_I 0x05900000

#define ARM_INST_LDM 0x08900000
@@ -160,6 +161,8 @@
| (rm))
#define ARM_LDRH_I(rt, rn, off) (ARM_INST_LDRH_I | (rt) << 12 | (rn) << 16 \
| (((off) & 0xf0) << 4) | ((off) & 0xf))
+#define ARM_LDRH_R(rt, rn, rm) (ARM_INST_LDRH_R | (rt) << 12 | (rn) << 16 \
+ | (rm))

#define ARM_LDM(rn, regs) (ARM_INST_LDM | (rn) << 16 | (regs))

--
1.9.1

2015-07-16 17:35:41

by Alexei Starovoitov

Subject: Re: [PATCH 0/6] BPF JIT fixes and features for ARM

On 7/16/15 9:46 AM, Nicolas Schichan wrote:
> This series fixes issues with the ARM BPF JIT and adds support for more
> instructions to the ARM BPF JIT.
>
> The first three patches fix bugs in the ARM JIT and should
> probably find their way to a stable kernel.
>
> The last three patches add support to the ARM JIT for more BPF
> instructions, namely skb netdevice type retrieval, skb payload offset
> retrieval, and skb packet type retrieval.
>
> With the first three patches, all 60 test_bpf tests in the Linux 4.1
> release now pass (54 out of 60 passed before).
>
> The last three patches allow 35 tests to use the JIT instead of 29
> before.

looks good to me.
For the series:
Acked-by: Alexei Starovoitov <[email protected]>

you might want to try the latest 4.2-rc, since it has 238 tests :)

2015-07-21 03:35:09

by David Miller

Subject: Re: [PATCH 0/6] BPF JIT fixes and features for ARM

From: Nicolas Schichan <[email protected]>
Date: Thu, 16 Jul 2015 18:46:29 +0200

> This series fixes issues with the ARM BPF JIT and adds support for more
> instructions to the ARM BPF JIT.
>
> The first three patches fix bugs in the ARM JIT and should
> probably find their way to a stable kernel.
>
> The last three patches add support to the ARM JIT for more BPF
> instructions, namely skb netdevice type retrieval, skb payload offset
> retrieval, and skb packet type retrieval.
>
> With the first three patches, all 60 test_bpf tests in the Linux 4.1
> release now pass (54 out of 60 passed before).
>
> The last three patches allow 35 tests to use the JIT instead of 29
> before.
>
> Like previous ARM JIT patches this should go via the net tree.

You've submitted this in a manner which is very mixed up and difficult
for me to handle.

For the bug fixes, create a separate series that targets the 'net' tree.

Next, create another series with the new features that target 'net-next'.
Carefully and clearly indicate that the series depends upon the bug fix
one and I'll sort it out.

Thanks.