2023-05-02 17:06:25

by Will Deacon

Subject: [PATCH] bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields

A narrow load from a 64-bit context field results in a 64-bit load
followed potentially by a 64-bit right-shift and then a bitwise AND
operation to extract the relevant data.

In the case of a 32-bit access, an immediate mask of 0xffffffff is used
to construct a 64-bit BPF_AND operation, which then sign-extends the mask
value and effectively acts as a glorified no-op.

Fix the mask generation so that narrow loads always perform a 32-bit AND
operation.

Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: John Fastabend <[email protected]>
Cc: Krzesimir Nowak <[email protected]>
Cc: Yonghong Song <[email protected]>
Cc: Andrey Ignatov <[email protected]>
Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
Signed-off-by: Will Deacon <[email protected]>
---

I spotted this while playing around with the JIT on arm64. I can't
figure out why 31fd85816dbe special-cases 8-byte ctx fields in the
first place, so I fear I may be missing something...

kernel/bpf/verifier.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fbcf5a4e2fcd..5871aa78d01a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
insn->dst_reg,
shift);
- insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
+ insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
(1ULL << size * 8) - 1);
}
}
--
2.40.1.495.gc816e09b53d-goog


2023-05-04 21:00:33

by Yonghong Song

Subject: Re: [PATCH] bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields



On 5/2/23 9:57 AM, Will Deacon wrote:
> A narrow load from a 64-bit context field results in a 64-bit load
> followed potentially by a 64-bit right-shift and then a bitwise AND
> operation to extract the relevant data.
>
> In the case of a 32-bit access, an immediate mask of 0xffffffff is used
> to construct a 64-bit BPF_AND operation, which then sign-extends the mask
> value and effectively acts as a glorified no-op.
>
> Fix the mask generation so that narrow loads always perform a 32-bit AND
> operation.
>
> Cc: Alexei Starovoitov <[email protected]>
> Cc: Daniel Borkmann <[email protected]>
> Cc: John Fastabend <[email protected]>
> Cc: Krzesimir Nowak <[email protected]>
> Cc: Yonghong Song <[email protected]>
> Cc: Andrey Ignatov <[email protected]>
> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> Signed-off-by: Will Deacon <[email protected]>


Thanks for the fix! You didn't miss anything. It is a bug, and we
probably did not find it because users always write
'u64 val = ctx->u64_field' in their bpf code...

But I think the commit message can be improved. An example showing the
difference with and without this patch would explain the issue much better.

Acked-by: Yonghong Song <[email protected]>

> ---
>
> I spotted this while playing around with the JIT on arm64. I can't
> figure out why 31fd85816dbe special-cases 8-byte ctx fields in the
> first place, so I fear I may be missing something...
>
> kernel/bpf/verifier.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fbcf5a4e2fcd..5871aa78d01a 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
> insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
> insn->dst_reg,
> shift);
> - insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
> + insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
> (1ULL << size * 8) - 1);
> }
> }

2023-05-05 15:44:46

by Alexei Starovoitov

Subject: Re: [PATCH] bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields

On Thu, May 4, 2023 at 1:18 PM Yonghong Song <[email protected]> wrote:
>
>
>
> On 5/2/23 9:57 AM, Will Deacon wrote:
> > A narrow load from a 64-bit context field results in a 64-bit load
> > followed potentially by a 64-bit right-shift and then a bitwise AND
> > operation to extract the relevant data.
> >
> > In the case of a 32-bit access, an immediate mask of 0xffffffff is used
> > to construct a 64-bit BPF_AND operation, which then sign-extends the mask
> > value and effectively acts as a glorified no-op.
> >
> > Fix the mask generation so that narrow loads always perform a 32-bit AND
> > operation.
> >
> > Cc: Alexei Starovoitov <[email protected]>
> > Cc: Daniel Borkmann <[email protected]>
> > Cc: John Fastabend <[email protected]>
> > Cc: Krzesimir Nowak <[email protected]>
> > Cc: Yonghong Song <[email protected]>
> > Cc: Andrey Ignatov <[email protected]>
> > Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
> > Signed-off-by: Will Deacon <[email protected]>
>
>
> Thanks for the fix! You didn't miss anything. It is a bug, and we
> probably did not find it because users always write
> 'u64 val = ctx->u64_field' in their bpf code...
>
> But I think the commit message can be improved. An example showing the
> difference with and without this patch would explain the issue much better.
>
> Acked-by: Yonghong Song <[email protected]>

If I'm reading it correctly, it's indeed a bug.
alu64(and, 0xffffFFFF) is a nop,
but it should have been
alu32(and, 0xffffFFFF), which will clear the upper 32 bits, right?
Would be good to have a selftest for this.

2023-05-05 16:43:25

by Yonghong Song

Subject: Re: [PATCH] bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields



On 5/5/23 8:30 AM, Alexei Starovoitov wrote:
> On Thu, May 4, 2023 at 1:18 PM Yonghong Song <[email protected]> wrote:
>>
>>
>>
>> On 5/2/23 9:57 AM, Will Deacon wrote:
>>> A narrow load from a 64-bit context field results in a 64-bit load
>>> followed potentially by a 64-bit right-shift and then a bitwise AND
>>> operation to extract the relevant data.
>>>
>>> In the case of a 32-bit access, an immediate mask of 0xffffffff is used
>>> to construct a 64-bit BPF_AND operation, which then sign-extends the mask
>>> value and effectively acts as a glorified no-op.
>>>
>>> Fix the mask generation so that narrow loads always perform a 32-bit AND
>>> operation.
>>>
>>> Cc: Alexei Starovoitov <[email protected]>
>>> Cc: Daniel Borkmann <[email protected]>
>>> Cc: John Fastabend <[email protected]>
>>> Cc: Krzesimir Nowak <[email protected]>
>>> Cc: Yonghong Song <[email protected]>
>>> Cc: Andrey Ignatov <[email protected]>
>>> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
>>> Signed-off-by: Will Deacon <[email protected]>
>>
>>
>> Thanks for the fix! You didn't miss anything. It is a bug, and we
>> probably did not find it because users always write
>> 'u64 val = ctx->u64_field' in their bpf code...
>>
>> But I think the commit message can be improved. An example showing the
>> difference with and without this patch would explain the issue much better.
>>
>> Acked-by: Yonghong Song <[email protected]>
>
> If I'm reading it correctly, it's indeed a bug.
> alu64(and, 0xffffFFFF) is a nop,
> but it should have been
> alu32(and, 0xffffFFFF), which will clear the upper 32 bits, right?

Right. This is my understanding as well.

> Would be good to have a selftest for this.