2024-04-02 21:22:59

by Harishankar Vishwanathan

Subject: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

The scalar(32)_min_max_and/or/xor functions can exhibit unsound behavior
when setting signed bounds. The following example illustrates the issue for
scalar_min_max_and(), but it applies to the other functions.

In scalar_min_max_and() the following clause is executed when ANDing
positive numbers:

/* ANDing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;

However, if umin_value and umax_value of dst_reg cross the sign boundary
(i.e., if (s64)dst_reg->umin_value > (s64)dst_reg->umax_value), then we
will end up with smin_value > smax_value, which is unsound.
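
As an illustration, here is a minimal userspace sketch (not kernel code; the
struct below is a hypothetical stand-in for bpf_reg_state) showing how copying
an unsigned range that crosses the sign boundary into the signed bounds yields
smin_value > smax_value:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, stripped-down stand-in for bpf_reg_state. */
struct reg {
	uint64_t umin_value, umax_value;
	int64_t smin_value, smax_value;
};

int main(void)
{
	/* An unsigned range that crosses the s64 sign boundary:
	 * umin < 2^63 <= umax, so umin <= umax still holds as u64.
	 */
	struct reg dst = {
		.umin_value = 0x7fffffffffffff00ULL,
		.umax_value = 0x8000000000000100ULL,
	};

	/* Copying it into the signed bounds, as the "ANDing two
	 * positives" clause does, flips the order: umin becomes a
	 * large positive s64, while umax wraps to a negative s64.
	 */
	dst.smin_value = dst.umin_value;
	dst.smax_value = dst.umax_value;

	printf("smin=%lld smax=%lld unsound=%d\n",
	       (long long)dst.smin_value, (long long)dst.smax_value,
	       dst.smin_value > dst.smax_value);
	return 0;
}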

Previous works [1, 2] have discovered and reported this issue. Our tool
Agni [2, 3] considers it a false positive. This is because, during the
verification of the abstract operator scalar_min_max_and(), Agni restricts
its inputs to those passing through reg_bounds_sync(). This mimics
real-world verifier behavior, as reg_bounds_sync() is invariably executed
at the tail of every abstract operator. Therefore, such behavior is
unlikely in an actual verifier execution.

However, it is still unsound for an abstract operator to set signed bounds
such that smin_value > smax_value. This patch fixes it, making the abstract
operator sound for all (well-formed) inputs.

It is worth noting that we can update the signed bounds using the unsigned
bounds whenever the unsigned bounds do not cross the sign boundary (not
just when the input signed bounds are positive, as was the case
previously). This patch does exactly that.
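
As a sketch of the condition being applied (illustrative only, written in
userspace C with stdint types rather than the kernel's u64/s64; the helper
name is ours, not the kernel's):

#include <stdint.h>
#include <stdbool.h>

/* The signed bounds may be copied directly from the unsigned bounds only
 * when the unsigned range, reinterpreted as s64, is still ordered, i.e.
 * it does not cross the sign boundary at 2^63.
 */
bool u64_range_fits_s64(uint64_t umin, uint64_t umax)
{
	return (int64_t)umin <= (int64_t)umax;
}

Note that this condition also accepts ranges lying entirely above the sign
boundary (both endpoints negative when viewed as s64), a case the previous
input-sign check could not take advantage of.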

An alternative approach to fix this latent unsoundness would be to
unconditionally set the signed bounds to unbounded [S64_MIN, S64_MAX], and
let reg_bounds_sync() refine the signed bounds using the unsigned bounds
and the tnum. We found that our approach produces more precise (tighter)
bounds.

For example, consider these inputs to BPF_AND:

/* dst_reg */
var_off.value: 8608032320201083347
var_off.mask: 615339716653692460
smin_value: 8070450532247928832
smax_value: 8070450532247928832
umin_value: 13206380674380886586
umax_value: 13206380674380886586
s32_min_value: -2110561598
s32_max_value: -133438816
u32_min_value: 4135055354
u32_max_value: 4135055354

/* src_reg */
var_off.value: 8584102546103074815
var_off.mask: 9862641527606476800
smin_value: 2920655011908158522
smax_value: 7495731535348625717
umin_value: 7001104867969363969
umax_value: 8584102543730304042
s32_min_value: -2097116671
s32_max_value: 71704632
u32_min_value: 1047457619
u32_max_value: 4268683090

After going through tnum_and() -> scalar32_min_max_and() ->
scalar_min_max_and() -> reg_bounds_sync(), our patch produces the following
bounds for s32:
s32_min_value: -1263875629
s32_max_value: -159911942

Whereas, setting the signed bounds to unbounded in scalar_min_max_and()
produces:
s32_min_value: -1263875629
s32_max_value: -1

As observed, our patch produces a tighter s32 bound. We also confirmed
using Agni and SMT verification that our patch always produces signed
bounds that are equal to or more precise than setting the signed bounds to
unbounded in scalar_min_max_and().

[1] https://sanjit-bhat.github.io/assets/pdf/ebpf-verifier-range-analysis22.pdf
[2] https://link.springer.com/chapter/10.1007/978-3-031-37709-9_12
[3] https://github.com/bpfverif/agni

Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Co-developed-by: Matan Shachnai <[email protected]>
Signed-off-by: Matan Shachnai <[email protected]>
Co-developed-by: Srinivas Narayana <[email protected]>
Signed-off-by: Srinivas Narayana <[email protected]>
Co-developed-by: Santosh Nagarakatte <[email protected]>
Signed-off-by: Santosh Nagarakatte <[email protected]>
Signed-off-by: Harishankar Vishwanathan <[email protected]>
---
kernel/bpf/verifier.c | 94 ++++++++++++++++++++-----------------------
1 file changed, 43 insertions(+), 51 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fcb62300f407..a7404a7d690f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
return;
}

- /* We get our minimum from the var_off, since that's inherently
+ /* We get our minimum from the var32_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
*/
dst_reg->u32_min_value = var32_off.value;
dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
- if (dst_reg->s32_min_value < 0 || smin_val < 0) {
- /* Lose signed bounds when ANDing negative numbers,
- * ain't nobody got time for that.
- */
- dst_reg->s32_min_value = S32_MIN;
- dst_reg->s32_max_value = S32_MAX;
- } else {
- /* ANDing two positives gives a positive, so safe to
- * cast result into s64.
- */
+
+ /* Safe to set s32 bounds by casting u32 result into s32 when u32
+ * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+ */
+ if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
dst_reg->s32_min_value = dst_reg->u32_min_value;
dst_reg->s32_max_value = dst_reg->u32_max_value;
+ } else {
+ dst_reg->s32_min_value = S32_MIN;
+ dst_reg->s32_max_value = S32_MAX;
}
}

@@ -13364,18 +13362,16 @@ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
*/
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
- if (dst_reg->smin_value < 0 || smin_val < 0) {
- /* Lose signed bounds when ANDing negative numbers,
- * ain't nobody got time for that.
- */
- dst_reg->smin_value = S64_MIN;
- dst_reg->smax_value = S64_MAX;
- } else {
- /* ANDing two positives gives a positive, so safe to
- * cast result into s64.
- */
+
+ /* Safe to set s64 bounds by casting u64 result into s64 when u64
+ * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+ */
+ if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
+ } else {
+ dst_reg->smin_value = S64_MIN;
+ dst_reg->smax_value = S64_MAX;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
@@ -13395,23 +13391,21 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
return;
}

- /* We get our maximum from the var_off, and our minimum is the
- * maximum of the operands' minima
+ /* We get our maximum from the var32_off, and our minimum is the
+ * maximum of the operands' minima.
*/
dst_reg->u32_min_value = max(dst_reg->u32_min_value, umin_val);
dst_reg->u32_max_value = var32_off.value | var32_off.mask;
- if (dst_reg->s32_min_value < 0 || smin_val < 0) {
- /* Lose signed bounds when ORing negative numbers,
- * ain't nobody got time for that.
- */
- dst_reg->s32_min_value = S32_MIN;
- dst_reg->s32_max_value = S32_MAX;
- } else {
- /* ORing two positives gives a positive, so safe to
- * cast result into s64.
- */
+
+ /* Safe to set s32 bounds by casting u32 result into s32 when u32
+ * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+ */
+ if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
dst_reg->s32_min_value = dst_reg->u32_min_value;
dst_reg->s32_max_value = dst_reg->u32_max_value;
+ } else {
+ dst_reg->s32_min_value = S32_MIN;
+ dst_reg->s32_max_value = S32_MAX;
}
}

@@ -13433,18 +13427,16 @@ static void scalar_min_max_or(struct bpf_reg_state *dst_reg,
*/
dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
dst_reg->umax_value = dst_reg->var_off.value | dst_reg->var_off.mask;
- if (dst_reg->smin_value < 0 || smin_val < 0) {
- /* Lose signed bounds when ORing negative numbers,
- * ain't nobody got time for that.
- */
- dst_reg->smin_value = S64_MIN;
- dst_reg->smax_value = S64_MAX;
- } else {
- /* ORing two positives gives a positive, so safe to
- * cast result into s64.
- */
+
+ /* Safe to set s64 bounds by casting u64 result into s64 when u64
+ * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+ */
+ if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
+ } else {
+ dst_reg->smin_value = S64_MIN;
+ dst_reg->smax_value = S64_MAX;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
@@ -13467,10 +13459,10 @@ static void scalar32_min_max_xor(struct bpf_reg_state *dst_reg,
dst_reg->u32_min_value = var32_off.value;
dst_reg->u32_max_value = var32_off.value | var32_off.mask;

- if (dst_reg->s32_min_value >= 0 && smin_val >= 0) {
- /* XORing two positive sign numbers gives a positive,
- * so safe to cast u32 result into s32.
- */
+ /* Safe to set s32 bounds by casting u32 result into s32 when u32
+ * doesn't cross sign boundary. Otherwise set s32 bounds to unbounded.
+ */
+ if ((s32)dst_reg->u32_min_value <= (s32)dst_reg->u32_max_value) {
dst_reg->s32_min_value = dst_reg->u32_min_value;
dst_reg->s32_max_value = dst_reg->u32_max_value;
} else {
@@ -13496,10 +13488,10 @@ static void scalar_min_max_xor(struct bpf_reg_state *dst_reg,
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = dst_reg->var_off.value | dst_reg->var_off.mask;

- if (dst_reg->smin_value >= 0 && smin_val >= 0) {
- /* XORing two positive sign numbers gives a positive,
- * so safe to cast u64 result into s64.
- */
+ /* Safe to set s64 bounds by casting u64 result into s64 when u64
+ * doesn't cross sign boundary. Otherwise set s64 bounds to unbounded.
+ */
+ if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
} else {
--
2.40.1



2024-04-03 13:32:03

by Edward Cree

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On 4/2/24 22:20, Harishankar Vishwanathan wrote:
> Previous works [1, 2] have discovered and reported this issue. Our tool
> Agni [2, 3] consideres it a false positive. This is because, during the
> verification of the abstract operator scalar_min_max_and(), Agni restricts
> its inputs to those passing through reg_bounds_sync(). This mimics
> real-world verifier behavior, as reg_bounds_sync() is invariably executed
> at the tail of every abstract operator. Therefore, such behavior is
> unlikely in an actual verifier execution.
>
> However, it is still unsound for an abstract operator to set signed bounds
> such that smin_value > smax_value. This patch fixes it, making the abstract
> operator sound for all (well-formed) inputs.

Just to check I'm understanding correctly: you're saying that the existing
code has an undocumented precondition, that's currently maintained by the
callers, and your patch removes the precondition in case a future patch
(or cosmic rays?) makes a call without satisfying it?
Or is it in principle possible (just "unlikely") for a program to induce
the current verifier to call scalar_min_max_foo() on a register that
hasn't been through reg_bounds_sync()?
If the former, I think Fixes: is inappropriate here as there is no need to
backport this change to stable kernels, although I agree the change is
worth making in -next.

> It is worth noting that we can update the signed bounds using the unsigned
> bounds whenever the unsigned bounds do not cross the sign boundary (not
> just when the input signed bounds are positive, as was the case
> previously). This patch does exactly that
Commit message could also make clearer that the new code considers whether
the *output* ubounds cross sign, rather than looking at the input bounds
as the previous code did. At first I was confused as to why XOR didn't
need special handling (since -ve xor -ve is +ve).

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index fcb62300f407..a7404a7d690f 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
> return;
> }
>
> - /* We get our minimum from the var_off, since that's inherently
> + /* We get our minimum from the var32_off, since that's inherently
> * bitwise. Our maximum is the minimum of the operands' maxima.
> */

This change, adjusting a comment to match the existing code, should probably
be in a separate patch.
> @@ -13395,23 +13391,21 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
> return;
> }
>
> - /* We get our maximum from the var_off, and our minimum is the
> - * maximum of the operands' minima
> + /* We get our maximum from the var32_off, and our minimum is the
> + * maximum of the operands' minima.
> */

Same here.

Apart from that,
Acked-by: Edward Cree <[email protected]>

2024-04-04 02:40:59

by Harishankar Vishwanathan

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On Wed, Apr 3, 2024 at 9:25 AM Edward Cree <[email protected]> wrote:
>
> On 4/2/24 22:20, Harishankar Vishwanathan wrote:
> > Previous works [1, 2] have discovered and reported this issue. Our tool
> > Agni [2, 3] considers it a false positive. This is because, during the
> > verification of the abstract operator scalar_min_max_and(), Agni restricts
> > its inputs to those passing through reg_bounds_sync(). This mimics
> > real-world verifier behavior, as reg_bounds_sync() is invariably executed
> > at the tail of every abstract operator. Therefore, such behavior is
> > unlikely in an actual verifier execution.
> >
> > However, it is still unsound for an abstract operator to set signed bounds
> > such that smin_value > smax_value. This patch fixes it, making the abstract
> > operator sound for all (well-formed) inputs.
>
> Just to check I'm understanding correctly: you're saying that the existing
> code has an undocumented precondition, that's currently maintained by the
> callers, and your patch removes the precondition in case a future patch
> (or cosmic rays?) makes a call without satisfying it?
> Or is it in principle possible (just "unlikely") for a program to induce
> the current verifier to call scalar_min_max_foo() on a register that
> hasn't been through reg_bounds_sync()?
> If the former, I think Fixes: is inappropriate here as there is no need to
> backport this change to stable kernels, although I agree the change is
> worth making in -next.

You are kind of right on both counts.

The existing code contains an undocumented precondition. When violated,
scalar_min_max_and() can produce unsound s64 bounds (where smin > smax).
Certain well-formed register state inputs can violate this precondition,
resulting in eventual unsoundness. However, register states that have
passed through reg_bounds_sync() -- or those that are completely known or
completely unknown -- satisfy the precondition, preventing unsoundness.

Since we haven’t examined all possible paths through the verifier, and we
cannot guarantee that every instruction preceding a BPF_AND in an eBPF
program will maintain the precondition, we cannot definitively say that
register state inputs to scalar_min_max_and() will always meet the
precondition. There is a potential for an invocation of
scalar_min_max_and() on a register state that hasn’t undergone
reg_bounds_sync(). The patch indeed removes the precondition.
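
To make the implicit precondition concrete (again an illustrative userspace
sketch with a hypothetical struct, not the kernel's formulation),
reg_bounds_sync() keeps the signed and unsigned views of a register
consistent, and one consequence of that consistency is what the old clause
relied on:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical register snapshot, as in the earlier sketch. */
struct reg {
	uint64_t umin_value, umax_value;
	int64_t smin_value, smax_value;
};

/* One consequence of reg_bounds_sync() that the old clause implicitly
 * relied on: if the signed view says the value is non-negative, the
 * unsigned maximum cannot exceed S64_MAX, so the unsigned range cannot
 * cross the sign boundary.  A state that is merely well-formed
 * (smin <= smax and umin <= umax, taken separately) need not satisfy
 * this.
 */
bool signed_unsigned_views_consistent(const struct reg *r)
{
	return r->smin_value < 0 || r->umax_value <= (uint64_t)INT64_MAX;
}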

Given the above, please advise if we should backport this patch to older
kernels (and whether I should use the fixes tag).

> > It is worth noting that we can update the signed bounds using the unsigned
> > bounds whenever the unsigned bounds do not cross the sign boundary (not
> > just when the input signed bounds are positive, as was the case
> > previously). This patch does exactly that
> Commit message could also make clearer that the new code considers whether
> the *output* ubounds cross sign, rather than looking at the input bounds
> as the previous code did. At first I was confused as to why XOR didn't
> need special handling (since -ve xor -ve is +ve).

Sounds good; I'll make that clearer in the context of what the existing
code does. However, I wanted to clarify that XOR does indeed use the same
handling as all the other operations. Could you elaborate on what you
mean?

> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index fcb62300f407..a7404a7d690f 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
> > return;
> > }
> >
> > - /* We get our minimum from the var_off, since that's inherently
> > + /* We get our minimum from the var32_off, since that's inherently
> > * bitwise. Our maximum is the minimum of the operands' maxima.
> > */
>
> This change, adjusting a comment to match the existing code, should probably
> be in a separate patch.

Sounds good.

> > @@ -13395,23 +13391,21 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
> > return;
> > }
> >
> > - /* We get our maximum from the var_off, and our minimum is the
> > - * maximum of the operands' minima
> > + /* We get our maximum from the var32_off, and our minimum is the
> > + * maximum of the operands' minima.
> > */
>
> Same here.
>
> Apart from that,
> Acked-by: Edward Cree <[email protected]>

2024-04-04 03:05:23

by Harishankar Vishwanathan

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On Wed, Apr 3, 2024 at 5:09 AM Daniel Borkmann <[email protected]> wrote:
>
> On 4/2/24 11:20 PM, Harishankar Vishwanathan wrote:
> [...]
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index fcb62300f407..a7404a7d690f 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -13326,23 +13326,21 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
> > return;
> > }
> >
> > - /* We get our minimum from the var_off, since that's inherently
> > + /* We get our minimum from the var32_off, since that's inherently
> > * bitwise. Our maximum is the minimum of the operands' maxima.
> > */
> > dst_reg->u32_min_value = var32_off.value;
> > dst_reg->u32_max_value = min(dst_reg->u32_max_value, umax_val);
> > - if (dst_reg->s32_min_value < 0 || smin_val < 0) {
>
> The smin_val variables are now unused, triggering the following warnings:
>
> ../kernel/bpf/verifier.c:13321:6: warning: unused variable 'smin_val' [-Wunused-variable]
> 13321 | s32 smin_val = src_reg->s32_min_value;
> | ^~~~~~~~
> ../kernel/bpf/verifier.c:13352:6: warning: unused variable 'smin_val' [-Wunused-variable]
> 13352 | s64 smin_val = src_reg->smin_value;
> | ^~~~~~~~
> ../kernel/bpf/verifier.c:13386:6: warning: unused variable 'smin_val' [-Wunused-variable]
> 13386 | s32 smin_val = src_reg->s32_min_value;
> | ^~~~~~~~
> ../kernel/bpf/verifier.c:13417:6: warning: unused variable 'smin_val' [-Wunused-variable]
> 13417 | s64 smin_val = src_reg->smin_value;
> | ^~~~~~~~
> ../kernel/bpf/verifier.c:13451:6: warning: unused variable 'smin_val' [-Wunused-variable]
> 13451 | s32 smin_val = src_reg->s32_min_value;
> | ^~~~~~~~
> ../kernel/bpf/verifier.c:13479:6: warning: unused variable 'smin_val' [-Wunused-variable]
> 13479 | s64 smin_val = src_reg->smin_value;
> | ^~~~~~~~
>
> Removing these, it builds fine; please follow up with a v3.

Apologies. Yes, these smin_vals are not required anymore. I'll remove them
when sending the v3.

> Thanks,
> Daniel

2024-04-09 13:28:22

by Shung-Hsi Yu

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On Wed, Apr 03, 2024 at 10:40:23PM -0400, Harishankar Vishwanathan wrote:
> On Wed, Apr 3, 2024 at 9:25 AM Edward Cree <[email protected]> wrote:
> > On 4/2/24 22:20, Harishankar Vishwanathan wrote:
> > > Previous works [1, 2] have discovered and reported this issue. Our tool
> > > Agni [2, 3] considers it a false positive. This is because, during the
> > > verification of the abstract operator scalar_min_max_and(), Agni restricts
> > > its inputs to those passing through reg_bounds_sync(). This mimics
> > > real-world verifier behavior, as reg_bounds_sync() is invariably executed
> > > at the tail of every abstract operator. Therefore, such behavior is
> > > unlikely in an actual verifier execution.
> > >
> > > However, it is still unsound for an abstract operator to set signed bounds
> > > such that smin_value > smax_value. This patch fixes it, making the abstract
> > > operator sound for all (well-formed) inputs.
> >
> > Just to check I'm understanding correctly: you're saying that the existing
> > code has an undocumented precondition, that's currently maintained by the
> > callers, and your patch removes the precondition in case a future patch
> > (or cosmic rays?) makes a call without satisfying it?
> > Or is it in principle possible (just "unlikely") for a program to induce
> > the current verifier to call scalar_min_max_foo() on a register that
> > hasn't been through reg_bounds_sync()?
> > If the former, I think Fixes: is inappropriate here as there is no need to
> > backport this change to stable kernels, although I agree the change is
> > worth making in -next.
>
> You are kind of right on both counts.
>
> The existing code contains an undocumented precondition. When violated,
> scalar_min_max_and() can produce unsound s64 bounds (where smin > smax).
> Certain well-formed register state inputs can violate this precondition,
> resulting in eventual unsoundness. However, register states that have
> passed through reg_bounds_sync() -- or those that are completely known or
> completely unknown -- satisfy the precondition, preventing unsoundness.
>
> Since we haven’t examined all possible paths through the verifier, and we
> cannot guarantee that every instruction preceding a BPF_AND in an eBPF
> program will maintain the precondition, we cannot definitively say that
> register state inputs to scalar_min_max_and() will always meet the
> precondition. There is a potential for an invocation of
> scalar_min_max_and() on a register state that hasn’t undergone
> reg_bounds_sync(). The patch indeed removes the precondition.
>
> Given the above, please advise if we should backport this patch to older
> kernels (and whether I should use the fixes tag).

I suggested the Fixes tag to Harishankar in the v1 patchset, admittedly
without as thorough an understanding as laid out above.

However, given that smin_value > smax_value is something we check in
reg_bounds_sanity_check(), I would still vote to have the patch
backported to stable (with "Cc: [email protected]"?) even if the
Fixes tag is dropped. The overall change should be rather well contained
and isolated, making the backport relatively easy; and it would probably
save some head scratching over the difference in behavior between
mainline and stable.
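
For context, the invariant referred to here is roughly the following; this is
an illustrative userspace sketch, not the kernel's actual
reg_bounds_sanity_check(), which also covers the 32-bit bounds and more:

#include <stdint.h>
#include <stdbool.h>

struct reg {
	uint64_t umin_value, umax_value;
	int64_t smin_value, smax_value;
};

/* Roughly the invariant in question: each bound pair must be ordered.
 * A state with smin_value > smax_value, as the unpatched and/or/xor
 * helpers could produce, violates it.
 */
bool bounds_are_sane(const struct reg *r)
{
	return r->smin_value <= r->smax_value &&
	       r->umin_value <= r->umax_value;
}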

> [...]

2024-04-09 17:17:19

by Edward Cree

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On 04/04/2024 03:40, Harishankar Vishwanathan wrote:
> [...]
> Given the above, please advise if we should backport this patch to older
> kernels (and whether I should use the fixes tag).

I don't feel too strongly about it, and if you or Shung-Hsi still
think, on reflection, that backporting is desirable, then go ahead
and keep the Fixes: tag.
But maybe tweak the description so someone doesn't see "latent
unsoundness" and think they need to CVE and rush this patch out as
a security thing; it's more like hardening. *shrug*

>> Commit message could also make clearer that the new code considers whether
>> the *output* ubounds cross sign, rather than looking at the input bounds
>> as the previous code did. At first I was confused as to why XOR didn't
>> need special handling (since -ve xor -ve is +ve).
>
> Sounds good regarding making it clearer within the context of what the
> existing code does. However, I wanted to clarify that XOR does indeed use
> the same handling as all the other operations. Could you elaborate on what
> you mean?

Just that if you XOR two negative numbers you get a positive number,
which isn't true for AND or OR; and my confused little brain thought
that fact was relevant, which it isn't.
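
A minimal illustration of that fact (plain C, two's-complement arithmetic
assumed):

#include <stdio.h>

int main(void)
{
	/* XORing two negative numbers clears the sign bit, so the
	 * result is non-negative: (-5) ^ (-3) == 6.
	 */
	printf("%d\n", (-5) ^ (-3));
	return 0;
}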

-e

2024-04-10 11:44:13

by Shung-Hsi Yu

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On Tue, Apr 09, 2024 at 06:17:05PM +0100, Edward Cree wrote:
> I don't feel too strongly about it, and if you or Shung-Hsi still
> think, on reflection, that backporting is desirable, then go ahead
> and keep the Fixes: tag.
> But maybe tweak the description so someone doesn't see "latent
> unsoundness" and think they need to CVE and rush this patch out as
> a security thing; it's more like hardening. *shrug*

Unfortunately, with the Linux kernel's current approach as a CVE Numbering
Authority, I don't think this can be avoided. Patches with a Fixes tag will
almost certainly get a CVE number assigned (e.g. CVE-2024-26624[1][2]),
and we can only dispute[3] the assignment after it has happened in order
for the CVE to be rejected.

Shung-Hsi

1: https://lore.kernel.org/linux-cve-announce/2024030648-CVE-2024-26624-3032@gregkh/
2: https://lore.kernel.org/linux-cve-announce/2024032747-REJECTED-f2cf@gregkh/
3: https://docs.kernel.org/process/cve.html#disputes-of-assigned-cves

2024-04-13 00:05:33

by Harishankar Vishwanathan

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking



> On Apr 10, 2024, at 7:43 AM, Shung-Hsi Yu <[email protected]> wrote:
>
> On Tue, Apr 09, 2024 at 06:17:05PM +0100, Edward Cree wrote:
>> I don't feel too strongly about it, and if you or Shung-Hsi still
>> think, on reflection, that backporting is desirable, then go ahead
>> and keep the Fixes: tag.
>> But maybe tweak the description so someone doesn't see "latent
>> unsoundness" and think they need to CVE and rush this patch out as
>> a security thing; it's more like hardening. *shrug*
>
> Unfortunately, with the Linux kernel's current approach as a CVE Numbering
> Authority, I don't think this can be avoided. Patches with a Fixes tag will
> almost certainly get a CVE number assigned (e.g. CVE-2024-26624[1][2]),
> and we can only dispute[3] the assignment after it has happened in order
> for the CVE to be rejected.

It seems the best option is to CC the patch to [email protected] (so
that it will be backported) and not add the Fixes tag (so that no CVE will
be assigned). Does this seem reasonable? If yes, I’ll proceed with v3.
I'll also mention in the commit message that this is a hardening change.

Hari

>
> Shung-Hsi
>
> 1: https://lore.kernel.org/linux-cve-announce/2024030648-CVE-2024-26624-3032@gregkh/
> 2: https://lore.kernel.org/linux-cve-announce/2024032747-REJECTED-f2cf@gregkh/
> 3: https://docs.kernel.org/process/cve.html#disputes-of-assigned-cves

2024-04-15 04:11:39

by Shung-Hsi Yu

Subject: Re: [PATCH v2 bpf-next] bpf: Fix latent unsoundness in and/or/xor value tracking

On Sat, Apr 13, 2024 at 12:05:18AM +0000, Harishankar Vishwanathan wrote:
> > On Apr 10, 2024, at 7:43 AM, Shung-Hsi Yu <[email protected]> wrote:
> > On Tue, Apr 09, 2024 at 06:17:05PM +0100, Edward Cree wrote:
> >> I don't feel too strongly about it, and if you or Shung-Hsi still
> >> think, on reflection, that backporting is desirable, then go ahead
> >> and keep the Fixes: tag.
> >> But maybe tweak the description so someone doesn't see "latent
> >> unsoundness" and think they need to CVE and rush this patch out as
> >> a security thing; it's more like hardening. *shrug*
> >
> > Unfortunately, with the Linux kernel's current approach as a CVE Numbering
> > Authority, I don't think this can be avoided. Patches with a Fixes tag will
> > almost certainly get a CVE number assigned (e.g. CVE-2024-26624[1][2]),
> > and we can only dispute[3] the assignment after it has happened in order
> > for the CVE to be rejected.
>
> It seems the best option is to CC the patch to [email protected] (so
> that it will be backported) and not add the Fixes tag (so that no CVE will
> be assigned). Does this seem reasonable? If yes, I’ll proceed with v3.
> I'll also mention in the commit message that this is a hardening change.

Sounds good to me. Not 100% certain that this will avoid CVE assignment,
but it does seem like the best option.

Shung-Hsi