2024-04-24 19:18:09

by Kees Cook

Subject: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

Use wrapping_add() to annotate the addition in atomic_add_return() as
expecting to wrap around.
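
For context, the wrapping_add() helper used below lives in <linux/overflow.h>
(added to the includes in this patch) and is not defined here; conceptually it
boils down to something like the following sketch built on
__builtin_add_overflow() (illustrative only, not the exact definition):

/*
 * Illustrative sketch only: compute the sum in a variable of the
 * requested type and discard the overflow flag, so the result simply
 * wraps in two's complement without tripping any overflow sanitizer.
 */
#define wrapping_add(type, a, b) ({ \
        type __sum; \
        __builtin_add_overflow(a, b, &__sum); \
        __sum; \
})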

Signed-off-by: Kees Cook <[email protected]>
---
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
---
arch/x86/include/asm/atomic.h | 3 ++-
arch/x86/include/asm/atomic64_32.h | 2 +-
arch/x86/include/asm/atomic64_64.h | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 55a55ec04350..a5862a258760 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -3,6 +3,7 @@
#define _ASM_X86_ATOMIC_H

#include <linux/compiler.h>
+#include <linux/overflow.h>
#include <linux/types.h>
#include <asm/alternative.h>
#include <asm/cmpxchg.h>
@@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)

static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
- return i + xadd(&v->counter, i);
+ return wrapping_add(int, i, xadd(&v->counter, i));
}
#define arch_atomic_add_return arch_atomic_add_return

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 3486d91b8595..608b100e8ffe 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -254,7 +254,7 @@ static __always_inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
{
s64 old, c = 0;

- while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
+ while ((old = arch_atomic64_cmpxchg(v, c, wrapping_add(s64, c, i))) != c)
c = old;

return old;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 3165c0feedf7..f1dc8aa54b52 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -76,7 +76,7 @@ static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)

static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
- return i + xadd(&v->counter, i);
+ return wrapping_add(s64, i, xadd(&v->counter, i));
}
#define arch_atomic64_add_return arch_atomic64_add_return

--
2.34.1



2024-04-24 22:45:19

by Kees Cook

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
>
> > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> >
> > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > {
> > - return i + xadd(&v->counter, i);
> > + return wrapping_add(int, i, xadd(&v->counter, i));
> > }
> > #define arch_atomic_add_return arch_atomic_add_return
>
> this is going to get old *real* quick :-/
>
> This must be the ugliest possible way to annotate all this, and then
> litter the kernel with all this... urgh.

I'm expecting to have explicit wrapping type annotations soon[1], but for
the atomics, it's kind of a wash on how intrusive the annotations get. I
had originally wanted to mark the function (as I did in other cases)
rather than using the helper, but Mark preferred it this way. I'm happy
to do whatever! :)

-Kees

[1] https://github.com/llvm/llvm-project/pull/86618

--
Kees Cook

2024-04-24 22:52:05

by Peter Zijlstra

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
>
> > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> >
> > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > {
> > - return i + xadd(&v->counter, i);
> > + return wrapping_add(int, i, xadd(&v->counter, i));
> > }
> > #define arch_atomic_add_return arch_atomic_add_return
>
> this is going to get old *real* quick :-/
>
> This must be the ugliest possible way to annotate all this, and then
> litter the kernel with all this... urgh.

Also, what drugs are involved with __builtin_add_overflow()? Per
-fno-strict-overflow everything is 2s complement and you can just do the
unsigned add.
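
For reference, the alternative being alluded to is roughly the following
(an illustrative helper, not an existing kernel API): since
-fno-strict-overflow gives signed arithmetic two's-complement semantics, the
same wrapped result falls out of a plain unsigned add and a cast back, with
no builtin involved:

static inline int add_wraps_int(int a, int b)
{
        /* unsigned addition wraps modulo 2^32 by definition; converting
         * the result back to int gives the two's-complement value */
        return (int)((unsigned int)a + (unsigned int)b);
}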

Over the years we've been writing code with the express knowledge that
everything wraps properly, this annotation is going to be utter pain.

As I've said before, add an explicit non-wrapping type and use that for
the cases you care about actually not wrapping.

NAK
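
For concreteness, a sketch of the inverse marking being argued for here
(names are illustrative, not an existing API): ordinary arithmetic keeps its
wrapping semantics, and only the sites that must not wrap opt into a checked
helper:

/* Illustrative only: a checked add for the cases that must not wrap. */
static inline int add_nowrap_int(int a, int b)
{
        int sum;

        if (unlikely(__builtin_add_overflow(a, b, &sum)))
                WARN_ONCE(1, "unexpected signed overflow\n");
        return sum;
}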

2024-04-24 22:55:55

by Peter Zijlstra

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> >
> > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > >
> > > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > {
> > > - return i + xadd(&v->counter, i);
> > > + return wrapping_add(int, i, xadd(&v->counter, i));
> > > }
> > > #define arch_atomic_add_return arch_atomic_add_return
> >
> > this is going to get old *real* quick :-/
> >
> > This must be the ugliest possible way to annotate all this, and then
> > litter the kernel with all this... urgh.
>
> I'm expecting to have explicit wrapping type annotations soon[1], but for
> the atomics, it's kind of a wash on how intrusive the annotations get. I
> had originally wanted to mark the function (as I did in other cases)
> rather than using the helper, but Mark preferred it this way. I'm happy
> to do whatever! :)
>
> -Kees
>
> [1] https://github.com/llvm/llvm-project/pull/86618

This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
We've been writing code for years under that assumption.

You want to mark the non-wrapping case.




2024-04-24 23:05:28

by Peter Zijlstra

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > >
> > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > > >
> > > > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > > {
> > > > - return i + xadd(&v->counter, i);
> > > > + return wrapping_add(int, i, xadd(&v->counter, i));
> > > > }
> > > > #define arch_atomic_add_return arch_atomic_add_return
> > >
> > > this is going to get old *real* quick :-/
> > >
> > > This must be the ugliest possible way to annotate all this, and then
> > > litter the kernel with all this... urgh.
> >
> > I'm expecting to have explicit wrapping type annotations soon[1], but for
> > the atomics, it's kind of a wash on how intrusive the annotations get. I
> > had originally wanted to mark the function (as I did in other cases)
> > rather than using the helper, but Mark preferred it this way. I'm happy
> > to do whatever! :)
> >
> > -Kees
> >
> > [1] https://github.com/llvm/llvm-project/pull/86618
>
> This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> We've been writing code for years under that assumption.
>
> You want to mark the non-wrapping case.

That is, anything that actively warns about signed overflow when built
with -fno-strict-overflow is a bug. If you want this warning you have to
explicitly mark things.

Signed overflow is not UB, is not a bug.

Now, it might be unexpected in some places, but fundamentally we run on
2s complement and expect 2s complement. If you want more, mark it so.

2024-04-24 23:20:34

by Kees Cook

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > >
> > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > > >
> > > > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > > {
> > > > - return i + xadd(&v->counter, i);
> > > > + return wrapping_add(int, i, xadd(&v->counter, i));
> > > > }
> > > > #define arch_atomic_add_return arch_atomic_add_return
> > >
> > > this is going to get old *real* quick :-/
> > >
> > > This must be the ugliest possible way to annotate all this, and then
> > > litter the kernel with all this... urgh.
> >
> > I'm expecting to have explicit wrapping type annotations soon[1], but for
> > the atomics, it's kind of a wash on how intrusive the annotations get. I
> > had originally wanted to mark the function (as I did in other cases)
> > rather than using the helper, but Mark preferred it this way. I'm happy
> > to do whatever! :)
> >
> > -Kees
> >
> > [1] https://github.com/llvm/llvm-project/pull/86618
>
> This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> We've been writing code for years under that assumption.

Right, which is why this is going to take time to roll out. :) What we
were really doing with -fno-strict-overflow was getting rid of undefined
behavior. That was really really horrible; we don't need the compiler
hallucinating.
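
The classic illustration of what that UB allowed (an example only, not kernel
code): under strict signed-overflow rules the compiler may assume the
addition cannot wrap and delete the check outright, which
-fno-strict-overflow forbids:

/* Under strict overflow rules this may be folded to 0, silently
 * removing the overflow check; with -fno-strict-overflow it behaves
 * as written. Only meaningful for b >= 0. */
static inline int add_would_wrap(int a, int b)
{
        return a + b < a;
}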

> You want to mark the non-wrapping case.

What we want is lack of ambiguity. Having done these kinds of things in
the kernel for a while now, I have strong evidence that we get much better
results with the "fail safe" approach, but start by making it non-fatal.
That way we get full coverage, but we don't melt the world for anyone
that doesn't want it, and we can shake things out over a few years. For
example, it has worked well for CONFIG_FORTIFY, CONFIG_UBSAN_BOUNDS,
KCFI, etc.

The riskier condition is having something wrap when it wasn't expected
(e.g. allocations, pointer offsets, etc), so we start by defining our
regular types as non-wrapping, and annotate the wrapping types (or
specific calculations or functions).

For signed types in particular, wrapping is overwhelmingly the
uncommon case, so from a purely "how much annotations is needed"
perspective, marking wrapping is also easiest. Yes, there are cases of
expected wrapping, but we'll track them all down and get them marked
unambiguously. One thing on the short list is atomics, so here we are. :)

-Kees

--
Kees Cook

2024-04-24 23:31:10

by Kees Cook

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 01:05:00AM +0200, Peter Zijlstra wrote:
> On Thu, Apr 25, 2024 at 12:54:36AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> > > On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > > > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> > > >
> > > > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > > > >
> > > > > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > > > {
> > > > > - return i + xadd(&v->counter, i);
> > > > > + return wrapping_add(int, i, xadd(&v->counter, i));
> > > > > }
> > > > > #define arch_atomic_add_return arch_atomic_add_return
> > > >
> > > > this is going to get old *real* quick :-/
> > > >
> > > > This must be the ugliest possible way to annotate all this, and then
> > > > litter the kernel with all this... urgh.
> > >
> > > I'm expecting to have explicit wrapping type annotations soon[1], but for
> > > the atomics, it's kind of a wash on how intrusive the annotations get. I
> > > had originally wanted to mark the function (as I did in other cases)
> > > rather than using the helper, but Mark preferred it this way. I'm happy
> > > to do whatever! :)
> > >
> > > -Kees
> > >
> > > [1] https://github.com/llvm/llvm-project/pull/86618
> >
> > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > We've been writing code for years under that assumption.
> >
> > You want to mark the non-wrapping case.
>
> That is, anything that actively warns about signed overflow when built
> with -fno-strict-overflow is a bug. If you want this warning you have to
> explicitly mark things.

This is confusing UB with "overflow detection". We're doing the latter.

> Signed overflow is not UB, is not a bug.
>
> Now, it might be unexpected in some places, but fundamentally we run on
> 2s complement and expect 2s complement. If you want more, mark it so.

Regular C never provided us with enough choice in types to be able to
select the overflow resolution strategy. :( So we're stuck mixing
expectations into our types. (One early defense you were involved in
touched on this too: refcount_t uses a saturating overflow strategy, as
that works best for how it gets used.)
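
To illustrate the contrast in resolution strategies (a sketch only, not the
real refcount_t implementation, which also handles races and warning): a
saturating counter pins at its ceiling instead of wrapping back through zero:

/* Sketch: saturate at UINT_MAX instead of wrapping around to 0. */
static inline unsigned int sat_add(unsigned int a, unsigned int b)
{
        unsigned int sum;

        if (__builtin_add_overflow(a, b, &sum))
                return UINT_MAX;
        return sum;
}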

Regardless, yes, someone intent on wrapping gets their expected 2s
complement results, but in the cases where a few values started collecting
in some dark corner of protocol handling, having a calculation wrap around
is at best a behavioral bug and at worst a total system compromise.
Wrapping is the uncommon case here, so we mark those.

--
Kees Cook

2024-04-24 23:57:17

by Peter Zijlstra

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:

> @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>
> static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> {
> - return i + xadd(&v->counter, i);
> + return wrapping_add(int, i, xadd(&v->counter, i));
> }
> #define arch_atomic_add_return arch_atomic_add_return

this is going to get old *real* quick :-/

This must be the ugliest possible way to annotate all this, and then
litter the kernel with all this... urgh.


2024-04-25 10:20:17

by Mark Rutland

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 11:28:12AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 04:30:50PM -0700, Kees Cook wrote:
>
> > > That is, anything that actively warns about signed overflow when built
> > > with -fno-strict-overflow is a bug. If you want this warning you have to
> > > explicitly mark things.
> >
> > This is confusing UB with "overflow detection". We're doing the latter.
>
> Well, all of this is confusing to me because it is not presented
> coherently.
>
> The traditional 'must not let signed overflow' is because of the UB
> nonsense, which we fixed.
>
> > > Signed overflow is not UB, is not a bug.
> > >
> > > Now, it might be unexpected in some places, but fundamentally we run on
> > > 2s complement and expect 2s complement. If you want more, mark it so.
> >
> > Regular C never provided us with enough choice in types to be able to
> > select the overflow resolution strategy. :( So we're stuck mixing
> > expectations into our types.
>
> Traditionally C has explicit wrapping for unsigned and UB on signed. We
> fixed the UB, so now expect wrapping for everything.
>
> You want to add overflow, so you should make that a special case and preserve
> semantics for existing code.
>
> Also I would very strongly suggest you add an overflow qualifier to the
> type system and please provide sane means of qualifier manipulation --
> stripping qualifiers is painful :/

I agree that having an overflow/nooverflow qualifier that's separate from
signed/unsigned would make more sense than inferring that from signed vs
unsigned.

Mark.

2024-04-25 10:42:29

by Mark Rutland

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Wed, Apr 24, 2024 at 03:45:07PM -0700, Kees Cook wrote:
> On Thu, Apr 25, 2024 at 12:41:41AM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 24, 2024 at 12:17:34PM -0700, Kees Cook wrote:
> >
> > > @@ -82,7 +83,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
> > >
> > > static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
> > > {
> > > - return i + xadd(&v->counter, i);
> > > + return wrapping_add(int, i, xadd(&v->counter, i));
> > > }
> > > #define arch_atomic_add_return arch_atomic_add_return
> >
> > this is going to get old *real* quick :-/
> >
> > This must be the ugliest possible way to annotate all this, and then
> > litter the kernel with all this... urgh.
>
> I'm expecting to have explicit wrapping type annotations soon[1], but for
> the atomics, it's kind of a wash on how intrusive the annotations get. I
> had originally wanted to mark the function (as I did in other cases)
> rather than using the helper, but Mark preferred it this way. I'm happy
> to do whatever! :)

To be clear, I dislike the function annotation because then it applies to
*everything* within the function, which is overly broad and the intent becomes
unclear. That makes it painful to refactor the code (since e.g. if we want to
add another operation to the function which *should not* wrap, that gets
silenced too).

I'm happy with something that applies to specific types/variables or specific
operations (which is what these patches do).

As to whether or not we do this at all I'll have to defer to Peter.

Mark.

2024-04-25 15:19:14

by Peter Zijlstra

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Wed, Apr 24, 2024 at 04:20:20PM -0700, Kees Cook wrote:

> > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > We've been writing code for years under that assumption.
>
> Right, which is why this is going to take time to roll out. :) What we
> were really doing with -fno-strict-overflow was getting rid of undefined
> behavior. That was really really horrible; we don't need the compiler
> hallucinating.

Right, but that then got us well defined semantics for signed overflow.

> > You want to mark the non-wrapping case.
>
> What we want is lack of ambiguity. Having done these kinds of things in
> the kernel for a while now, I have strong evidence that we get much better
> results with the "fail safe" approach, but start by making it non-fatal.
> That way we get full coverage, but we don't melt the world for anyone
> that doesn't want it, and we can shake things out over a few years. For
> example, it has worked well for CONFIG_FORTIFY, CONFIG_UBSAN_BOUNDS,
> KCFI, etc.

The non-fatal argument doesn't have bearing on the mark wrap or mark
non-wrap argument though.

> The riskier condition is having something wrap when it wasn't expected
> (e.g. allocations, pointer offsets, etc), so we start by defining our
> regular types as non-wrapping, and annotate the wrapping types (or
> specific calculations or functions).

But most of those you mention are unsigned. Are you saying you're
making all unsigned variables non-wrap by default too? That's bloody
insane.

> For signed types in particular, wrapping is overwhelmingly the
> uncommon case, so from a purely "how much annotations is needed"
> perspective, marking wrapping is also easiest. Yes, there are cases of
> expected wrapping, but we'll track them all down and get them marked
> unambiguously.

But I am confused now, because above you seem to imply you're making
unsigned non-wrap too, and there wrapping is *far* more common, and I
must say I hate this wrapping_add() thing with a passion.

> One thing on the short list is atomics, so here we are. :)

Well, there are wrapping and non-wrapping users of atomic. If only C had
generics etc. (and yeah, _Generic doesn't really count).

2024-04-25 15:27:02

by Peter Zijlstra

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Wed, Apr 24, 2024 at 04:30:50PM -0700, Kees Cook wrote:

> > That is, anything that actively warns about signed overflow when built
> > with -fno-strict-overflow is a bug. If you want this warning you have to
> > explicitly mark things.
>
> This is confusing UB with "overflow detection". We're doing the latter.

Well, all of this is confusing to me because it is not presented
coherently.

The traditional 'must not let signed overflow' is because of the UB
nonsense, which we fixed.

> > Signed overflow is not UB, is not a bug.
> >
> > Now, it might be unexpected in some places, but fundamentally we run on
> > 2s complement and expect 2s complement. If you want more, mark it so.
>
> Regular C never provided us with enough choice in types to be able to
> select the overflow resolution strategy. :( So we're stuck mixing
> expectations into our types.

Traditionally C has explicit wrapping for unsigned and UB on signed. We
fixed the UB, so now expect wrapping for everything.

You want to add overflow, so you should make that a special case and preserve
semantics for existing code.

Also I would very strongly suggest you add an overflow qualifier to the
type system and please provide sane means of qualifier manipulation --
stripping qualifiers is painful :/

> Regardless, yes, someone intent on wrapping gets their expected 2s
> complement results, but in the cases where a few values started collecting
> in some dark corner of protocol handling, having a calculation wrap around
> is at best a behavioral bug and at worst a total system compromise.
> Wrapping is the uncommon case here, so we mark those.

Then feel free to sprinkle copious amounts of 'overflow' qualifiers in
the protocol handling code.

2024-04-25 17:19:50

by Kees Cook

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 11:15:17AM +0100, Mark Rutland wrote:
> To be clear, I dislike the function annotation because then it applies to
> *everything* within the function, which is overly broad and the intent becomes
> unclear. That makes it painful to refactor the code (since e.g. if we want to
> add another operation to the function which *should not* wrap, that gets
> silenced too).

Yeah, I find that a convincing argument for larger functions, but it
seemed to me that for these 1-line implementations it was okay. But
regardless, yup, no function-level annotation here.

> I'm happy with something that applies to specific types/variables or specific
> operations (which is what these patches do).

Thanks!

--
Kees Cook

2024-04-25 17:39:35

by Kees Cook

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Thu, Apr 25, 2024 at 11:17:52AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 24, 2024 at 04:20:20PM -0700, Kees Cook wrote:
>
> > > This is arse-about-face. Signed stuff wraps per -fno-strict-overflow.
> > > We've been writing code for years under that assumption.
> >
> > Right, which is why this is going to take time to roll out. :) What we
> > were really doing with -fno-strict-overflow was getting rid of undefined
> > behavior. That was really really horrible; we don't need the compiler
> > hallucinating.
>
> Right, but that then got us well defined semantics for signed overflow.

Yes, and this gets us to the next step: disambiguation for general
users. It's good that we have a well-defined overflow resolution strategy,
but our decades of persistent wrap-around flaws in the kernel show
that many devs (even experienced ones) produce code with unexpected and
unwanted (to the logic of the code) wrap-around. So we have to find a
way to distinguish wrapping and non-wrapping operations or types up
front and in a clear way.

>
> > > You want to mark the non-wrapping case.
> >
> > What we want is lack of ambiguity. Having done these kinds of things in
> > the kernel for a while now, I have strong evidence that we get much better
> > results with the "fail safe" approach, but start by making it non-fatal.
> > That way we get full coverage, but we don't melt the world for anyone
> > that doesn't want it, and we can shake things out over a few years. For
> > example, it has worked well for CONFIG_FORTIFY, CONFIG_UBSAN_BOUNDS,
> > KCFI, etc.
>
> The non-fatal argument doesn't have bearing on the mark wrap or mark
> non-wrap argument though.

This gets at the strategy of refactoring our code to gain our unambiguous
coverage. Since we can't sanely have a flag-day, we have to go piecemeal,
and there will continue to be places where the coverage was missed, and
so we want to progress through marking wrapping cases without BUGing the
kernel. (We don't care about catching non-wrapping -- the exceptional
condition is hitting an overflow.)

> > The riskier condition is having something wrap when it wasn't expected
> > (e.g. allocations, pointer offsets, etc), so we start by defining our
> > regular types as non-wrapping, and annotate the wrapping types (or
> > specific calculations or functions).
>
> But most of those you mention are unsigned. Are you saying you're
> making all unsigned variables non-wrap by default too? That's bloody
> insane.

We have a mix (and a regular confusion even in core code) where "int"
gets passed around even though at one end or another of a call chain
it's actually u32 or u16 or whatever. Regardless, yes, the next step
after signed overflow mitigation would be unsigned overflow mitigation,
and as you suggest, it's much more tricky.
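
As an illustration of that kind of chain (hypothetical names, not from any
real driver): a narrow on-wire field gets promoted to int before the
arithmetic, and whether the result is allowed to go negative or wrap is not
obvious at the call site:

/* Hypothetical: both u16 arguments promote to int before subtracting. */
static int frame_payload_len(u16 total_len, u16 hdr_len)
{
        return total_len - hdr_len;     /* negative if hdr_len > total_len */
}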

> > For signed types in particular, wrapping is overwhelmingly the
> > uncommon case, so from a purely "how much annotations is needed"
> > perspective, marking wrapping is also easiest. Yes, there are cases of
> > expected wrapping, but we'll track them all down and get them marked
> > unambiguously.
>
> But I am confused now, because above you seem to imply you're making
> unsigned non-wrap too, and there wrapping is *far* more common, and I
> must say I hate this wrapping_add() thing with a passion.

Yes, most people are not a fan of the wrapping_*() helpers, which is why
I'm trying to get a typedef attribute created. But again, to gain the
"fail safe by default" coverage, we have to start with the assumption
that the default is non-wrapping, and mark those that aren't. (Otherwise
we're not actually catching unexpected cases.) And no, it's not going
to be overnight. It's taken almost 5 years to disambiguate array bounds
and we're still not done. :)

> > One thing on the short list is atomics, so here we are. :)
>
> Well, there are wrapping and non-wrapping users of atomic. If only C had
> generics etc.. (and yeah, _Generic doesn't really count).

Non-wrapping users of atomics should be using refcount_t, which is
our non-wrapping atomic type. But regardless, atomics are internally
wrapping, yes?
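
To make the split concrete, an illustrative (hypothetical) structure: the
member that must never wrap uses refcount_t, which saturates and warns on
overflow, while a counter whose roll-over is harmless stays atomic_t:

#include <linux/atomic.h>
#include <linux/refcount.h>

struct conn_stats {
        refcount_t      users;          /* must not wrap: object lifetime */
        atomic_t        tx_frames;      /* free-running; wrapping expected */
};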

Anyway, I suspect this whole plan needs wider discussion. I will write
up a more complete RFC that covers my plans, including the rationale for
why we should adopt this in a certain way. (These kinds of strategic RFCs
don't usually get much traction since our development style is much more
"show the patches", so that's why I have been just sending patches. But
since it's a pretty big topic, I'll give it a shot...)

--
Kees Cook

2024-04-26 07:41:15

by David Howells

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

Kees Cook <[email protected]> wrote:

> - return i + xadd(&v->counter, i);
> + return wrapping_add(int, i, xadd(&v->counter, i));

Ewww. Can't you just mark the variable as wrapping in some way, either by:

unsigned int __cyclic counter;

or, to use rxrpc packet sequence numbers as an example:

typedef unsigned int __cyclic rxrpc_seq_t;

rxrpc_seq_t tx_top;

Then you can get the compiler to spit out a warning if you use <, <=, > or >=
on the numbers as an added bonus. (You should use before() and after()
instead).
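
For reference, the before()/after() idiom mentioned here typically reduces to
a wrapped subtraction interpreted as signed (sketch modeled on the rxrpc/TCP
style sequence helpers; exact names in the tree may differ):

/* Sketch: compare cyclic sequence numbers via wrapped subtraction. */
static inline bool seq_before(u32 a, u32 b)
{
        return (s32)(a - b) < 0;
}

static inline bool seq_after(u32 a, u32 b)
{
        return (s32)(b - a) < 0;
}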

David


2024-05-02 15:10:02

by Kees Cook

Subject: Re: [PATCH 1/4] locking/atomic/x86: Silence intentional wrapping addition

On Fri, Apr 26, 2024 at 08:40:50AM +0100, David Howells wrote:
> Kees Cook <[email protected]> wrote:
>
> > - return i + xadd(&v->counter, i);
> > + return wrapping_add(int, i, xadd(&v->counter, i));
>
> Ewww. Can't you just mark the variable as wrapping in some way, either by:
>
> unsigned int __cyclic counter;

Yeah, that's the plan now. Justin is currently working on the "wraps"
attribute for Clang:
https://github.com/llvm/llvm-project/pull/86618
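
To make the end state concrete, a sketch of how such an attribute might
eventually be applied (the spelling follows the in-progress proposal and may
well change; this is not yet a supported annotation):

/* Hypothetical usage of the proposed attribute: arithmetic on these
 * types is declared to wrap, so instrumentation stays quiet without
 * per-expression helpers like wrapping_add(). */
typedef int __attribute__((wraps)) wrapping_int;
typedef unsigned int __attribute__((wraps)) rxrpc_seq_t;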

--
Kees Cook