2024-01-31 21:17:05

by Paul Heidekrüger

Subject: [PATCH RFC v2] kasan: add atomic tests

Hi!

This RFC patch adds tests that detect whether KASan is able to catch
unsafe atomic accesses.

Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
the following suggested changes:

* Adjust size of allocations to make kasan_atomics() work with all KASan modes
* Remove comments and move tests closer to the bitops tests
* For functions taking two addresses as input, test each address in a separate function call.
* Rename variables for clarity
* Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()

I'm still unclear on which kinds of atomic accesses we should be testing
though. The patch below only covers a subset, and I don't know if it
would be feasible to just manually add all atomics of interest. Which
ones would those be exactly? As Andrey pointed out on Bugzilla, if we
were to include all of the atomic64_* ones, that would make a lot of
function calls.

Also, the availability of atomics varies between architectures; I did my
testing on arm64. Is something like gen-atomic-instrumented.sh required?

Many thanks,
Paul

CC: Marco Elver <[email protected]>
CC: Andrey Konovalov <[email protected]>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
Signed-off-by: Paul Heidekrüger <[email protected]>
---
mm/kasan/kasan_test.c | 50 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 8281eb42464b..1ab4444fe4a0 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1150,6 +1150,55 @@ static void kasan_bitops_tags(struct kunit *test)
kfree(bits);
}

+static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
+{
+ int *i_safe = (int *)safe;
+ int *i_unsafe = (int *)unsafe;
+
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+}
+
+static void kasan_atomics(struct kunit *test)
+{
+ int *a1, *a2;
+
+ a1 = kzalloc(48, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
+ a2 = kzalloc(sizeof(*a1), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
+
+ kasan_atomics_helper(test, (void *)a1 + 48, (void *)a2);
+
+ kfree(a1);
+ kfree(a2);
+}
+
static void kmalloc_double_kzfree(struct kunit *test)
{
char *ptr;
@@ -1553,6 +1602,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_strings),
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
+ KUNIT_CASE(kasan_atomics),
KUNIT_CASE(kmalloc_double_kzfree),
KUNIT_CASE(rcu_uaf),
KUNIT_CASE(workqueue_uaf),
--
2.40.1



2024-02-01 09:40:05

by Marco Elver

Subject: Re: [PATCH RFC v2] kasan: add atomic tests

On Wed, 31 Jan 2024 at 22:01, Paul Heidekrüger <[email protected]> wrote:
>
> Hi!
>
> This RFC patch adds tests that detect whether KASan is able to catch
> unsafe atomic accesses.
>
> Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
> the following suggested changes:
>
> * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> * Remove comments and move tests closer to the bitops tests
> * For functions taking two addresses as input, test each address in a separate function call.
> * Rename variables for clarity
> * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
>
> I'm still unclear on which kinds of atomic accesses we should be testing
> though. The patch below only covers a subset, and I don't know if it
> would be feasible to just manually add all atomics of interest. Which
> ones would those be exactly?

The atomics wrappers are generated by a script. An exhaustive test
case would, if generated by hand, be difficult to keep in sync if some
variants are removed or renamed (although that's probably a relatively
rare occurrence).
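
For context, the instrumented wrappers generated into
include/linux/atomic/atomic-instrumented.h roughly take this shape (a sketch
from memory, not the exact generated output):

static __always_inline int
atomic_read(const atomic_t *v)
{
	/* Tell KASAN/KCSAN about the access before the raw atomic runs. */
	instrument_atomic_read(v, sizeof(*v));
	return raw_atomic_read(v);
}

What the tests really exercise is that instrument_*() call, which is what
KASAN hooks into.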

I would probably just cover some of the most common ones that all
architectures (that support KASAN) provide. I think you are already
covering some of the most important ones, and I'd just say it's good
enough for the test.

> As Andrey pointed out on Bugzilla, if we
> were to include all of the atomic64_* ones, that would make a lot of
> function calls.

Just include a few atomic64_ cases, similar to the ones you already
include for atomic_. Although beware that the atomic64_t helpers are
likely not available on 32-bit architectures, so you need an #ifdef
CONFIG_64BIT.
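
For example, the guarded cases could look roughly like this (sketch):

#ifdef CONFIG_64BIT
	/* The atomic64_t helpers may not exist on 32-bit architectures. */
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_read(unsafe));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_set(unsafe, 42));
	KUNIT_EXPECT_KASAN_FAIL(test, atomic64_add(42, unsafe));
#endif /* CONFIG_64BIT */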

Alternatively, there is also atomic_long_t, which (on 64-bit
architectures) just wraps atomic64_t helpers, and on 32-bit the
atomic_t ones. I'd probably opt for the atomic_long_t variants, just
to keep it simpler and get some additional coverage on 32-bit
architectures.

> Also, the availability of atomics varies between architectures; I did my
> testing on arm64. Is something like gen-atomic-instrumented.sh required?

I would not touch gen-atomic-instrumented.sh for the test.

> Many thanks,
> Paul
>
> CC: Marco Elver <[email protected]>
> CC: Andrey Konovalov <[email protected]>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Signed-off-by: Paul Heidekrüger <[email protected]>
> ---
> mm/kasan/kasan_test.c | 50 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 50 insertions(+)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 8281eb42464b..1ab4444fe4a0 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1150,6 +1150,55 @@ static void kasan_bitops_tags(struct kunit *test)
> kfree(bits);
> }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> + int *i_safe = (int *)safe;
> + int *i_unsafe = (int *)unsafe;
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> + int *a1, *a2;

If you're casting it to void* below and never using as an int* in this
function, just make these void* (the sizeof can just be sizeof(int)).

> + a1 = kzalloc(48, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> + a2 = kzalloc(sizeof(*a1), GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> +
> + kasan_atomics_helper(test, (void *)a1 + 48, (void *)a2);

We try to ensure (where possible) that the KASAN tests are not
destructive to the rest of the kernel. I think the size of "48" was
chosen to fall into the 64-byte size class, similar to the bitops. I
would just copy that comment, so nobody attempts to change it in
future. :-)
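
Roughly, assuming kzalloc(48) is served from the kmalloc-64 cache:

	a1 = kzalloc(48, GFP_KERNEL);
	/*
	 * |<----- 48 usable bytes ----->|<---- 16 redzone bytes ---->|
	 * a1                          a1+48                       a1+64
	 */

so a1 + 48 points into redzone that KASAN poisons, without stepping on a
neighbouring object.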

> + kfree(a1);
> + kfree(a2);

Thanks,
-- Marco

2024-02-02 10:03:49

by Paul Heidekrüger

Subject: Re: [PATCH RFC v2] kasan: add atomic tests

On 01.02.2024 10:38, Marco Elver wrote:
> On Wed, 31 Jan 2024 at 22:01, Paul Heidekrüger <[email protected]> wrote:
> >
> > Hi!
> >
> > This RFC patch adds tests that detect whether KASan is able to catch
> > unsafe atomic accesses.
> >
> > Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
> > the following suggested changes:
> >
> > * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> > * Remove comments and move tests closer to the bitops tests
> > * For functions taking two addresses as input, test each address in a separate function call.
> > * Rename variables for clarity
> > * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
> >
> > I'm still unclear on which kinds of atomic accesses we should be testing
> > though. The patch below only covers a subset, and I don't know if it
> > would be feasible to just manually add all atomics of interest. Which
> > ones would those be exactly?
>
> The atomics wrappers are generated by a script. An exhaustive test
> case would, if generated by hand, be difficult to keep in sync if some
> variants are removed or renamed (although that's probably a relatively
> rare occurrence).
>
> I would probably just cover some of the most common ones that all
> architectures (that support KASAN) provide. I think you are already
> covering some of the most important ones, and I'd just say it's good
> enough for the test.
>
> > As Andrey pointed out on Bugzilla, if we
> > were to include all of the atomic64_* ones, that would make a lot of
> > function calls.
>
> Just include a few atomic64_ cases, similar to the ones you already
> include for atomic_. Although beware that the atomic64_t helpers are
> likely not available on 32-bit architectures, so you need an #ifdef
> CONFIG_64BIT.
>
> Alternatively, there is also atomic_long_t, which (on 64-bit
> architectures) just wraps atomic64_t helpers, and on 32-bit the
> atomic_t ones. I'd probably opt for the atomic_long_t variants, just
> to keep it simpler and get some additional coverage on 32-bit
> architectures.

If I were to add some atomic_long_* cases, e.g. atomic_long_read() or
atomic_long_set(), in addition to the test cases I already have, wouldn't that
mean that on 32-bit architectures we would have the same test case twice,
because atomic_read() and atomic_long_read() both boil down to
raw_atomic_read() (and likewise for the stores)? Or did I misunderstand, and I
should only be covering atomic_long_* functions whose atomic_* counterpart
isn't already in the test cases?

> > Also, the availability of atomics varies between architectures; I did my
> > testing on arm64. Is something like gen-atomic-instrumented.sh required?
>
> I would not touch gen-atomic-instrumented.sh for the test.
>
> > Many thanks,
> > Paul
> >
> > CC: Marco Elver <[email protected]>
> > CC: Andrey Konovalov <[email protected]>
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> > Signed-off-by: Paul Heidekrüger <[email protected]>
> > ---
> > mm/kasan/kasan_test.c | 50 +++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 50 insertions(+)
> >
> > diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> > index 8281eb42464b..1ab4444fe4a0 100644
> > --- a/mm/kasan/kasan_test.c
> > +++ b/mm/kasan/kasan_test.c
> > @@ -1150,6 +1150,55 @@ static void kasan_bitops_tags(struct kunit *test)
> > kfree(bits);
> > }
> >
> > +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > +{
> > + int *i_safe = (int *)safe;
> > + int *i_unsafe = (int *)unsafe;
> > +
> > + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> > +}
> > +
> > +static void kasan_atomics(struct kunit *test)
> > +{
> > + int *a1, *a2;
>
> If you're casting it to void* below and never using as an int* in this
> function, just make these void* (the sizeof can just be sizeof(int)).
>
> > + a1 = kzalloc(48, GFP_KERNEL);
> > + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> > + a2 = kzalloc(sizeof(*a1), GFP_KERNEL);
> > + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> > +
> > + kasan_atomics_helper(test, (void *)a1 + 48, (void *)a2);
>
> We try to ensure (where possible) that the KASAN tests are not
> destructive to the rest of the kernel. I think the size of "48" was
> chosen to fall into the 64-byte size class, similar to the bitops. I
> would just copy that comment, so nobody attempts to change it in
> future. :-)

And yes to all the rest - thanks for the feedback!

> > + kfree(a1);
> > + kfree(a2);
>
> Thanks,
> -- Marco

Many thanks,
Paul

2024-02-02 10:22:18

by Marco Elver

Subject: Re: [PATCH RFC v2] kasan: add atomic tests

On Fri, 2 Feb 2024 at 11:03, Paul Heidekrüger <[email protected]> wrote:
>
> On 01.02.2024 10:38, Marco Elver wrote:
> > On Wed, 31 Jan 2024 at 22:01, Paul Heidekrüger <[email protected]> wrote:
> > >
> > > Hi!
> > >
> > > This RFC patch adds tests that detect whether KASan is able to catch
> > > unsafe atomic accesses.
> > >
> > > Since v1, which can be found on Bugzilla (see "Closes:" tag), I've made
> > > the following suggested changes:
> > >
> > > * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> > > * Remove comments and move tests closer to the bitops tests
> > > * For functions taking two addresses as input, test each address in a separate function call.
> > > * Rename variables for clarity
> > > * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
> > >
> > > I'm still unclear on which kinds of atomic accesses we should be testing
> > > though. The patch below only covers a subset, and I don't know if it
> > > would be feasible to just manually add all atomics of interest. Which
> > > ones would those be exactly?
> >
> > The atomics wrappers are generated by a script. An exhaustive test
> > case would, if generated by hand, be difficult to keep in sync if some
> > variants are removed or renamed (although that's probably a relatively
> > rare occurrence).
> >
> > I would probably just cover some of the most common ones that all
> > architectures (that support KASAN) provide. I think you are already
> > covering some of the most important ones, and I'd just say it's good
> > enough for the test.
> >
> > > As Andrey pointed out on Bugzilla, if we
> > > were to include all of the atomic64_* ones, that would make a lot of
> > > function calls.
> >
> > Just include a few atomic64_ cases, similar to the ones you already
> > include for atomic_. Although beware that the atomic64_t helpers are
> > likely not available on 32-bit architectures, so you need an #ifdef
> > CONFIG_64BIT.
> >
> > Alternatively, there is also atomic_long_t, which (on 64-bit
> > architectures) just wraps atomic64_t helpers, and on 32-bit the
> > atomic_t ones. I'd probably opt for the atomic_long_t variants, just
> > to keep it simpler and get some additional coverage on 32-bit
> > architectures.
>
> If I were to add some atomic_long_* cases, e.g. atomic_long_read() or
> atomic_long_set(), in addition to the test cases I already have, wouldn't that
> mean that on 32-bit architectures we would have the same test case twice,
> because atomic_read() and atomic_long_read() both boil down to
> raw_atomic_read() (and likewise for the stores)? Or did I misunderstand, and I
> should only be covering atomic_long_* functions whose atomic_* counterpart
> isn't already in the test cases?

Sure, on 32-bit this would be a little redundant, but we care less about
which underlying atomic is actually executed and more about the
instrumentation being correct.

From a KASAN point of view, I can't really conclude that atomic_long_read()
works just because atomic_read() does.
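
Each atomic_long_* helper is its own generated entry point with its own
instrumentation; roughly (a sketch of include/linux/atomic/atomic-long.h from
memory, not the exact code):

#ifdef CONFIG_64BIT
typedef atomic64_t atomic_long_t;
#else
typedef atomic_t atomic_long_t;
#endif

static __always_inline long
raw_atomic_long_read(const atomic_long_t *v)
{
#ifdef CONFIG_64BIT
	return raw_atomic64_read(v);
#else
	return raw_atomic_read(v);
#endif
}

So even where the raw atomic ends up being the same on 32-bit, the
instrumentation sits in the atomic_long_*() wrapper and has to be exercised
separately.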

On top of that, we don't care all that much about 32-bit architectures
anymore (I think KASAN should work on some 32-bit architectures, but I
haven't tested that in a long time). ;-)

2024-02-02 11:36:15

by Paul Heidekrüger

Subject: [PATCH] kasan: add atomic tests

Test that KASan can detect some unsafe atomic accesses.

As discussed in the linked thread below, these tests attempt to cover
the most common uses of atomics and, therefore, aren't exhaustive.

CC: Marco Elver <[email protected]>
CC: Andrey Konovalov <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/T/#u
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
Signed-off-by: Paul Heidekrüger <[email protected]>
---
Changes PATCH RFC v2 -> PATCH v1:
* Remove casts to void*
* Remove i_safe variable
* Add atomic_long_* test cases
* Carry over comment from kasan_bitops_tags()

Changes PATCH RFC v1 -> PATCH RFC v2:
* Adjust size of allocations to make kasan_atomics() work with all KASan modes
* Remove comments and move tests closer to the bitops tests
* For functions taking two addresses as input, test each address in a separate function call.
* Rename variables for clarity
* Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()

mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 79 insertions(+)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 8281eb42464b..4ef2280c322c 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
kfree(bits);
}

+static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
+{
+ int *i_unsafe = (int *)unsafe;
+
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
+
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+}
+
+static void kasan_atomics(struct kunit *test)
+{
+ void *a1, *a2;
+
+ /*
+ * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
+ * that the following 16 bytes will make up the redzone.
+ */
+ a1 = kzalloc(48, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
+ a2 = kzalloc(sizeof(int), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
+
+ /* Use atomics to access the redzone. */
+ kasan_atomics_helper(test, a1 + 48, a2);
+
+ kfree(a1);
+ kfree(a2);
+}
+
static void kmalloc_double_kzfree(struct kunit *test)
{
char *ptr;
@@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_strings),
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
+ KUNIT_CASE(kasan_atomics),
KUNIT_CASE(kmalloc_double_kzfree),
KUNIT_CASE(rcu_uaf),
KUNIT_CASE(workqueue_uaf),
--
2.40.1


2024-02-05 14:10:09

by Marco Elver

Subject: Re: [PATCH] kasan: add atomic tests

On Fri, 2 Feb 2024 at 12:33, Paul Heidekrüger <[email protected]> wrote:
>
> Test that KASan can detect some unsafe atomic accesses.
>
> As discussed in the linked thread below, these tests attempt to cover
> the most common uses of atomics and, therefore, aren't exhaustive.
>
> CC: Marco Elver <[email protected]>
> CC: Andrey Konovalov <[email protected]>
> Link: https://lore.kernel.org/all/[email protected]/T/#u
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Signed-off-by: Paul Heidekrüger <[email protected]>

Reviewed-by: Marco Elver <[email protected]>
Tested-by: Marco Elver <[email protected]>

Thank you.


> ---
> Changes PATCH RFC v2 -> PATCH v1:
> * Remove casts to void*
> * Remove i_safe variable
> * Add atomic_long_* test cases
> * Carry over comment from kasan_bitops_tags()
>
> Changes PATCH RFC v1 -> PATCH RFC v2:
> * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> * Remove comments and move tests closer to the bitops tests
> * For functions taking two addresses as input, test each address in a separate function call.
> * Rename variables for clarity
> * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
>
> mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 79 insertions(+)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 8281eb42464b..4ef2280c322c 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
> kfree(bits);
> }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> + int *i_unsafe = (int *)unsafe;
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> + void *a1, *a2;
> +
> + /*
> + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> + * that the following 16 bytes will make up the redzone.
> + */
> + a1 = kzalloc(48, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> + a2 = kzalloc(sizeof(int), GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> +
> + /* Use atomics to access the redzone. */
> + kasan_atomics_helper(test, a1 + 48, a2);
> +
> + kfree(a1);
> + kfree(a2);
> +}
> +
> static void kmalloc_double_kzfree(struct kunit *test)
> {
> char *ptr;
> @@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE(kasan_strings),
> KUNIT_CASE(kasan_bitops_generic),
> KUNIT_CASE(kasan_bitops_tags),
> + KUNIT_CASE(kasan_atomics),
> KUNIT_CASE(kmalloc_double_kzfree),
> KUNIT_CASE(rcu_uaf),
> KUNIT_CASE(workqueue_uaf),
> --
> 2.40.1
>

2024-02-05 16:04:17

by Mark Rutland

Subject: Re: [PATCH] kasan: add atomic tests

On Fri, Feb 02, 2024 at 11:32:59AM +0000, Paul Heidekrüger wrote:
> Test that KASan can detect some unsafe atomic accesses.
>
> As discussed in the linked thread below, these tests attempt to cover
> the most common uses of atomics and, therefore, aren't exhaustive.
>
> CC: Marco Elver <[email protected]>
> CC: Andrey Konovalov <[email protected]>
> Link: https://lore.kernel.org/all/[email protected]/T/#u
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Signed-off-by: Paul Heidekrüger <[email protected]>
> ---
> Changes PATCH RFC v2 -> PATCH v1:
> * Remove casts to void*
> * Remove i_safe variable
> * Add atomic_long_* test cases
> * Carry over comment from kasan_bitops_tags()
>
> Changes PATCH RFC v1 -> PATCH RFC v2:
> * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> * Remove comments and move tests closer to the bitops tests
> * For functions taking two addresses as input, test each address in a separate function call.
> * Rename variables for clarity
> * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
>
> mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 79 insertions(+)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 8281eb42464b..4ef2280c322c 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
> kfree(bits);
> }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> + int *i_unsafe = (int *)unsafe;

Minor nit: you don't need the cast here either.

Regardless:

Acked-by: Mark Rutland <[email protected]>

Mark.

> +
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> + void *a1, *a2;
> +
> + /*
> + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> + * that the following 16 bytes will make up the redzone.
> + */
> + a1 = kzalloc(48, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> + a2 = kzalloc(sizeof(int), GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> +
> + /* Use atomics to access the redzone. */
> + kasan_atomics_helper(test, a1 + 48, a2);
> +
> + kfree(a1);
> + kfree(a2);
> +}
> +
> static void kmalloc_double_kzfree(struct kunit *test)
> {
> char *ptr;
> @@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE(kasan_strings),
> KUNIT_CASE(kasan_bitops_generic),
> KUNIT_CASE(kasan_bitops_tags),
> + KUNIT_CASE(kasan_atomics),
> KUNIT_CASE(kmalloc_double_kzfree),
> KUNIT_CASE(rcu_uaf),
> KUNIT_CASE(workqueue_uaf),
> --
> 2.40.1
>
>

2024-02-05 22:15:06

by Andrey Konovalov

Subject: Re: [PATCH] kasan: add atomic tests

On Fri, Feb 2, 2024 at 12:33 PM Paul Heidekrüger
<[email protected]> wrote:
>
> Test that KASan can detect some unsafe atomic accesses.
>
> As discussed in the linked thread below, these tests attempt to cover
> the most common uses of atomics and, therefore, aren't exhaustive.
>
> CC: Marco Elver <[email protected]>
> CC: Andrey Konovalov <[email protected]>
> Link: https://lore.kernel.org/all/[email protected]/T/#u
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Signed-off-by: Paul Heidekrüger <[email protected]>
> ---
> Changes PATCH RFC v2 -> PATCH v1:
> * Remove casts to void*
> * Remove i_safe variable
> * Add atomic_long_* test cases
> * Carry over comment from kasan_bitops_tags()
>
> Changes PATCH RFC v1 -> PATCH RFC v2:
> * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> * Remove comments and move tests closer to the bitops tests
> * For functions taking two addresses as input, test each address in a separate function call.
> * Rename variables for clarity
> * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
>
> mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 79 insertions(+)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 8281eb42464b..4ef2280c322c 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
> kfree(bits);
> }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> + int *i_unsafe = (int *)unsafe;
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> + void *a1, *a2;
> +
> + /*
> + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> + * that the following 16 bytes will make up the redzone.
> + */
> + a1 = kzalloc(48, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> + a2 = kzalloc(sizeof(int), GFP_KERNEL);

I think this should be sizeof(atomic_long_t) or sizeof(long),
otherwise a2 will not work as the safe argument for
atomic_long_try_cmpxchg on 64-bit architectures.
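
Sketch of the issue, assuming a 64-bit kernel: atomic_long_try_cmpxchg() also
stores the observed value back through its "old" pointer, so both of

	atomic_long_try_cmpxchg(safe, unsafe, 42);	/* accesses 8 bytes at safe */
	atomic_long_try_cmpxchg(unsafe, safe, 42);	/* may write 8 bytes to *safe */

access sizeof(long) == 8 bytes through "safe", while kzalloc(sizeof(int))
only provides 4.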

> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> +
> + /* Use atomics to access the redzone. */
> + kasan_atomics_helper(test, a1 + 48, a2);
> +
> + kfree(a1);
> + kfree(a2);
> +}
> +
> static void kmalloc_double_kzfree(struct kunit *test)
> {
> char *ptr;
> @@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE(kasan_strings),
> KUNIT_CASE(kasan_bitops_generic),
> KUNIT_CASE(kasan_bitops_tags),
> + KUNIT_CASE(kasan_atomics),
> KUNIT_CASE(kmalloc_double_kzfree),
> KUNIT_CASE(rcu_uaf),
> KUNIT_CASE(workqueue_uaf),
> --
> 2.40.1
>

2024-02-11 09:18:07

by Paul Heidekrüger

Subject: Re: [PATCH] kasan: add atomic tests

On 05.02.2024 22:00, Andrey Konovalov wrote:
> On Fri, Feb 2, 2024 at 12:33 PM Paul Heidekrüger
> <[email protected]> wrote:
> >
> > Test that KASan can detect some unsafe atomic accesses.
> >
> > As discussed in the linked thread below, these tests attempt to cover
> > the most common uses of atomics and, therefore, aren't exhaustive.
> >
> > CC: Marco Elver <[email protected]>
> > CC: Andrey Konovalov <[email protected]>
> > Link: https://lore.kernel.org/all/[email protected]/T/#u
> > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> > Signed-off-by: Paul Heidekrüger <[email protected]>
> > ---
> > Changes PATCH RFC v2 -> PATCH v1:
> > * Remove casts to void*
> > * Remove i_safe variable
> > * Add atomic_long_* test cases
> > * Carry over comment from kasan_bitops_tags()
> >
> > Changes PATCH RFC v1 -> PATCH RFC v2:
> > * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> > * Remove comments and move tests closer to the bitops tests
> > * For functions taking two addresses as input, test each address in a separate function call.
> > * Rename variables for clarity
> > * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
> >
> > mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 79 insertions(+)
> >
> > diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> > index 8281eb42464b..4ef2280c322c 100644
> > --- a/mm/kasan/kasan_test.c
> > +++ b/mm/kasan/kasan_test.c
> > @@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
> > kfree(bits);
> > }
> >
> > +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> > +{
> > + int *i_unsafe = (int *)unsafe;
> > +
> > + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> > +
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> > +
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> > + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> > +}
> > +
> > +static void kasan_atomics(struct kunit *test)
> > +{
> > + void *a1, *a2;
> > +
> > + /*
> > + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> > + * that the following 16 bytes will make up the redzone.
> > + */
> > + a1 = kzalloc(48, GFP_KERNEL);
> > + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> > + a2 = kzalloc(sizeof(int), GFP_KERNEL);
>
> I think this should be sizeof(atomic_long_t) or sizeof(long),
> otherwise a2 will not work as the safe argument for
> atomic_long_try_cmpxchg on 64-bit architectures.

Ah, good catch!

> > + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> > +
> > + /* Use atomics to access the redzone. */
> > + kasan_atomics_helper(test, a1 + 48, a2);
> > +
> > + kfree(a1);
> > + kfree(a2);
> > +}
> > +
> > static void kmalloc_double_kzfree(struct kunit *test)
> > {
> > char *ptr;
> > @@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> > KUNIT_CASE(kasan_strings),
> > KUNIT_CASE(kasan_bitops_generic),
> > KUNIT_CASE(kasan_bitops_tags),
> > + KUNIT_CASE(kasan_atomics),
> > KUNIT_CASE(kmalloc_double_kzfree),
> > KUNIT_CASE(rcu_uaf),
> > KUNIT_CASE(workqueue_uaf),
> > --
> > 2.40.1
> >

I'll be sending a v2 patch right away.

Thank you Marco, Mark, and Andrey!

Paul


2024-02-11 09:18:16

by Paul Heidekrüger

Subject: [PATCH v2] kasan: add atomic tests

Test that KASan can detect some unsafe atomic accesses.

As discussed in the linked thread below, these tests attempt to cover
the most common uses of atomics and, therefore, aren't exhaustive.

CC: Marco Elver <[email protected]>
CC: Andrey Konovalov <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/T/#u
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
Reviewed-by: Marco Elver <[email protected]>
Tested-by: Marco Elver <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Paul Heidekrüger <[email protected]>
---
Changes PATCH v1 -> PATCH v2:
* Make explicit cast implicit as per Mark's feedback
* Increase the size of the "a2" allocation as per Andrey's feedback
* Add tags

Changes PATCH RFC v2 -> PATCH v1:
* Remove casts to void*
* Remove i_safe variable
* Add atomic_long_* test cases
* Carry over comment from kasan_bitops_tags()

Changes PATCH RFC v1 -> PATCH RFC v2:
* Adjust size of allocations to make kasan_atomics() work with all KASan modes
* Remove comments and move tests closer to the bitops tests
* For functions taking two addresses as input, test each address in a separate function call.
* Rename variables for clarity
* Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()

mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 79 insertions(+)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 8281eb42464b..7bf09699b145 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
kfree(bits);
}

+static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
+{
+ int *i_unsafe = unsafe;
+
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
+
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+}
+
+static void kasan_atomics(struct kunit *test)
+{
+ void *a1, *a2;
+
+ /*
+ * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
+ * that the following 16 bytes will make up the redzone.
+ */
+ a1 = kzalloc(48, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
+ a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
+
+ /* Use atomics to access the redzone. */
+ kasan_atomics_helper(test, a1 + 48, a2);
+
+ kfree(a1);
+ kfree(a2);
+}
+
static void kmalloc_double_kzfree(struct kunit *test)
{
char *ptr;
@@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_strings),
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
+ KUNIT_CASE(kasan_atomics),
KUNIT_CASE(kmalloc_double_kzfree),
KUNIT_CASE(rcu_uaf),
KUNIT_CASE(workqueue_uaf),
--
2.40.1


2024-02-11 23:24:26

by Andrey Konovalov

Subject: Re: [PATCH v2] kasan: add atomic tests

On Sun, Feb 11, 2024 at 10:17 AM Paul Heidekrüger
<[email protected]> wrote:
>
> Test that KASan can detect some unsafe atomic accesses.
>
> As discussed in the linked thread below, these tests attempt to cover
> the most common uses of atomics and, therefore, aren't exhaustive.
>
> CC: Marco Elver <[email protected]>
> CC: Andrey Konovalov <[email protected]>
> Link: https://lore.kernel.org/all/[email protected]/T/#u
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Reviewed-by: Marco Elver <[email protected]>
> Tested-by: Marco Elver <[email protected]>
> Acked-by: Mark Rutland <[email protected]>
> Signed-off-by: Paul Heidekrüger <[email protected]>
> ---
> Changes PATCH v1 -> PATCH v2:
> * Make explicit cast implicit as per Mark's feedback
> * Increase the size of the "a2" allocation as per Andrey's feedback
> * Add tags
>
> Changes PATCH RFC v2 -> PATCH v1:
> * Remove casts to void*
> * Remove i_safe variable
> * Add atomic_long_* test cases
> * Carry over comment from kasan_bitops_tags()
>
> Changes PATCH RFC v1 -> PATCH RFC v2:
> * Adjust size of allocations to make kasan_atomics() work with all KASan modes
> * Remove comments and move tests closer to the bitops tests
> * For functions taking two addresses as an input, test each address in a separate function call.
> * Rename variables for clarity
> * Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()
>
> mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 79 insertions(+)
>
> diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
> index 8281eb42464b..7bf09699b145 100644
> --- a/mm/kasan/kasan_test.c
> +++ b/mm/kasan/kasan_test.c
> @@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
> kfree(bits);
> }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> + int *i_unsafe = unsafe;
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> + void *a1, *a2;
> +
> + /*
> + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> + * that the following 16 bytes will make up the redzone.
> + */
> + a1 = kzalloc(48, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> + a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);

This should check for a2, not a1. Sorry for not spotting this before.
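I.e. the intended checks are:

	a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);

Without this, a failed a2 allocation would go unnoticed and a NULL
pointer would be handed to the atomic helpers, likely crashing the
test rather than failing it cleanly.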

> +
> + /* Use atomics to access the redzone. */
> + kasan_atomics_helper(test, a1 + 48, a2);
> +
> + kfree(a1);
> + kfree(a2);
> +}
> +
> static void kmalloc_double_kzfree(struct kunit *test)
> {
> char *ptr;
> @@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE(kasan_strings),
> KUNIT_CASE(kasan_bitops_generic),
> KUNIT_CASE(kasan_bitops_tags),
> + KUNIT_CASE(kasan_atomics),
> KUNIT_CASE(kmalloc_double_kzfree),
> KUNIT_CASE(rcu_uaf),
> KUNIT_CASE(workqueue_uaf),
> --
> 2.40.1
>

With the mentioned change:

Reviewed-by: Andrey Konovalov <[email protected]>

Thank you!

2024-02-12 08:35:31

by Paul Heidekrüger

[permalink] [raw]
Subject: [PATCH v3] kasan: add atomic tests

Test that KASan can detect some unsafe atomic accesses.

As discussed in the linked thread below, these tests attempt to cover
the most common uses of atomics and, therefore, aren't exhaustive.

CC: Marco Elver <[email protected]>
CC: Andrey Konovalov <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/T/#u
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
Reviewed-by: Marco Elver <[email protected]>
Tested-by: Marco Elver <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Signed-off-by: Paul Heidekrüger <[email protected]>
---
Changes PATCH v2 -> PATCH v3:
* Fix the wrong variable being used when checking a2 after allocation
* Add Andrey's reviewed-by tag

Changes PATCH v1 -> PATCH v2:
* Make explicit cast implicit as per Mark's feedback
* Increase the size of the "a2" allocation as per Andrey's feedback
* Add tags

Changes PATCH RFC v2 -> PATCH v1:
* Remove casts to void*
* Remove i_safe variable
* Add atomic_long_* test cases
* Carry over comment from kasan_bitops_tags()

Changes PATCH RFC v1 -> PATCH RFC v2:
* Adjust size of allocations to make kasan_atomics() work with all KASan modes
* Remove comments and move tests closer to the bitops tests
* For functions taking two addresses as an input, test each address in a separate function call.
* Rename variables for clarity
* Add tests for READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and smp_store_release()

mm/kasan/kasan_test.c | 79 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 79 insertions(+)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 8281eb42464b..7f0f87a2c3c4 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -1150,6 +1150,84 @@ static void kasan_bitops_tags(struct kunit *test)
kfree(bits);
}

+static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
+{
+ int *i_unsafe = unsafe;
+
+ KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
+
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
+
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
+ KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
+}
+
+static void kasan_atomics(struct kunit *test)
+{
+ void *a1, *a2;
+
+ /*
+ * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
+ * that the following 16 bytes will make up the redzone.
+ */
+ a1 = kzalloc(48, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
+ a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
+
+ /* Use atomics to access the redzone. */
+ kasan_atomics_helper(test, a1 + 48, a2);
+
+ kfree(a1);
+ kfree(a2);
+}
+
static void kmalloc_double_kzfree(struct kunit *test)
{
char *ptr;
@@ -1553,6 +1631,7 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_strings),
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
+ KUNIT_CASE(kasan_atomics),
KUNIT_CASE(kmalloc_double_kzfree),
KUNIT_CASE(rcu_uaf),
KUNIT_CASE(workqueue_uaf),
--
2.40.1


2024-02-12 08:37:32

by Paul Heidekrüger

[permalink] [raw]
Subject: Re: [PATCH v2] kasan: add atomic tests

On 12.02.2024 00:16, Andrey Konovalov wrote:
> On Sun, Feb 11, 2024 at 10:17 AM Paul Heidekrüger
> <[email protected]> wrote:
> >
> > [...]
> > + /*
> > + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> > + * that the following 16 bytes will make up the redzone.
> > + */
> > + a1 = kzalloc(48, GFP_KERNEL);
> > + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> > + a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
> > + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
>
> This should check for a2, not a1. Sorry for not spotting this before.

No need to apologise. I'm the one who made the mistake, so I'm the one who
should've spotted it in the first place :-)

> > [...]
>
> With the mentioned change:
>
> Reviewed-by: Andrey Konovalov <[email protected]>
>
> Thank you!

Just sent v3.

Many thanks,
Paul