2020-07-03 13:43:53

by Marco Elver

Subject: [PATCH 1/3] kcsan: Add support for atomic builtins

Some architectures (currently e.g. s390, partially) implement atomics
using the compiler's atomic builtins (__atomic_*, __sync_*). To support
enabling KCSAN on such architectures in the future, or to support
experimental use of these builtins, implement support for them.

We should also avoid breaking KCSAN kernels due to use (accidental or
otherwise) of atomic builtins in drivers, as has happened in the past:
https://lkml.kernel.org/r/[email protected]

The instrumentation is subtly different from regular reads/writes: TSAN
instrumentation replaces the use of atomic builtins with a call into the
runtime, and the runtime's job is to also execute the desired atomic
operation. We rely on the __atomic_* compiler builtins, available with
all KCSAN-supported compilers, to implement each TSAN atomic
instrumentation function.
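
For example, for a relaxed 64-bit load, the compiler conceptually rewrites
(illustrative sketch only; the exact per-width signatures follow from the
definitions added below):

	/* illustrative variables, not part of this patch */
	u64 var, val;

	val = __atomic_load_n(&var, __ATOMIC_RELAXED);

into a call such as:

	val = __tsan_atomic64_load(&var, __ATOMIC_RELAXED);

where __tsan_atomic64_load() both informs KCSAN of the atomic access and
performs the actual load via the corresponding __atomic_* builtin.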

Signed-off-by: Marco Elver <[email protected]>
---
kernel/kcsan/core.c | 110 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 110 insertions(+)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index d803765603fb..6843169da759 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -856,3 +856,113 @@ void __tsan_init(void)
{
}
EXPORT_SYMBOL(__tsan_init);
+
+/*
+ * Instrumentation for atomic builtins (__atomic_*, __sync_*).
+ *
+ * Normal kernel code _should not_ be using them directly, but some
+ * architectures may implement some or all atomics using the compilers'
+ * builtins.
+ *
+ * Note: If an architecture decides to fully implement atomics using the
+ * builtins, because they are implicitly instrumented by KCSAN (and KASAN,
+ * etc.), implementing the ARCH_ATOMIC interface (to get instrumentation via
+ * atomic-instrumented) is no longer necessary.
+ *
+ * TSAN instrumentation replaces atomic accesses with calls to any of the below
+ * functions, whose job is to also execute the operation itself.
+ */
+
+#define DEFINE_TSAN_ATOMIC_LOAD_STORE(bits) \
+	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder); \
+	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder) \
+	{ \
+		check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_ATOMIC); \
+		return __atomic_load_n(ptr, memorder); \
+	} \
+	EXPORT_SYMBOL(__tsan_atomic##bits##_load); \
+	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder); \
+	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder) \
+	{ \
+		check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC); \
+		__atomic_store_n(ptr, v, memorder); \
+	} \
+	EXPORT_SYMBOL(__tsan_atomic##bits##_store)
+
+#define DEFINE_TSAN_ATOMIC_RMW(op, bits, suffix) \
+	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder); \
+	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder) \
+	{ \
+		check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC); \
+		return __atomic_##op##suffix(ptr, v, memorder); \
+	} \
+	EXPORT_SYMBOL(__tsan_atomic##bits##_##op)
+
+/*
+ * Note: CAS operations are always classified as write, even in case they
+ * fail. We cannot perform check_access() after a write, as it might lead to
+ * false positives, in cases such as:
+ *
+ *	T0: __atomic_compare_exchange_n(&p->flag, &old, 1, ...)
+ *
+ *	T1: if (__atomic_load_n(&p->flag, ...)) {
+ *		modify *p;
+ *		p->flag = 0;
+ *	}
+ *
+ * The only downside is that, if there are 3 threads, with one CAS that
+ * succeeds, another CAS that fails, and an unmarked racing operation, we may
+ * point at the wrong CAS as the source of the race. However, if we assume that
+ * all CAS can succeed in some other execution, the data race is still valid.
+ */
+#define DEFINE_TSAN_ATOMIC_CMPXCHG(bits, strength, weak) \
+	int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \
+							       u##bits val, int mo, int fail_mo); \
+	int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \
+							       u##bits val, int mo, int fail_mo) \
+	{ \
+		check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC); \
+		return __atomic_compare_exchange_n(ptr, exp, val, weak, mo, fail_mo); \
+	} \
+	EXPORT_SYMBOL(__tsan_atomic##bits##_compare_exchange_##strength)
+
+#define DEFINE_TSAN_ATOMIC_CMPXCHG_VAL(bits) \
+	u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \
+							   int mo, int fail_mo); \
+	u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \
+							   int mo, int fail_mo) \
+	{ \
+		check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC); \
+		__atomic_compare_exchange_n(ptr, &exp, val, 0, mo, fail_mo); \
+		return exp; \
+	} \
+	EXPORT_SYMBOL(__tsan_atomic##bits##_compare_exchange_val)
+
+#define DEFINE_TSAN_ATOMIC_OPS(bits) \
+	DEFINE_TSAN_ATOMIC_LOAD_STORE(bits); \
+	DEFINE_TSAN_ATOMIC_RMW(exchange, bits, _n); \
+	DEFINE_TSAN_ATOMIC_RMW(fetch_add, bits, ); \
+	DEFINE_TSAN_ATOMIC_RMW(fetch_sub, bits, ); \
+	DEFINE_TSAN_ATOMIC_RMW(fetch_and, bits, ); \
+	DEFINE_TSAN_ATOMIC_RMW(fetch_or, bits, ); \
+	DEFINE_TSAN_ATOMIC_RMW(fetch_xor, bits, ); \
+	DEFINE_TSAN_ATOMIC_RMW(fetch_nand, bits, ); \
+	DEFINE_TSAN_ATOMIC_CMPXCHG(bits, strong, 0); \
+	DEFINE_TSAN_ATOMIC_CMPXCHG(bits, weak, 1); \
+	DEFINE_TSAN_ATOMIC_CMPXCHG_VAL(bits)
+
+DEFINE_TSAN_ATOMIC_OPS(8);
+DEFINE_TSAN_ATOMIC_OPS(16);
+DEFINE_TSAN_ATOMIC_OPS(32);
+DEFINE_TSAN_ATOMIC_OPS(64);
+
+void __tsan_atomic_thread_fence(int memorder);
+void __tsan_atomic_thread_fence(int memorder)
+{
+	__atomic_thread_fence(memorder);
+}
+EXPORT_SYMBOL(__tsan_atomic_thread_fence);
+
+void __tsan_atomic_signal_fence(int memorder);
+void __tsan_atomic_signal_fence(int memorder) { }
+EXPORT_SYMBOL(__tsan_atomic_signal_fence);
--
2.27.0.212.ge8ba1cc988-goog


2020-07-03 13:44:04

by Marco Elver

Subject: [PATCH 3/3] kcsan: Add atomic builtin test case

Adds a test case to the kcsan-test module, to test that atomic builtin
instrumentation works.

Signed-off-by: Marco Elver <[email protected]>
---
kernel/kcsan/kcsan-test.c | 63 +++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)

diff --git a/kernel/kcsan/kcsan-test.c b/kernel/kcsan/kcsan-test.c
index fed6fcb5768c..721180cbbab1 100644
--- a/kernel/kcsan/kcsan-test.c
+++ b/kernel/kcsan/kcsan-test.c
@@ -390,6 +390,15 @@ static noinline void test_kernel_seqlock_writer(void)
write_sequnlock_irqrestore(&test_seqlock, flags);
}

+static noinline void test_kernel_atomic_builtins(void)
+{
+	/*
+	 * Generate concurrent accesses, expecting no reports, ensuring KCSAN
+	 * treats builtin atomics as actually atomic.
+	 */
+	__atomic_load_n(&test_var, __ATOMIC_RELAXED);
+}
+
/* ===== Test cases ===== */

/* Simple test with normal data race. */
@@ -852,6 +861,59 @@ static void test_seqlock_noreport(struct kunit *test)
KUNIT_EXPECT_FALSE(test, match_never);
}

+/*
+ * Test atomic builtins work and required instrumentation functions exist. We
+ * also test that KCSAN understands they're atomic by racing with them via
+ * test_kernel_atomic_builtins(), and expect no reports.
+ *
+ * The atomic builtins _SHOULD NOT_ be used in normal kernel code!
+ */
+static void test_atomic_builtins(struct kunit *test)
+{
+	bool match_never = false;
+
+	begin_test_checks(test_kernel_atomic_builtins, test_kernel_atomic_builtins);
+	do {
+		long tmp;
+
+		kcsan_enable_current();
+
+		__atomic_store_n(&test_var, 42L, __ATOMIC_RELAXED);
+		KUNIT_EXPECT_EQ(test, 42L, __atomic_load_n(&test_var, __ATOMIC_RELAXED));
+
+		KUNIT_EXPECT_EQ(test, 42L, __atomic_exchange_n(&test_var, 20, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, 20L, test_var);
+
+		tmp = 20L;
+		KUNIT_EXPECT_TRUE(test, __atomic_compare_exchange_n(&test_var, &tmp, 30L,
+								    0, __ATOMIC_RELAXED,
+								    __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, tmp, 20L);
+		KUNIT_EXPECT_EQ(test, test_var, 30L);
+		KUNIT_EXPECT_FALSE(test, __atomic_compare_exchange_n(&test_var, &tmp, 40L,
+								     1, __ATOMIC_RELAXED,
+								     __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, tmp, 30L);
+		KUNIT_EXPECT_EQ(test, test_var, 30L);
+
+		KUNIT_EXPECT_EQ(test, 30L, __atomic_fetch_add(&test_var, 1, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, 31L, __atomic_fetch_sub(&test_var, 1, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, 30L, __atomic_fetch_and(&test_var, 0xf, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, 14L, __atomic_fetch_xor(&test_var, 0xf, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, 1L, __atomic_fetch_or(&test_var, 0xf0, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, 241L, __atomic_fetch_nand(&test_var, 0xf, __ATOMIC_RELAXED));
+		KUNIT_EXPECT_EQ(test, -2L, test_var);
+
+		__atomic_thread_fence(__ATOMIC_SEQ_CST);
+		__atomic_signal_fence(__ATOMIC_SEQ_CST);
+
+		kcsan_disable_current();
+
+		match_never = report_available();
+	} while (!end_test_checks(match_never));
+	KUNIT_EXPECT_FALSE(test, match_never);
+}
+
/*
* Each test case is run with different numbers of threads. Until KUnit supports
* passing arguments for each test case, we encode #threads in the test case
@@ -891,6 +953,7 @@ static struct kunit_case kcsan_test_cases[] = {
KCSAN_KUNIT_CASE(test_assert_exclusive_access_scoped),
KCSAN_KUNIT_CASE(test_jiffies_noreport),
KCSAN_KUNIT_CASE(test_seqlock_noreport),
+ KCSAN_KUNIT_CASE(test_atomic_builtins),
{},
};

--
2.27.0.212.ge8ba1cc988-goog

2020-07-03 13:44:08

by Marco Elver

Subject: [PATCH 2/3] objtool: Add atomic builtin TSAN instrumentation to uaccess whitelist

Adds the new TSAN functions that may be emitted for atomic builtins to
objtool's uaccess whitelist.

Signed-off-by: Marco Elver <[email protected]>
---
tools/objtool/check.c | 50 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 5e0d70a89fb8..63d8b630c67a 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -528,6 +528,56 @@ static const char *uaccess_safe_builtin[] = {
"__tsan_write4",
"__tsan_write8",
"__tsan_write16",
+ "__tsan_atomic8_load",
+ "__tsan_atomic16_load",
+ "__tsan_atomic32_load",
+ "__tsan_atomic64_load",
+ "__tsan_atomic8_store",
+ "__tsan_atomic16_store",
+ "__tsan_atomic32_store",
+ "__tsan_atomic64_store",
+ "__tsan_atomic8_exchange",
+ "__tsan_atomic16_exchange",
+ "__tsan_atomic32_exchange",
+ "__tsan_atomic64_exchange",
+ "__tsan_atomic8_fetch_add",
+ "__tsan_atomic16_fetch_add",
+ "__tsan_atomic32_fetch_add",
+ "__tsan_atomic64_fetch_add",
+ "__tsan_atomic8_fetch_sub",
+ "__tsan_atomic16_fetch_sub",
+ "__tsan_atomic32_fetch_sub",
+ "__tsan_atomic64_fetch_sub",
+ "__tsan_atomic8_fetch_and",
+ "__tsan_atomic16_fetch_and",
+ "__tsan_atomic32_fetch_and",
+ "__tsan_atomic64_fetch_and",
+ "__tsan_atomic8_fetch_or",
+ "__tsan_atomic16_fetch_or",
+ "__tsan_atomic32_fetch_or",
+ "__tsan_atomic64_fetch_or",
+ "__tsan_atomic8_fetch_xor",
+ "__tsan_atomic16_fetch_xor",
+ "__tsan_atomic32_fetch_xor",
+ "__tsan_atomic64_fetch_xor",
+ "__tsan_atomic8_fetch_nand",
+ "__tsan_atomic16_fetch_nand",
+ "__tsan_atomic32_fetch_nand",
+ "__tsan_atomic64_fetch_nand",
+ "__tsan_atomic8_compare_exchange_strong",
+ "__tsan_atomic16_compare_exchange_strong",
+ "__tsan_atomic32_compare_exchange_strong",
+ "__tsan_atomic64_compare_exchange_strong",
+ "__tsan_atomic8_compare_exchange_weak",
+ "__tsan_atomic16_compare_exchange_weak",
+ "__tsan_atomic32_compare_exchange_weak",
+ "__tsan_atomic64_compare_exchange_weak",
+ "__tsan_atomic8_compare_exchange_val",
+ "__tsan_atomic16_compare_exchange_val",
+ "__tsan_atomic32_compare_exchange_val",
+ "__tsan_atomic64_compare_exchange_val",
+ "__tsan_atomic_thread_fence",
+ "__tsan_atomic_signal_fence",
/* KCOV */
"write_comp_data",
"check_kcov_mode",
--
2.27.0.212.ge8ba1cc988-goog

2020-07-03 14:37:20

by Dmitry Vyukov

Subject: Re: [PATCH 1/3] kcsan: Add support for atomic builtins

On Fri, Jul 3, 2020 at 3:40 PM Marco Elver <[email protected]> wrote:
>
> Some architectures (currently e.g. s390 partially) implement atomics
> using the compiler's atomic builtins (__atomic_*, __sync_*). To support
> enabling KCSAN on such architectures in future, or support experimental
> use of these builtins, implement support for them.
>
> We should also avoid breaking KCSAN kernels due to use (accidental or
> otherwise) of atomic builtins in drivers, as has happened in the past:
> https://lkml.kernel.org/r/[email protected]
>
> The instrumentation is subtly different from regular reads/writes: TSAN
> instrumentation replaces the use of atomic builtins with a call into the
> runtime, and the runtime's job is to also execute the desired atomic
> operation. We rely on the __atomic_* compiler builtins, available with
> all KCSAN-supported compilers, to implement each TSAN atomic
> instrumentation function.
>
> Signed-off-by: Marco Elver <[email protected]>

Reviewed-by: Dmitry Vyukov <[email protected]>


2020-07-06 23:46:21

by Paul E. McKenney

Subject: Re: [PATCH 3/3] kcsan: Add atomic builtin test case

On Fri, Jul 03, 2020 at 03:40:31PM +0200, Marco Elver wrote:
> Adds test case to kcsan-test module, to test atomic builtin
> instrumentation works.
>
> Signed-off-by: Marco Elver <[email protected]>

Applied all three, thank you!!!

Thanx, Paul
