2020-02-11 17:09:26

by Marco Elver

Subject: [PATCH v2 1/5] kcsan: Move interfaces that affect checks to kcsan-checks.h

This moves the functions that change state affecting the behaviour of
kcsan_check_access() to kcsan-checks.h. Since these are likely to be used
together with kcsan_check_access(), it makes more sense to have them in
kcsan-checks.h, avoiding the need to include all of 'include/linux/kcsan.h'.
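
For example (an illustrative sketch, not part of this patch; the helper and
counter are hypothetical), a compilation unit that only wants to mark a
racy-by-design region now needs nothing beyond kcsan-checks.h:

  #include <linux/kcsan-checks.h>

  /* Hypothetical helper: an approximate event counter, racy by design. */
  static void count_event(unsigned long *counter)
  {
  	kcsan_nestable_atomic_begin();	/* following accesses treated as atomic */
  	(*counter)++;			/* racy increment, tolerated by design */
  	kcsan_nestable_atomic_end();
  }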

No functional change intended.

Signed-off-by: Marco Elver <[email protected]>
---
include/linux/kcsan-checks.h | 48 ++++++++++++++++++++++++++++++++++--
include/linux/kcsan.h | 41 ------------------------------
2 files changed, 46 insertions(+), 43 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index cf6961794e9a1..8675411c8dbcd 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -32,10 +32,54 @@
*/
void __kcsan_check_access(const volatile void *ptr, size_t size, int type);

-#else
+/**
+ * kcsan_nestable_atomic_begin - begin nestable atomic region
+ *
+ * Accesses within the atomic region may appear to race with other accesses but
+ * should be considered atomic.
+ */
+void kcsan_nestable_atomic_begin(void);
+
+/**
+ * kcsan_nestable_atomic_end - end nestable atomic region
+ */
+void kcsan_nestable_atomic_end(void);
+
+/**
+ * kcsan_flat_atomic_begin - begin flat atomic region
+ *
+ * Accesses within the atomic region may appear to race with other accesses but
+ * should be considered atomic.
+ */
+void kcsan_flat_atomic_begin(void);
+
+/**
+ * kcsan_flat_atomic_end - end flat atomic region
+ */
+void kcsan_flat_atomic_end(void);
+
+/**
+ * kcsan_atomic_next - consider following accesses as atomic
+ *
+ * Force treating the next n memory accesses for the current context as atomic
+ * operations.
+ *
+ * @n: number of following memory accesses to treat as atomic.
+ */
+void kcsan_atomic_next(int n);
+
+#else /* CONFIG_KCSAN */
+
static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
-#endif
+
+static inline void kcsan_nestable_atomic_begin(void) { }
+static inline void kcsan_nestable_atomic_end(void) { }
+static inline void kcsan_flat_atomic_begin(void) { }
+static inline void kcsan_flat_atomic_end(void) { }
+static inline void kcsan_atomic_next(int n) { }
+
+#endif /* CONFIG_KCSAN */

/*
* kcsan_*: Only calls into the runtime when the particular compilation unit has
diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index 1019e3a2c6897..7a614ca558f65 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -56,52 +56,11 @@ void kcsan_disable_current(void);
*/
void kcsan_enable_current(void);

-/**
- * kcsan_nestable_atomic_begin - begin nestable atomic region
- *
- * Accesses within the atomic region may appear to race with other accesses but
- * should be considered atomic.
- */
-void kcsan_nestable_atomic_begin(void);
-
-/**
- * kcsan_nestable_atomic_end - end nestable atomic region
- */
-void kcsan_nestable_atomic_end(void);
-
-/**
- * kcsan_flat_atomic_begin - begin flat atomic region
- *
- * Accesses within the atomic region may appear to race with other accesses but
- * should be considered atomic.
- */
-void kcsan_flat_atomic_begin(void);
-
-/**
- * kcsan_flat_atomic_end - end flat atomic region
- */
-void kcsan_flat_atomic_end(void);
-
-/**
- * kcsan_atomic_next - consider following accesses as atomic
- *
- * Force treating the next n memory accesses for the current context as atomic
- * operations.
- *
- * @n number of following memory accesses to treat as atomic.
- */
-void kcsan_atomic_next(int n);
-
#else /* CONFIG_KCSAN */

static inline void kcsan_init(void) { }
static inline void kcsan_disable_current(void) { }
static inline void kcsan_enable_current(void) { }
-static inline void kcsan_nestable_atomic_begin(void) { }
-static inline void kcsan_nestable_atomic_end(void) { }
-static inline void kcsan_flat_atomic_begin(void) { }
-static inline void kcsan_flat_atomic_end(void) { }
-static inline void kcsan_atomic_next(int n) { }

#endif /* CONFIG_KCSAN */

--
2.25.0.225.g125e21ebc7-goog


2020-02-11 17:09:27

by Marco Elver

Subject: [PATCH v2 2/5] compiler.h, seqlock.h: Remove unnecessary kcsan.h includes

We no longer have to include kcsan.h, since the required KCSAN interfaces
for both compiler.h and seqlock.h are now provided by kcsan-checks.h.
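
For example, the seqcount read path only relies on kcsan_atomic_next(),
which kcsan-checks.h provides; a rough sketch of that path (simplified
from seqlock.h, not verbatim):

  static inline unsigned raw_read_seqcount(const seqcount_t *s)
  {
  	unsigned ret = READ_ONCE(s->sequence);

  	smp_rmb();
  	/* Tell KCSAN to treat the next bounded run of reads as atomic. */
  	kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
  	return ret;
  }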

Signed-off-by: Marco Elver <[email protected]>
---
include/linux/compiler.h | 2 --
include/linux/seqlock.h | 2 +-
2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index c1bdf37571cb8..f504edebd5d71 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -313,8 +313,6 @@ unsigned long read_word_at_a_time(const void *addr)
__u.__val; \
})

-#include <linux/kcsan.h>
-
/**
* data_race - mark an expression as containing intentional data races
*
diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 239701cae3764..8b97204f35a77 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -37,7 +37,7 @@
#include <linux/preempt.h>
#include <linux/lockdep.h>
#include <linux/compiler.h>
-#include <linux/kcsan.h>
+#include <linux/kcsan-checks.h>
#include <asm/processor.h>

/*
--
2.25.0.225.g125e21ebc7-goog

2020-02-11 17:09:44

by Marco Elver

Subject: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)

This introduces ASSERT_EXCLUSIVE_BITS(var, mask).
ASSERT_EXCLUSIVE_BITS(var, mask) will cause KCSAN to assume that the
following access is safe w.r.t. data races (however, please see the
docbook comment for the disclaimer).

For more context on why this was considered necessary, please see:
http://lkml.kernel.org/r/[email protected]

In particular, before this patch, data races between reads (that use
@mask bits of an access that should not be modified concurrently) and
writes (that change ~@mask bits not used by the readers) would have been
annotated with "data_race()" (or "READ_ONCE()"). However, doing so would
then hide real problems: we would no longer be able to detect harmful
races between reads of @mask bits and writes to @mask bits.
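
To illustrate with hypothetical names (the docbook comment in this patch
contains the canonical examples): for a flags word whose TYPE_MASK bits
are stable after initialization, while the remaining bits may change
concurrently:

  /* Before: also hides racy writes to the TYPE_MASK bits: */
  type = data_race(flags) & TYPE_MASK;

  /* After: KCSAN keeps checking the TYPE_MASK bits for concurrent writers: */
  ASSERT_EXCLUSIVE_BITS(flags, TYPE_MASK);
  type = flags & TYPE_MASK;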

Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish:

1. Avoid proliferation of specific macros at the call sites: by
including a single mask in the argument list, we can use the same
macro in a wide variety of call sites, regardless of how and which
bits in a field each call site actually accesses.

2. The existing code does not need to be modified (although READ_ONCE()
may still be advisable if we cannot prove that the data race is
always safe).

3. We catch bugs where the exclusive bits are modified concurrently.

4. We document properties of the current code.

Signed-off-by: Marco Elver <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Qian Cai <[email protected]>
---
v2:
* Update API documentation to be clearer about how this compares to the
existing assertions, and update use-cases. [Based on suggestions from
John Hubbard]
* Update commit message. [Suggestions from John Hubbard]
---
include/linux/kcsan-checks.h | 69 ++++++++++++++++++++++++++++++++----
kernel/kcsan/debugfs.c | 15 +++++++-
2 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index 4ef5233ff3f04..1b8aac5d6a0b5 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -152,9 +152,9 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
#endif

/**
- * ASSERT_EXCLUSIVE_WRITER - assert no other threads are writing @var
+ * ASSERT_EXCLUSIVE_WRITER - assert no concurrent writes to @var
*
- * Assert that there are no other threads writing @var; other readers are
+ * Assert that there are no concurrent writes to @var; other readers are
* allowed. This assertion can be used to specify properties of concurrent code,
* where violation cannot be detected as a normal data race.
*
@@ -171,11 +171,11 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT)

/**
- * ASSERT_EXCLUSIVE_ACCESS - assert no other threads are accessing @var
+ * ASSERT_EXCLUSIVE_ACCESS - assert no concurrent accesses to @var
*
- * Assert that no other thread is accessing @var (no readers nor writers). This
- * assertion can be used to specify properties of concurrent code, where
- * violation cannot be detected as a normal data race.
+ * Assert that there are no concurrent accesses to @var (no readers nor
+ * writers). This assertion can be used to specify properties of concurrent
+ * code, where violation cannot be detected as a normal data race.
*
* For example, in a reference-counting algorithm where exclusive access is
* expected after the refcount reaches 0. We can check that this property
@@ -191,4 +191,61 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
#define ASSERT_EXCLUSIVE_ACCESS(var) \
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT)

+/**
+ * ASSERT_EXCLUSIVE_BITS - assert no concurrent writes to subset of bits in @var
+ *
+ * Bit-granular variant of ASSERT_EXCLUSIVE_WRITER(var).
+ *
+ * Assert that there are no concurrent writes to a subset of bits in @var;
+ * concurrent readers are permitted. This assertion captures more detailed
+ * bit-level properties, compared to the other (word granularity) assertions.
+ * Only the bits set in @mask are checked for concurrent modifications; the
+ * remaining bits are ignored, i.e. concurrent writes (or reads) to ~@mask bits
+ * do not result in a report.
+ *
+ * Use this for variables where some bits must not be modified concurrently,
+ * yet other bits are expected to be modified concurrently.
+ *
+ * For example, variables where, after initialization, some bits are read-only,
+ * but other bits may still be modified concurrently. A reader may wish to
+ * assert that this is true as follows:
+ *
+ * ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
+ * foo = (READ_ONCE(flags) & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
+ *
+ * Note: The access that immediately follows ASSERT_EXCLUSIVE_BITS() is
+ * assumed to access only the masked bits, and KCSAN optimistically treats it
+ * as safe even in the presence of data races; marking it with READ_ONCE() is
+ * therefore optional from KCSAN's point-of-view. We caution, however,
+ * that it may still be advisable to do so, since we cannot reason about all
+ * compiler optimizations when it comes to bit manipulations (on the reader
+ * and writer side). If you are sure nothing can go wrong, you can write the
+ * above simply as:
+ *
+ * ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
+ * foo = (flags & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
+ *
+ * Another example where this may be used is when certain bits of @var may
+ * only be modified when holding the appropriate lock, but other bits may still
+ * be modified concurrently. Writers, where other bits may change concurrently,
+ * could use the assertion as follows:
+ *
+ * spin_lock(&foo_lock);
+ * ASSERT_EXCLUSIVE_BITS(flags, FOO_MASK);
+ * old_flags = READ_ONCE(flags);
+ * new_flags = (old_flags & ~FOO_MASK) | (new_foo << FOO_SHIFT);
+ * if (cmpxchg(&flags, old_flags, new_flags) != old_flags) { ... }
+ * spin_unlock(&foo_lock);
+ *
+ * @var: variable to assert on
+ * @mask: only check for modifications to bits set in @mask
+ */
+#define ASSERT_EXCLUSIVE_BITS(var, mask) \
+ do { \
+ kcsan_set_access_mask(mask); \
+ __kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT);\
+ kcsan_set_access_mask(0); \
+ kcsan_atomic_next(1); \
+ } while (0)
+
#endif /* _LINUX_KCSAN_CHECKS_H */
diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
index 9bbba0e57c9b3..2ff1961239778 100644
--- a/kernel/kcsan/debugfs.c
+++ b/kernel/kcsan/debugfs.c
@@ -100,8 +100,10 @@ static noinline void microbenchmark(unsigned long iters)
* debugfs file from multiple tasks to generate real conflicts and show reports.
*/
static long test_dummy;
+static long test_flags;
static noinline void test_thread(unsigned long iters)
{
+ const long CHANGE_BITS = 0xff00ff00ff00ff00L;
const struct kcsan_ctx ctx_save = current->kcsan_ctx;
cycles_t cycles;

@@ -109,16 +111,27 @@ static noinline void test_thread(unsigned long iters)
memset(&current->kcsan_ctx, 0, sizeof(current->kcsan_ctx));

pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters);
+ pr_info("test_dummy@%px, test_flags@%px\n", &test_dummy, &test_flags);

cycles = get_cycles();
while (iters--) {
+ /* These all should generate reports. */
__kcsan_check_read(&test_dummy, sizeof(test_dummy));
- __kcsan_check_write(&test_dummy, sizeof(test_dummy));
ASSERT_EXCLUSIVE_WRITER(test_dummy);
ASSERT_EXCLUSIVE_ACCESS(test_dummy);

+ ASSERT_EXCLUSIVE_BITS(test_flags, ~CHANGE_BITS); /* no report */
+ __kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */
+
+ ASSERT_EXCLUSIVE_BITS(test_flags, CHANGE_BITS); /* report */
+ __kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */
+
/* not actually instrumented */
WRITE_ONCE(test_dummy, iters); /* to observe value-change */
+ __kcsan_check_write(&test_dummy, sizeof(test_dummy));
+
+ test_flags ^= CHANGE_BITS; /* generate value-change */
+ __kcsan_check_write(&test_flags, sizeof(test_flags));
}
cycles = get_cycles() - cycles;

--
2.25.0.225.g125e21ebc7-goog

2020-02-11 21:43:22

by John Hubbard

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)

On 2/11/20 8:04 AM, Marco Elver wrote:
> This introduces ASSERT_EXCLUSIVE_BITS(var, mask).
> ASSERT_EXCLUSIVE_BITS(var, mask) will cause KCSAN to assume that the
> following access is safe w.r.t. data races (however, please see the
> docbook comment for the disclaimer).
>
> For more context on why this was considered necessary, please see:
> http://lkml.kernel.org/r/[email protected]
>
> In particular, before this patch, data races between reads (that use
> @mask bits of an access that should not be modified concurrently) and
> writes (that change ~@mask bits not used by the readers) would have been
> annotated with "data_race()" (or "READ_ONCE()"). However, doing so would
> then hide real problems: we would no longer be able to detect harmful
> races between reads of @mask bits and writes to @mask bits.
>
> Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish:
>
> 1. Avoid proliferation of specific macros at the call sites: by
> including a single mask in the argument list, we can use the same
> macro in a wide variety of call sites, regardless of how and which
> bits in a field each call site actually accesses.
>
> 2. The existing code does not need to be modified (although READ_ONCE()
> may still be advisable if we cannot prove that the data race is
> always safe).
>
> 3. We catch bugs where the exclusive bits are modified concurrently.
>
> 4. We document properties of the current code.


API looks good to me. (I'm not yet familiar enough with KCSAN to provide
any useful review of the various kcsan*() calls that implement the
new macro.)

btw, it might be helpful for newcomers if you mentioned which tree this
is based on. I poked around briefly and failed several times to find one. :)

You can add:

Acked-by: John Hubbard <[email protected]>


thanks,
--
John Hubbard
NVIDIA

2020-02-12 10:58:10

by Marco Elver

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)

On Tue, 11 Feb 2020 at 22:41, John Hubbard <[email protected]> wrote:
>
> On 2/11/20 8:04 AM, Marco Elver wrote:
> > This introduces ASSERT_EXCLUSIVE_BITS(var, mask).
> > ASSERT_EXCLUSIVE_BITS(var, mask) will cause KCSAN to assume that the
> > following access is safe w.r.t. data races (however, please see the
> > docbook comment for the disclaimer).
> >
> > For more context on why this was considered necessary, please see:
> > http://lkml.kernel.org/r/[email protected]
> >
> > In particular, before this patch, data races between reads (that use
> > @mask bits of an access that should not be modified concurrently) and
> > writes (that change ~@mask bits not used by the readers) would have been
> > annotated with "data_race()" (or "READ_ONCE()"). However, doing so would
> > then hide real problems: we would no longer be able to detect harmful
> > races between reads of @mask bits and writes to @mask bits.
> >
> > Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish:
> >
> > 1. Avoid proliferation of specific macros at the call sites: by
> > including a single mask in the argument list, we can use the same
> > macro in a wide variety of call sites, regardless of how and which
> > bits in a field each call site actually accesses.
> >
> > 2. The existing code does not need to be modified (although READ_ONCE()
> > may still be advisable if we cannot prove that the data race is
> > always safe).
> >
> > 3. We catch bugs where the exclusive bits are modified concurrently.
> >
> > 4. We document properties of the current code.
>
>
> API looks good to me. (I'm not yet familiar enough with KCSAN to provide
> any useful review of the various kcsan*() calls that implement the
> new macro.)
>
> btw, it might be helpful for newcomers if you mentioned which tree this
> is based on. I poked around briefly and failed several times to find one. :)

KCSAN is currently in -rcu (kcsan branch has the latest version),
-tip, and -next.

> You can add:
>
> Acked-by: John Hubbard <[email protected]>

Thank you!
-- Marco


2020-02-12 12:30:51

by Qian Cai

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)



> On Feb 12, 2020, at 5:57 AM, Marco Elver <[email protected]> wrote:
>
> KCSAN is currently in -rcu (kcsan branch has the latest version),
> -tip, and -next.

It would be nice to at least have this patchset apply against linux-next, so I can give it a spin.

Maybe a better question for Paul: could he push all the latest KCSAN code to linux-next soon, now that we are past the merge window? I also noticed some data races in RCU, but then found out that some of them had already been fixed in the rcu tree, just not yet in linux-next.

2020-02-12 21:37:30

by Paul E. McKenney

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)

On Tue, Feb 11, 2020 at 01:41:14PM -0800, John Hubbard wrote:
> On 2/11/20 8:04 AM, Marco Elver wrote:
> > This introduces ASSERT_EXCLUSIVE_BITS(var, mask).
> > ASSERT_EXCLUSIVE_BITS(var, mask) will cause KCSAN to assume that the
> > following access is safe w.r.t. data races (however, please see the
> > docbook comment for the disclaimer).
> >
> > For more context on why this was considered necessary, please see:
> > http://lkml.kernel.org/r/[email protected]
> >
> > In particular, before this patch, data races between reads (that use
> > @mask bits of an access that should not be modified concurrently) and
> > writes (that change ~@mask bits not used by the readers) would have been
> > annotated with "data_race()" (or "READ_ONCE()"). However, doing so would
> > then hide real problems: we would no longer be able to detect harmful
> > races between reads of @mask bits and writes to @mask bits.
> >
> > Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish:
> >
> > 1. Avoid proliferation of specific macros at the call sites: by
> > including a single mask in the argument list, we can use the same
> > macro in a wide variety of call sites, regardless of how and which
> > bits in a field each call site actually accesses.
> >
> > 2. The existing code does not need to be modified (although READ_ONCE()
> > may still be advisable if we cannot prove that the data race is
> > always safe).
> >
> > 3. We catch bugs where the exclusive bits are modified concurrently.
> >
> > 4. We document properties of the current code.
>
>
> API looks good to me. (I'm not yet familiar enough with KCSAN to provide
> any useful review of the various kcsan*() calls that implement the
> new macro.)
>
> btw, it might be helpful for newcomers if you mentioned which tree this
> is based on. I poked around briefly and failed several times to find one. :)
>
> You can add:
>
> Acked-by: John Hubbard <[email protected]>

Queued for testing and further review, thank you both!

Thanx, Paul


2020-02-12 21:40:55

by Paul E. McKenney

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)

On Wed, Feb 12, 2020 at 07:30:16AM -0500, Qian Cai wrote:
>
>
> > On Feb 12, 2020, at 5:57 AM, Marco Elver <[email protected]> wrote:
> >
> > KCSAN is currently in -rcu (kcsan branch has the latest version),
> > -tip, and -next.
>
> It would be nice to at least have this patchset apply against linux-next, so I can give it a spin.
>
> Maybe a better question for Paul: could he push all the latest KCSAN code to linux-next soon, now that we are past the merge window? I also noticed some data races in RCU, but then found out that some of them had already been fixed in the rcu tree, just not yet in linux-next.

I have pushed all that I have queued other than the last set of five,
which I will do tomorrow (Prague time) if testing goes well.

Could you please check the -rcu "dev" branch to see if I am missing any
of the KCSAN patches?

Thanx, Paul

2020-02-13 00:50:27

by Qian Cai

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)



> On Feb 12, 2020, at 4:40 PM, Paul E. McKenney <[email protected]> wrote:
>
> On Wed, Feb 12, 2020 at 07:30:16AM -0500, Qian Cai wrote:
>>
>>
>>> On Feb 12, 2020, at 5:57 AM, Marco Elver <[email protected]> wrote:
>>>
>>> KCSAN is currently in -rcu (kcsan branch has the latest version),
>>> -tip, and -next.
>>
>> It would be nice to at least have this patchset apply against linux-next, so I can give it a spin.
>>
>> Maybe a better question for Paul: could he push all the latest KCSAN code to linux-next soon, now that we are past the merge window? I also noticed some data races in RCU, but then found out that some of them had already been fixed in the rcu tree, just not yet in linux-next.
>
> I have pushed all that I have queued other than the last set of five,
> which I will do tomorrow (Prague time) if testing goes well.
>
> Could you please check the -rcu "dev" branch to see if I am missing any
> of the KCSAN patches?

Nope. It looks good to me.

2020-02-13 06:35:50

by Paul E. McKenney

Subject: Re: [PATCH v2 5/5] kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)

On Wed, Feb 12, 2020 at 07:48:15PM -0500, Qian Cai wrote:
>
>
> > On Feb 12, 2020, at 4:40 PM, Paul E. McKenney <[email protected]> wrote:
> >
> > On Wed, Feb 12, 2020 at 07:30:16AM -0500, Qian Cai wrote:
> >>
> >>
> >>> On Feb 12, 2020, at 5:57 AM, Marco Elver <[email protected]> wrote:
> >>>
> >>> KCSAN is currently in -rcu (kcsan branch has the latest version),
> >>> -tip, and -next.
> >>
> >> It would be nice to at least have this patchset apply against linux-next, so I can give it a spin.
> >>
> >> Maybe a better question for Paul: could he push all the latest KCSAN code to linux-next soon, now that we are past the merge window? I also noticed some data races in RCU, but then found out that some of them had already been fixed in the rcu tree, just not yet in linux-next.
> >
> > I have pushed all that I have queued other than the last set of five,
> > which I will do tomorrow (Prague time) if testing goes well.
> >
> > Could you please check the -rcu "dev" branch to see if I am missing any
> > of the KCSAN patches?
>
> Nope. It looks good to me.

Thank you for checking!

Thanx, Paul