2020-02-04 17:23:56

by Marco Elver

Subject: [PATCH v2 1/3] kcsan: Add option to assume plain aligned writes up to word size are atomic

This adds option KCSAN_ASSUME_PLAIN_WRITES_ATOMIC. If enabled, plain
aligned writes up to word size are assumed to be atomic, and also not
subject to other unsafe compiler optimizations resulting in data races.

This option has been enabled by default to reflect current kernel-wide
preferences.

Signed-off-by: Marco Elver <[email protected]>
---
v2:
* Also check for alignment of writes.
---
kernel/kcsan/core.c | 22 +++++++++++++++++-----
lib/Kconfig.kcsan | 27 ++++++++++++++++++++-------
2 files changed, 37 insertions(+), 12 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 64b30f7716a12..e3c7d8f34f2ff 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -5,6 +5,7 @@
#include <linux/delay.h>
#include <linux/export.h>
#include <linux/init.h>
+#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/random.h>
@@ -169,10 +170,20 @@ static __always_inline struct kcsan_ctx *get_ctx(void)
return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx);
}

-static __always_inline bool is_atomic(const volatile void *ptr)
+static __always_inline bool
+is_atomic(const volatile void *ptr, size_t size, int type)
{
- struct kcsan_ctx *ctx = get_ctx();
+ struct kcsan_ctx *ctx;
+
+ if ((type & KCSAN_ACCESS_ATOMIC) != 0)
+ return true;

+ if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) &&
+ (type & KCSAN_ACCESS_WRITE) != 0 && size <= sizeof(long) &&
+ IS_ALIGNED((unsigned long)ptr, size))
+ return true; /* Assume aligned writes up to word size are atomic. */
+
+ ctx = get_ctx();
if (unlikely(ctx->atomic_next > 0)) {
/*
* Because we do not have separate contexts for nested
@@ -193,7 +204,8 @@ static __always_inline bool is_atomic(const volatile void *ptr)
return kcsan_is_atomic(ptr);
}

-static __always_inline bool should_watch(const volatile void *ptr, int type)
+static __always_inline bool
+should_watch(const volatile void *ptr, size_t size, int type)
{
/*
* Never set up watchpoints when memory operations are atomic.
@@ -202,7 +214,7 @@ static __always_inline bool should_watch(const volatile void *ptr, int type)
* should not count towards skipped instructions, and (2) to actually
* decrement kcsan_atomic_next for consecutive instruction stream.
*/
- if ((type & KCSAN_ACCESS_ATOMIC) != 0 || is_atomic(ptr))
+ if (is_atomic(ptr, size, type))
return false;

if (this_cpu_dec_return(kcsan_skip) >= 0)
@@ -460,7 +472,7 @@ static __always_inline void check_access(const volatile void *ptr, size_t size,
if (unlikely(watchpoint != NULL))
kcsan_found_watchpoint(ptr, size, type, watchpoint,
encoded_watchpoint);
- else if (unlikely(should_watch(ptr, type)))
+ else if (unlikely(should_watch(ptr, size, type)))
kcsan_setup_watchpoint(ptr, size, type);
}

diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index 3552990abcfe5..66126853dab02 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -91,13 +91,13 @@ config KCSAN_REPORT_ONCE_IN_MS
limiting reporting to avoid flooding the console with reports.
Setting this to 0 disables rate limiting.

-# Note that, while some of the below options could be turned into boot
-# parameters, to optimize for the common use-case, we avoid this because: (a)
-# it would impact performance (and we want to avoid static branch for all
-# {READ,WRITE}_ONCE, atomic_*, bitops, etc.), and (b) complicate the design
-# without real benefit. The main purpose of the below options is for use in
-# fuzzer configs to control reported data races, and they are not expected
-# to be switched frequently by a user.
+# The main purpose of the below options is to control reported data races (e.g.
+# in fuzzer configs), and they are not expected to be switched frequently by
+# most users. We could turn some of them into boot parameters, but given they
+# should not be switched normally, let's keep them here to simplify configuration.
+#
+# The defaults below are chosen to be very conservative, and may miss certain
+# bugs.

config KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
bool "Report races of unknown origin"
@@ -116,6 +116,19 @@ config KCSAN_REPORT_VALUE_CHANGE_ONLY
the data value of the memory location was observed to remain
unchanged, do not report the data race.

+config KCSAN_ASSUME_PLAIN_WRITES_ATOMIC
+ bool "Assume that plain aligned writes up to word size are atomic"
+ default y
+ help
+ Assume that plain aligned writes up to word size are atomic by
+ default, and also not subject to other unsafe compiler optimizations
+ resulting in data races. This will cause KCSAN to not report data
+ races due to conflicts where the only plain accesses are aligned
+ writes up to word size: conflicts between marked reads and plain
+ aligned writes up to word size will not be reported as data races;
+ notice that data races between two conflicting plain aligned writes
+ will also not be reported.
+
config KCSAN_IGNORE_ATOMICS
bool "Do not instrument marked atomic accesses"
help
--
2.25.0.341.g760bfbb309-goog


2020-02-04 17:24:02

by Marco Elver

Subject: [PATCH v2 3/3] kcsan: Cleanup of main KCSAN Kconfig option

This patch cleans up the rules of the 'KCSAN' Kconfig option by:
1. implicitly selecting 'STACKTRACE' instead of depending on it;
2. depending on DEBUG_KERNEL, to avoid accidentally turning KCSAN on if
the kernel is not meant to be a debug kernel;
3. updating the short and long summaries.

Signed-off-by: Marco Elver <[email protected]>
---
lib/Kconfig.kcsan | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index 020ac63e43617..9785bbf9a1d11 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -4,12 +4,15 @@ config HAVE_ARCH_KCSAN
bool

menuconfig KCSAN
- bool "KCSAN: watchpoint-based dynamic data race detector"
- depends on HAVE_ARCH_KCSAN && !KASAN && STACKTRACE
+ bool "KCSAN: dynamic data race detector"
+ depends on HAVE_ARCH_KCSAN && DEBUG_KERNEL && !KASAN
+ select STACKTRACE
help
- Kernel Concurrency Sanitizer is a dynamic data race detector, which
- uses a watchpoint-based sampling approach to detect races. See
- <file:Documentation/dev-tools/kcsan.rst> for more details.
+ The Kernel Concurrency Sanitizer (KCSAN) is a dynamic data race
+ detector, which relies on compile-time instrumentation, and uses a
+ watchpoint-based sampling approach to detect data races.
+
+ See <file:Documentation/dev-tools/kcsan.rst> for more details.

if KCSAN

--
2.25.0.341.g760bfbb309-goog

2020-02-04 18:17:11

by Paul E. McKenney

Subject: Re: [PATCH v2 1/3] kcsan: Add option to assume plain aligned writes up to word size are atomic

On Tue, Feb 04, 2020 at 06:21:10PM +0100, Marco Elver wrote:
> This adds option KCSAN_ASSUME_PLAIN_WRITES_ATOMIC. If enabled, plain
> aligned writes up to word size are assumed to be atomic, and also not
> subject to other unsafe compiler optimizations resulting in data races.
>
> This option has been enabled by default to reflect current kernel-wide
> preferences.
>
> Signed-off-by: Marco Elver <[email protected]>

Queued all three for further testing and review, thank you!

Thanx, Paul
