2020-08-31 20:49:16

by Paul E. McKenney

Subject: [PATCH kcsan 0/19] KCSAN updates for v5.10

Hello!

This series provides KCSAN updates:

1. Add support for atomic builtins.

2. Add atomic builtin TSAN instrumentation to uaccess whitelist.

3. Add atomic builtin test case.

4. Support compounded read-write instrumentation.

5. objtool, kcsan: Add __tsan_read_write to uaccess whitelist.

6. Skew delay to be longer for certain access types.

7. Add missing CONFIG_KCSAN_IGNORE_ATOMICS checks.

8. Test support for compound instrumentation.

9. instrumented.h: Introduce read-write instrumentation hooks.

10. asm-generic/bitops: Use instrument_read_write() where appropriate.

11. locking/atomics: Use read-write instrumentation for atomic RMWs.

12. Simplify debugfs counter to name mapping.

13. Simplify constant string handling.

14. Remove debugfs test command.

15. Show message if enabled early.

16. Use pr_fmt for consistency.

17. Optimize debugfs stats counters.

18. bitops, kcsan: Partially revert instrumentation for non-atomic bitops.

19. Use tracing-safe version of prandom.

Thanx, Paul

------------------------------------------------------------------------

include/asm-generic/atomic-instrumented.h | 330 +++++++++----------
include/asm-generic/bitops/instrumented-atomic.h | 6
include/asm-generic/bitops/instrumented-lock.h | 2
include/asm-generic/bitops/instrumented-non-atomic.h | 36 +-
include/linux/instrumented.h | 30 +
include/linux/kcsan-checks.h | 45 +-
kernel/kcsan/core.c | 238 +++++++++++--
kernel/kcsan/debugfs.c | 136 +------
kernel/kcsan/kcsan-test.c | 128 ++++++-
kernel/kcsan/kcsan.h | 12
kernel/kcsan/report.c | 10
kernel/kcsan/selftest.c | 8
lib/Kconfig.kcsan | 5
scripts/Makefile.kcsan | 2
scripts/atomic/gen-atomic-instrumented.sh | 21 -
tools/objtool/check.c | 55 +++
16 files changed, 697 insertions(+), 367 deletions(-)


2020-08-31 20:49:43

by Paul E. McKenney

Subject: [PATCH kcsan 05/19] objtool, kcsan: Add __tsan_read_write to uaccess whitelist

From: Marco Elver <[email protected]>

Add the new __tsan_read_write compound instrumentation functions to
objtool's uaccess whitelist.
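
For illustration, this is roughly the transformation that makes the
whitelist entries necessary (a sketch; the compound instrumentation is
enabled elsewhere in this series, and exact codegen is
compiler-dependent):

	/* C source within a user_access_begin()/user_access_end() region: */
	*dst += 1;

	/* approximate instrumented form emitted by the compiler: */
	__tsan_read_write8(dst);	/* one compound check instead of */
	*dst += 1;			/* separate read and write checks */

objtool verifies that only whitelisted functions are called between
user_access_begin() and user_access_end(), so without these entries it
would flag the new __tsan_read_writeN calls.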

Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/objtool/check.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 7546a9d..5eee156 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -528,6 +528,11 @@ static const char *uaccess_safe_builtin[] = {
"__tsan_write4",
"__tsan_write8",
"__tsan_write16",
+ "__tsan_read_write1",
+ "__tsan_read_write2",
+ "__tsan_read_write4",
+ "__tsan_read_write8",
+ "__tsan_read_write16",
"__tsan_atomic8_load",
"__tsan_atomic16_load",
"__tsan_atomic32_load",
--
2.9.5

2020-08-31 20:50:19

by Paul E. McKenney

Subject: [PATCH kcsan 03/19] kcsan: Add atomic builtin test case

From: Marco Elver <[email protected]>

Add a test case to the kcsan-test module, to verify that atomic builtin
instrumentation works.
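
As a reading aid for the chain of fetch operations below (each builtin
returns the *old* value): starting from test_var == 30, fetch_add(1)
returns 30 and leaves 31; fetch_sub(1) returns 31 and leaves 30;
fetch_and(0xf) returns 30 and leaves 30 & 0xf == 14; fetch_xor(0xf)
returns 14 and leaves 14 ^ 0xf == 1; fetch_or(0xf0) returns 1 and
leaves 0xf1 == 241; and fetch_nand(0xf) returns 241 and leaves
~(241 & 0xf) == ~1 == -2, matching the final KUNIT_EXPECT_EQ().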

Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
kernel/kcsan/kcsan-test.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)

diff --git a/kernel/kcsan/kcsan-test.c b/kernel/kcsan/kcsan-test.c
index fed6fcb..721180c 100644
--- a/kernel/kcsan/kcsan-test.c
+++ b/kernel/kcsan/kcsan-test.c
@@ -390,6 +390,15 @@ static noinline void test_kernel_seqlock_writer(void)
write_sequnlock_irqrestore(&test_seqlock, flags);
}

+static noinline void test_kernel_atomic_builtins(void)
+{
+ /*
+ * Generate concurrent accesses, expecting no reports, ensuring KCSAN
+ * treats builtin atomics as actually atomic.
+ */
+ __atomic_load_n(&test_var, __ATOMIC_RELAXED);
+}
+
/* ===== Test cases ===== */

/* Simple test with normal data race. */
@@ -853,6 +862,59 @@ static void test_seqlock_noreport(struct kunit *test)
}

/*
+ * Test atomic builtins work and required instrumentation functions exist. We
+ * also test that KCSAN understands they're atomic by racing with them via
+ * test_kernel_atomic_builtins(), and expect no reports.
+ *
+ * The atomic builtins _SHOULD NOT_ be used in normal kernel code!
+ */
+static void test_atomic_builtins(struct kunit *test)
+{
+ bool match_never = false;
+
+ begin_test_checks(test_kernel_atomic_builtins, test_kernel_atomic_builtins);
+ do {
+ long tmp;
+
+ kcsan_enable_current();
+
+ __atomic_store_n(&test_var, 42L, __ATOMIC_RELAXED);
+ KUNIT_EXPECT_EQ(test, 42L, __atomic_load_n(&test_var, __ATOMIC_RELAXED));
+
+ KUNIT_EXPECT_EQ(test, 42L, __atomic_exchange_n(&test_var, 20, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, 20L, test_var);
+
+ tmp = 20L;
+ KUNIT_EXPECT_TRUE(test, __atomic_compare_exchange_n(&test_var, &tmp, 30L,
+ 0, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, tmp, 20L);
+ KUNIT_EXPECT_EQ(test, test_var, 30L);
+ KUNIT_EXPECT_FALSE(test, __atomic_compare_exchange_n(&test_var, &tmp, 40L,
+ 1, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, tmp, 30L);
+ KUNIT_EXPECT_EQ(test, test_var, 30L);
+
+ KUNIT_EXPECT_EQ(test, 30L, __atomic_fetch_add(&test_var, 1, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, 31L, __atomic_fetch_sub(&test_var, 1, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, 30L, __atomic_fetch_and(&test_var, 0xf, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, 14L, __atomic_fetch_xor(&test_var, 0xf, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, 1L, __atomic_fetch_or(&test_var, 0xf0, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, 241L, __atomic_fetch_nand(&test_var, 0xf, __ATOMIC_RELAXED));
+ KUNIT_EXPECT_EQ(test, -2L, test_var);
+
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+ __atomic_signal_fence(__ATOMIC_SEQ_CST);
+
+ kcsan_disable_current();
+
+ match_never = report_available();
+ } while (!end_test_checks(match_never));
+ KUNIT_EXPECT_FALSE(test, match_never);
+}
+
+/*
* Each test case is run with different numbers of threads. Until KUnit supports
* passing arguments for each test case, we encode #threads in the test case
* name (read by get_num_threads()). [The '-' was chosen as a stylistic
@@ -891,6 +953,7 @@ static struct kunit_case kcsan_test_cases[] = {
KCSAN_KUNIT_CASE(test_assert_exclusive_access_scoped),
KCSAN_KUNIT_CASE(test_jiffies_noreport),
KCSAN_KUNIT_CASE(test_seqlock_noreport),
+ KCSAN_KUNIT_CASE(test_atomic_builtins),
{},
};

--
2.9.5

2020-08-31 22:31:17

by Paul E. McKenney

Subject: [PATCH kcsan 18/19] bitops, kcsan: Partially revert instrumentation for non-atomic bitops

From: Marco Elver <[email protected]>

Prior to the change to distinguish read-write accesses, when
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y is set, KCSAN would consider
the non-atomic bitops as atomic. We want to partially revert to this
behaviour, but with one important distinction: report racing
modifications, since lost bits due to non-atomicity are certainly
possible.

Given the operations here only modify a single bit, assuming the
writer's non-atomicity is acceptable may be reasonable for certain
usage (and follows the permissive nature of the "assume plain writes
atomic" rule). In other words:

1. We want non-atomic read-modify-write races to be reported;
this is accomplished by kcsan_check_read(), where any
concurrent write (atomic or not) will generate a report.

2. We do not want to report races with marked readers, but -do-
want to report races with unmarked readers; this is
accomplished by the instrument_write() ("assume atomic
write" with Kconfig option set).

With the above rules, when KCSAN_ASSUME_PLAIN_WRITES_ATOMIC is selected,
it is hoped that KCSAN's reporting behaviour is better aligned with
current expected permissible usage for non-atomic bitops.
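
For illustration, the kind of usage this caters to (a sketch with
made-up names, not taken from actual kernel code):

	static unsigned long b_flags;

	void writer(void)	/* modifications serialized by some lock */
	{
		__test_and_set_bit(0, &b_flags);	/* non-atomic RMW */
	}

	bool reader(void)	/* lockless */
	{
		return READ_ONCE(b_flags) & 1;		/* marked read */
	}

With the Kconfig option set, the marked reader above is no longer
reported against the single-bit writer, while unmarked readers and
racing modifications still are.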

Note that a side-effect of not telling KCSAN that the accesses are
read-writes is that this information is not displayed in the access
summary in the report. It is, however, visible in inline-expanded stack
traces. For now, it does not make sense to introduce yet another special
case to KCSAN's runtime, only to cater to the case here.

Cc: Dmitry Vyukov <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Daniel Axtens <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
.../asm-generic/bitops/instrumented-non-atomic.h | 30 +++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/bitops/instrumented-non-atomic.h b/include/asm-generic/bitops/instrumented-non-atomic.h
index f86234c..37363d5 100644
--- a/include/asm-generic/bitops/instrumented-non-atomic.h
+++ b/include/asm-generic/bitops/instrumented-non-atomic.h
@@ -58,6 +58,30 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
arch___change_bit(nr, addr);
}

+static inline void __instrument_read_write_bitop(long nr, volatile unsigned long *addr)
+{
+ if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC)) {
+ /*
+ * We treat non-atomic read-write bitops a little more special.
+ * Given the operations here only modify a single bit, assuming
+ * non-atomicity of the writer is sufficient may be reasonable
+ * for certain usage (and follows the permissible nature of the
+ * assume-plain-writes-atomic rule):
+ * 1. report read-modify-write races -> check read;
+ * 2. do not report races with marked readers, but do report
+ * races with unmarked readers -> check "atomic" write.
+ */
+ kcsan_check_read(addr + BIT_WORD(nr), sizeof(long));
+ /*
+ * Use generic write instrumentation, in case other sanitizers
+ * or tools are enabled alongside KCSAN.
+ */
+ instrument_write(addr + BIT_WORD(nr), sizeof(long));
+ } else {
+ instrument_read_write(addr + BIT_WORD(nr), sizeof(long));
+ }
+}
+
/**
* __test_and_set_bit - Set a bit and return its old value
* @nr: Bit to set
@@ -68,7 +92,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
{
- instrument_read_write(addr + BIT_WORD(nr), sizeof(long));
+ __instrument_read_write_bitop(nr, addr);
return arch___test_and_set_bit(nr, addr);
}

@@ -82,7 +106,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
{
- instrument_read_write(addr + BIT_WORD(nr), sizeof(long));
+ __instrument_read_write_bitop(nr, addr);
return arch___test_and_clear_bit(nr, addr);
}

@@ -96,7 +120,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
*/
static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
{
- instrument_read_write(addr + BIT_WORD(nr), sizeof(long));
+ __instrument_read_write_bitop(nr, addr);
return arch___test_and_change_bit(nr, addr);
}

--
2.9.5

2020-08-31 22:31:19

by Paul E. McKenney

Subject: [PATCH kcsan 17/19] kcsan: Optimize debugfs stats counters

From: Marco Elver <[email protected]>

Remove the kcsan_counter_inc/dec() functions, as they perform no other
logic and are no longer needed.

This avoids several function calls in kcsan_setup_watchpoint() and
kcsan_found_watchpoint(), and lets the compiler warn us about potential
out-of-bounds accesses, since the array's size is known at all usage
sites at compile time.
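
For example, with the array's size visible at every usage site, a
constant out-of-bounds index such as

	atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_COUNT]);	/* one past the end */

can now be diagnosed at compile time (e.g. via -Warray-bounds), whereas
the old kcsan_counter_inc(KCSAN_COUNTER_COUNT) call compiled silently
and indexed out of bounds at runtime.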

Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
kernel/kcsan/core.c | 22 +++++++++++-----------
kernel/kcsan/debugfs.c | 21 +++++----------------
kernel/kcsan/kcsan.h | 12 ++++++------
kernel/kcsan/report.c | 2 +-
4 files changed, 23 insertions(+), 34 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index b176400..8a1ff605 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -367,13 +367,13 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
* already removed the watchpoint, or another thread consumed
* the watchpoint before this thread.
*/
- kcsan_counter_inc(KCSAN_COUNTER_REPORT_RACES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_REPORT_RACES]);
}

if ((type & KCSAN_ACCESS_ASSERT) != 0)
- kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]);
else
- kcsan_counter_inc(KCSAN_COUNTER_DATA_RACES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_DATA_RACES]);

user_access_restore(flags);
}
@@ -414,7 +414,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
goto out;

if (!check_encodable((unsigned long)ptr, size)) {
- kcsan_counter_inc(KCSAN_COUNTER_UNENCODABLE_ACCESSES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_UNENCODABLE_ACCESSES]);
goto out;
}

@@ -434,12 +434,12 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
* with which should_watch() returns true should be tweaked so
* that this case happens very rarely.
*/
- kcsan_counter_inc(KCSAN_COUNTER_NO_CAPACITY);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_NO_CAPACITY]);
goto out_unlock;
}

- kcsan_counter_inc(KCSAN_COUNTER_SETUP_WATCHPOINTS);
- kcsan_counter_inc(KCSAN_COUNTER_USED_WATCHPOINTS);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_SETUP_WATCHPOINTS]);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]);

/*
* Read the current value, to later check and infer a race if the data
@@ -541,16 +541,16 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
* increment this counter.
*/
if (is_assert && value_change == KCSAN_VALUE_CHANGE_TRUE)
- kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]);

kcsan_report(ptr, size, type, value_change, KCSAN_REPORT_RACE_SIGNAL,
watchpoint - watchpoints);
} else if (value_change == KCSAN_VALUE_CHANGE_TRUE) {
/* Inferring a race, since the value should not have changed. */

- kcsan_counter_inc(KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN]);
if (is_assert)
- kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]);

if (IS_ENABLED(CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN) || is_assert)
kcsan_report(ptr, size, type, KCSAN_VALUE_CHANGE_TRUE,
@@ -563,7 +563,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
* reused after this point.
*/
remove_watchpoint(watchpoint);
- kcsan_counter_dec(KCSAN_COUNTER_USED_WATCHPOINTS);
+ atomic_long_dec(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]);
out_unlock:
if (!kcsan_interrupt_watcher)
local_irq_restore(irq_flags);
diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
index 6c4914f..3c8093a 100644
--- a/kernel/kcsan/debugfs.c
+++ b/kernel/kcsan/debugfs.c
@@ -17,10 +17,7 @@

#include "kcsan.h"

-/*
- * Statistics counters.
- */
-static atomic_long_t counters[KCSAN_COUNTER_COUNT];
+atomic_long_t kcsan_counters[KCSAN_COUNTER_COUNT];
static const char *const counter_names[] = {
[KCSAN_COUNTER_USED_WATCHPOINTS] = "used_watchpoints",
[KCSAN_COUNTER_SETUP_WATCHPOINTS] = "setup_watchpoints",
@@ -53,16 +50,6 @@ static struct {
};
static DEFINE_SPINLOCK(report_filterlist_lock);

-void kcsan_counter_inc(enum kcsan_counter_id id)
-{
- atomic_long_inc(&counters[id]);
-}
-
-void kcsan_counter_dec(enum kcsan_counter_id id)
-{
- atomic_long_dec(&counters[id]);
-}
-
/*
* The microbenchmark allows benchmarking KCSAN core runtime only. To run
* multiple threads, pipe 'microbench=<iters>' from multiple tasks into the
@@ -206,8 +193,10 @@ static int show_info(struct seq_file *file, void *v)

/* show stats */
seq_printf(file, "enabled: %i\n", READ_ONCE(kcsan_enabled));
- for (i = 0; i < KCSAN_COUNTER_COUNT; ++i)
- seq_printf(file, "%s: %ld\n", counter_names[i], atomic_long_read(&counters[i]));
+ for (i = 0; i < KCSAN_COUNTER_COUNT; ++i) {
+ seq_printf(file, "%s: %ld\n", counter_names[i],
+ atomic_long_read(&kcsan_counters[i]));
+ }

/* show filter functions, and filter type */
spin_lock_irqsave(&report_filterlist_lock, flags);
diff --git a/kernel/kcsan/kcsan.h b/kernel/kcsan/kcsan.h
index 2948001..8d4bf34 100644
--- a/kernel/kcsan/kcsan.h
+++ b/kernel/kcsan/kcsan.h
@@ -8,6 +8,7 @@
#ifndef _KERNEL_KCSAN_KCSAN_H
#define _KERNEL_KCSAN_KCSAN_H

+#include <linux/atomic.h>
#include <linux/kcsan.h>
#include <linux/sched.h>

@@ -34,6 +35,10 @@ void kcsan_restore_irqtrace(struct task_struct *task);
*/
void kcsan_debugfs_init(void);

+/*
+ * Statistics counters displayed via debugfs; should only be modified in
+ * slow-paths.
+ */
enum kcsan_counter_id {
/*
* Number of watchpoints currently in use.
@@ -86,12 +91,7 @@ enum kcsan_counter_id {

KCSAN_COUNTER_COUNT, /* number of counters */
};
-
-/*
- * Increment/decrement counter with given id; avoid calling these in fast-path.
- */
-extern void kcsan_counter_inc(enum kcsan_counter_id id);
-extern void kcsan_counter_dec(enum kcsan_counter_id id);
+extern atomic_long_t kcsan_counters[KCSAN_COUNTER_COUNT];

/*
* Returns true if data races in the function symbol that maps to func_addr
diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
index bf1d594..d3bf87e 100644
--- a/kernel/kcsan/report.c
+++ b/kernel/kcsan/report.c
@@ -559,7 +559,7 @@ static bool prepare_report_consumer(unsigned long *flags,
* If the actual accesses do not match, this was a false
* positive due to watchpoint encoding.
*/
- kcsan_counter_inc(KCSAN_COUNTER_ENCODING_FALSE_POSITIVES);
+ atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ENCODING_FALSE_POSITIVES]);
goto discard;
}

--
2.9.5

2020-08-31 22:32:30

by Paul E. McKenney

Subject: [PATCH kcsan 09/19] instrumented.h: Introduce read-write instrumentation hooks

From: Marco Elver <[email protected]>

Introduce read-write instrumentation hooks, to more precisely denote an
operation's behaviour.

KCSAN is able to distinguish compound instrumentation, so with the new
hooks we benefit from improved reporting. More importantly, read-write
compound operations should not implicitly be treated as atomic if they
aren't actually atomic.
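
For context, this is roughly how the new hooks are used by the atomic
wrappers later in the series (patch 11); a sketch, not the verbatim
generated code:

	static __always_inline int atomic_fetch_add(int i, atomic_t *v)
	{
		instrument_atomic_read_write(v, sizeof(*v));	/* RMW: read-write */
		return arch_atomic_fetch_add(i, v);
	}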

Acked-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
include/linux/instrumented.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 43e6ea5..42faebb 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -43,6 +43,21 @@ static __always_inline void instrument_write(const volatile void *v, size_t size
}

/**
+ * instrument_read_write - instrument regular read-write access
+ *
+ * Instrument a regular read-write access. The instrumentation should be
+ * inserted before the actual write happens.
+ *
+ * @v address of access
+ * @size size of access
+ */
+static __always_inline void instrument_read_write(const volatile void *v, size_t size)
+{
+ kasan_check_write(v, size);
+ kcsan_check_read_write(v, size);
+}
+
+/**
* instrument_atomic_read - instrument atomic read access
*
* Instrument an atomic read access. The instrumentation should be inserted
@@ -73,6 +88,21 @@ static __always_inline void instrument_atomic_write(const volatile void *v, size
}

/**
+ * instrument_atomic_read_write - instrument atomic read-write access
+ *
+ * Instrument an atomic read-write access. The instrumentation should be
+ * inserted before the actual write happens.
+ *
+ * @v address of access
+ * @size size of access
+ */
+static __always_inline void instrument_atomic_read_write(const volatile void *v, size_t size)
+{
+ kasan_check_write(v, size);
+ kcsan_check_atomic_read_write(v, size);
+}
+
+/**
* instrument_copy_to_user - instrument reads of copy_to_user
*
* Instrument reads from kernel memory, that are due to copy_to_user (and
--
2.9.5

2020-08-31 22:33:33

by Paul E. McKenney

Subject: [PATCH kcsan 13/19] kcsan: Simplify constant string handling

From: Marco Elver <[email protected]>

Simplify prefix checking and length calculation for constant strings.
For the former, the kernel provides str_has_prefix(), and for the
latter we should just use strlen(".."), since GCC and Clang optimize
strlen() on string constants into constants.

No functional change intended.
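
For example, the two forms below are equivalent; str_has_prefix() also
returns the prefix's length on a match, which callers can use in place
of the open-coded constant:

	/* before */
	if (!strncmp(arg, "microbench=", sizeof("microbench=") - 1))
		...
	/* after */
	if (str_has_prefix(arg, "microbench="))
		...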

Signed-off-by: Marco Elver <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
kernel/kcsan/debugfs.c | 8 ++++----
kernel/kcsan/report.c | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
index 3a9566a..116bdd8 100644
--- a/kernel/kcsan/debugfs.c
+++ b/kernel/kcsan/debugfs.c
@@ -300,16 +300,16 @@ debugfs_write(struct file *file, const char __user *buf, size_t count, loff_t *o
WRITE_ONCE(kcsan_enabled, true);
} else if (!strcmp(arg, "off")) {
WRITE_ONCE(kcsan_enabled, false);
- } else if (!strncmp(arg, "microbench=", sizeof("microbench=") - 1)) {
+ } else if (str_has_prefix(arg, "microbench=")) {
unsigned long iters;

- if (kstrtoul(&arg[sizeof("microbench=") - 1], 0, &iters))
+ if (kstrtoul(&arg[strlen("microbench=")], 0, &iters))
return -EINVAL;
microbenchmark(iters);
- } else if (!strncmp(arg, "test=", sizeof("test=") - 1)) {
+ } else if (str_has_prefix(arg, "test=")) {
unsigned long iters;

- if (kstrtoul(&arg[sizeof("test=") - 1], 0, &iters))
+ if (kstrtoul(&arg[strlen("test=")], 0, &iters))
return -EINVAL;
test_thread(iters);
} else if (!strcmp(arg, "whitelist")) {
diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
index 3e83a69..bf1d594 100644
--- a/kernel/kcsan/report.c
+++ b/kernel/kcsan/report.c
@@ -279,8 +279,8 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries

cur = strnstr(buf, "kcsan_", len);
if (cur) {
- cur += sizeof("kcsan_") - 1;
- if (strncmp(cur, "test", sizeof("test") - 1))
+ cur += strlen("kcsan_");
+ if (!str_has_prefix(cur, "test"))
continue; /* KCSAN runtime function. */
/* KCSAN related test. */
}
--
2.9.5

2020-09-02 03:31:29

by Boqun Feng

Subject: Re: [PATCH kcsan 18/19] bitops, kcsan: Partially revert instrumentation for non-atomic bitops

Hi Paul and Marco,

The whole update patchset looks good to me, just one question out of
curiosity for this one, please see below:

On Mon, Aug 31, 2020 at 11:18:04AM -0700, [email protected] wrote:
> From: Marco Elver <[email protected]>
>
> Prior to the change to distinguish read-write accesses, when
> CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y is set, KCSAN would consider
> the non-atomic bitops as atomic. We want to partially revert to this
> behaviour, but with one important distinction: report racing
> modifications, since lost bits due to non-atomicity are certainly
> possible.
>
> Given the operations here only modify a single bit, assuming the
> writer's non-atomicity is acceptable may be reasonable for certain
> usage (and follows the permissive nature of the "assume plain writes
> atomic" rule). In other words:
>
> 1. We want non-atomic read-modify-write races to be reported;
> this is accomplished by kcsan_check_read(), where any
> concurrent write (atomic or not) will generate a report.
>
> 2. We do not want to report races with marked readers, but -do-
> want to report races with unmarked readers; this is
> accomplished by the instrument_write() ("assume atomic
> write" with Kconfig option set).
>

Is there any code in the kernel relying on the above assumption (i.e.
that the writer's non-atomicity is acceptable)? IOW, have you observed
anything bad (e.g. an annoying false positive) after applying the
read_write changes but without this patch?

Regards,
Boqun


2020-09-02 06:14:28

by Marco Elver

Subject: Re: [PATCH kcsan 18/19] bitops, kcsan: Partially revert instrumentation for non-atomic bitops

On Wed, Sep 02, 2020 at 11:30AM +0800, Boqun Feng wrote:
> Hi Paul and Marco,
>
> The whole update patchset looks good to me, just one question out of
> curiosity for this one, please see below:
>
> On Mon, Aug 31, 2020 at 11:18:04AM -0700, [email protected] wrote:
> > [...]
>
> Is there any code in the kernel relying on the above assumption (i.e.
> that the writer's non-atomicity is acceptable)? IOW, have you observed
> anything bad (e.g. an annoying false positive) after applying the
> read_write changes but without this patch?

We were looking for an answer to:

https://lkml.kernel.org/r/[email protected]

Initially we thought using atomic bitops might be required, but after a
longer offline discussion realized that simply marking the reader in
this case, while retaining the non-atomic bitop, is probably all that's
needed.

The version of KCSAN that found the above was still the one from Linux
5.8, but we realized that with this series' changed read-write
instrumentation for bitops, we'd regress and still report the race even
if the reader was marked. To avoid this with the default KCSAN config,
we determined that we need the patch here.

The bitops are indeed a bit more special, because for both the atomic
and non-atomic bitops we *can* reason about the generated code (since we
control it, although not sure about the asm-generic ones), and that
makes reasoning about accesses racing with non-atomic bitops more
feasible. At least that's our rationale for deciding that reverting the
non-atomic bitops' treatment to its more relaxed version is ok.

Thanks,
-- Marco

2020-09-04 01:28:34

by Boqun Feng

Subject: Re: [PATCH kcsan 18/19] bitops, kcsan: Partially revert instrumentation for non-atomic bitops

On Wed, Sep 02, 2020 at 08:13:15AM +0200, Marco Elver wrote:
> On Wed, Sep 02, 2020 at 11:30AM +0800, Boqun Feng wrote:
> > [...]
> >
> > Is there any code in the kernel relying on the above assumption (i.e.
> > that the writer's non-atomicity is acceptable)? IOW, have you observed
> > anything bad (e.g. an annoying false positive) after applying the
> > read_write changes but without this patch?
>
> We were looking for an answer to:
>
> https://lkml.kernel.org/r/[email protected]
>
> Initially we thought using atomic bitops might be required, but after a
> longer offline discussion realized that simply marking the reader in
> this case, while retaining the non-atomic bitop, is probably all that's
> needed.
>
> The version of KCSAN that found the above was still the one from Linux
> 5.8, but we realized that with this series' changed read-write
> instrumentation for bitops, we'd regress and still report the race even
> if the reader was marked. To avoid this with the default KCSAN config,
> we determined that we need the patch here.
>

Thanks for the background! Now I see the point of having this patch ;-)

FWIW, feel free to add for the whole series:

Reviewed-by: Boqun Feng <[email protected]>

Regards,
Boqun
