v3: https://lore.kernel.org/lkml/[email protected]/
v3 -> v4: Rebase.
Elaborate why ftrace_ops_list_func() change is needed on
x86_64 (Steven).
Add a comment to the DFLTCC patch (Alexander P.).
Simplify diag224();
Improve __arch_local_irq_attributes style;
Use IS_ENABLED(CONFIG_KMSAN) for vmalloc area (Heiko).
Align vmalloc area on _SEGMENT_SIZE (Alexander G.).
v2: https://lore.kernel.org/lkml/[email protected]/
v2 -> v3: Drop kmsan_memmove_metadata() and strlcpy() patches;
Remove kmsan_get_metadata() stub;
Move kmsan_enable_current() and kmsan_disable_current() to
include/linux/kmsan.h, explain why a counter is needed;
Drop the memset_no_sanitize_memory() patch;
Use __memset() in the SLAB_POISON patch;
Add kmsan-checks.h to the DFLTCC patch;
Add recursion check to the arch_kmsan_get_meta_or_null()
patch (Alexander P.).
Fix inline + __no_kmsan_checks issues.
New patch for s390/irqflags that resolves a lockdep warning.
New patch for s390/diag that resolves a false positive when
running on an LPAR.
New patch for STCCTM, same as above.
New patch for check_bytes_and_report() that resolves a false
positive that occurs even on Intel.
v1: https://lore.kernel.org/lkml/[email protected]/
v1 -> v2: Add comments, sort #includes, introduce
memset_no_sanitize_memory() and use it to avoid unpoisoning
of redzones, change vmalloc alignment to _REGION3_SIZE, add
R-bs (Alexander P.).
Fix building
[PATCH 28/33] s390/string: Add KMSAN support
with FORTIFY_SOURCE.
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Hi,
This series provides the minimal support for the Kernel Memory Sanitizer
(KMSAN) on s390. KMSAN is a clang-only instrumentation for finding
accesses to uninitialized memory. The clang support for s390 has already
been merged [1].
With this series, I can successfully boot s390 defconfig and
debug_defconfig with kmsan.panic=1. The tool found one real
s390-specific bug (fixed in master).
Best regards,
Ilya
[1] https://reviews.llvm.org/D148596
Ilya Leoshkevich (35):
ftrace: Unpoison ftrace_regs in ftrace_ops_list_func()
kmsan: Make the tests compatible with kmsan.panic=1
kmsan: Disable KMSAN when DEFERRED_STRUCT_PAGE_INIT is enabled
kmsan: Increase the maximum store size to 4096
kmsan: Fix is_bad_asm_addr() on arches with overlapping address spaces
kmsan: Fix kmsan_copy_to_user() on arches with overlapping address
spaces
kmsan: Remove a useless assignment from
kmsan_vmap_pages_range_noflush()
kmsan: Remove an x86-specific #include from kmsan.h
kmsan: Expose kmsan_get_metadata()
kmsan: Export panic_on_kmsan
kmsan: Allow disabling KMSAN checks for the current task
kmsan: Support SLAB_POISON
kmsan: Use ALIGN_DOWN() in kmsan_get_metadata()
kmsan: Do not round up pg_data_t size
mm: slub: Let KMSAN access metadata
mm: slub: Unpoison the memchr_inv() return value
mm: kfence: Disable KMSAN when checking the canary
lib/zlib: Unpoison DFLTCC output buffers
kmsan: Accept ranges starting with 0 on s390
s390/boot: Turn off KMSAN
s390: Use a larger stack for KMSAN
s390/boot: Add the KMSAN runtime stub
s390/checksum: Add a KMSAN check
s390/cpacf: Unpoison the results of cpacf_trng()
s390/cpumf: Unpoison STCCTM output buffer
s390/diag: Unpoison diag224() output buffer
s390/ftrace: Unpoison ftrace_regs in kprobe_ftrace_handler()
s390/irqflags: Do not instrument arch_local_irq_*() with KMSAN
s390/mm: Define KMSAN metadata for vmalloc and modules
s390/string: Add KMSAN support
s390/traps: Unpoison the kernel_stack_overflow()'s pt_regs
s390/uaccess: Add KMSAN support to put_user() and get_user()
s390/unwind: Disable KMSAN checks
s390: Implement the architecture-specific KMSAN functions
kmsan: Enable on s390
Documentation/dev-tools/kmsan.rst | 4 +-
arch/s390/Kconfig | 1 +
arch/s390/Makefile | 2 +-
arch/s390/boot/Makefile | 3 +
arch/s390/boot/kmsan.c | 6 ++
arch/s390/boot/startup.c | 7 ++
arch/s390/boot/string.c | 16 ++++
arch/s390/include/asm/checksum.h | 2 +
arch/s390/include/asm/cpacf.h | 3 +
arch/s390/include/asm/cpu_mf.h | 6 ++
arch/s390/include/asm/irqflags.h | 17 ++++-
arch/s390/include/asm/kmsan.h | 43 +++++++++++
arch/s390/include/asm/pgtable.h | 8 ++
arch/s390/include/asm/string.h | 20 +++--
arch/s390/include/asm/thread_info.h | 2 +-
arch/s390/include/asm/uaccess.h | 111 ++++++++++++++++++++--------
arch/s390/kernel/diag.c | 10 ++-
arch/s390/kernel/ftrace.c | 2 +
arch/s390/kernel/traps.c | 6 ++
arch/s390/kernel/unwind_bc.c | 4 +
drivers/s390/char/sclp.c | 2 +-
include/linux/kmsan.h | 33 +++++++++
include/linux/kmsan_types.h | 2 +-
kernel/trace/ftrace.c | 1 +
lib/zlib_dfltcc/dfltcc.h | 1 +
lib/zlib_dfltcc/dfltcc_util.h | 28 +++++++
mm/Kconfig | 1 +
mm/kfence/core.c | 11 ++-
mm/kmsan/core.c | 1 -
mm/kmsan/hooks.c | 23 ++++--
mm/kmsan/init.c | 7 +-
mm/kmsan/instrumentation.c | 11 +--
mm/kmsan/kmsan.h | 9 +--
mm/kmsan/kmsan_test.c | 5 ++
mm/kmsan/report.c | 8 +-
mm/kmsan/shadow.c | 9 +--
mm/slub.c | 17 ++++-
tools/objtool/check.c | 2 +
38 files changed, 361 insertions(+), 83 deletions(-)
create mode 100644 arch/s390/boot/kmsan.c
create mode 100644 arch/s390/include/asm/kmsan.h
--
2.45.1
Now that everything else is in place, enable KMSAN in Kconfig.
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index c59d2b54df49..3cba4993d7c7 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -158,6 +158,7 @@ config S390
select HAVE_ARCH_KASAN
select HAVE_ARCH_KASAN_VMALLOC
select HAVE_ARCH_KCSAN
+ select HAVE_ARCH_KMSAN
select HAVE_ARCH_KFENCE
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
select HAVE_ARCH_SECCOMP_FILTER
--
2.45.1
When building the kmsan test as a module, modpost fails with the
following error message:
ERROR: modpost: "panic_on_kmsan" [mm/kmsan/kmsan_test.ko] undefined!
Export panic_on_kmsan in order to improve KMSAN usability for
modules.
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
mm/kmsan/report.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c
index 02736ec757f2..c79d3b0d2d0d 100644
--- a/mm/kmsan/report.c
+++ b/mm/kmsan/report.c
@@ -20,6 +20,7 @@ static DEFINE_RAW_SPINLOCK(kmsan_report_lock);
/* Protected by kmsan_report_lock */
static char report_local_descr[DESCR_SIZE];
int panic_on_kmsan __read_mostly;
+EXPORT_SYMBOL_GPL(panic_on_kmsan);
#ifdef MODULE_PARAM_PREFIX
#undef MODULE_PARAM_PREFIX
--
2.45.1
x86's alloc_node_data() rounds up node data size to PAGE_SIZE. It's not
explained why it's needed, but it's most likely for performance
reasons, since the padding bytes are not used anywhere. Some other
architectures do it as well, e.g., mips rounds it up to the cache line
size.
kmsan_init_shadow() initializes metadata for each node data and assumes
the x86 rounding, which does not match other architectures. This may
cause the range end to overshoot the end of available memory, in turn
causing virt_to_page_or_null() in kmsan_init_alloc_meta_for_range() to
return NULL, which leads to a kernel panic shortly after.
Since the padding bytes are not used, drop the rounding.
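For reference, kmsan_init_shadow() consumes nd_size roughly like this
(a sketch based on mm/kmsan/init.c):

	for_each_online_node(nid)
		kmsan_record_future_shadow_range(
			NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size);

With the x86-style rounding, NODE_DATA(nid) + nd_size can point past the
end of available memory on architectures that do not pad pg_data_t.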
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
mm/kmsan/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index 3ac3b8921d36..9de76ac7062c 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -72,7 +72,7 @@ static void __init kmsan_record_future_shadow_range(void *start, void *end)
*/
void __init kmsan_init_shadow(void)
{
- const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE);
+ const size_t nd_size = sizeof(pg_data_t);
phys_addr_t p_start, p_end;
u64 loop;
int nid;
--
2.45.1
The unwind code can read uninitialized frames. Furthermore, even in
the good case, KMSAN does not emit shadow for backchains. Therefore
disable KMSAN checks for the unwinding functions.
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/kernel/unwind_bc.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/s390/kernel/unwind_bc.c b/arch/s390/kernel/unwind_bc.c
index 0ece156fdd7c..cd44be2b6ce8 100644
--- a/arch/s390/kernel/unwind_bc.c
+++ b/arch/s390/kernel/unwind_bc.c
@@ -49,6 +49,8 @@ static inline bool is_final_pt_regs(struct unwind_state *state,
READ_ONCE_NOCHECK(regs->psw.mask) & PSW_MASK_PSTATE;
}
+/* Avoid KMSAN false positives from touching uninitialized frames. */
+__no_kmsan_checks
bool unwind_next_frame(struct unwind_state *state)
{
struct stack_info *info = &state->stack_info;
@@ -118,6 +120,8 @@ bool unwind_next_frame(struct unwind_state *state)
}
EXPORT_SYMBOL_GPL(unwind_next_frame);
+/* Avoid KMSAN false positives from touching uninitialized frames. */
+__no_kmsan_checks
void __unwind_start(struct unwind_state *state, struct task_struct *task,
struct pt_regs *regs, unsigned long first_frame)
{
--
2.45.1
Like for KASAN, it's useful to temporarily disable KMSAN checks around,
e.g., redzone accesses. Introduce kmsan_disable_current() and
kmsan_enable_current(), which are similar to their KASAN counterparts.
Make them reentrant in order to handle memory allocations in interrupt
context. Repurpose the allow_reporting field for this.
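A minimal usage sketch (a hypothetical caller; the slub patch in this
series uses the same pattern via metadata_access_enable()):

	kmsan_disable_current();
	/* access redzones or other intentionally uninitialized memory */
	kmsan_enable_current();

Since the calls maintain a per-task depth counter rather than a boolean,
such pairs may nest, e.g. when an allocation in interrupt context
happens inside an already-disabled section.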
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
Documentation/dev-tools/kmsan.rst | 4 ++--
include/linux/kmsan.h | 24 ++++++++++++++++++++++++
include/linux/kmsan_types.h | 2 +-
mm/kmsan/core.c | 1 -
mm/kmsan/hooks.c | 18 +++++++++++++++---
mm/kmsan/report.c | 7 ++++---
tools/objtool/check.c | 2 ++
7 files changed, 48 insertions(+), 10 deletions(-)
diff --git a/Documentation/dev-tools/kmsan.rst b/Documentation/dev-tools/kmsan.rst
index 323eedad53cd..022a823f5f1b 100644
--- a/Documentation/dev-tools/kmsan.rst
+++ b/Documentation/dev-tools/kmsan.rst
@@ -338,11 +338,11 @@ Per-task KMSAN state
~~~~~~~~~~~~~~~~~~~~
Every task_struct has an associated KMSAN task state that holds the KMSAN
-context (see above) and a per-task flag disallowing KMSAN reports::
+context (see above) and a per-task counter disallowing KMSAN reports::
struct kmsan_context {
...
- bool allow_reporting;
+ unsigned int depth;
struct kmsan_context_state cstate;
...
}
diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index fe6c2212bdb1..23de1b3d6aee 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -239,6 +239,22 @@ void kmsan_unpoison_entry_regs(const struct pt_regs *regs);
*/
void *kmsan_get_metadata(void *addr, bool is_origin);
+/*
+ * kmsan_enable_current(): Enable KMSAN for the current task.
+ *
+ * Each kmsan_enable_current() call must be preceded by a
+ * kmsan_disable_current() call. These call pairs may be nested.
+ */
+void kmsan_enable_current(void);
+
+/*
+ * kmsan_disable_current(): Disable KMSAN for the current task.
+ *
+ * Each kmsan_disable_current() call must be followed by a
+ * kmsan_enable_current() call. These call pairs may be nested.
+ */
+void kmsan_disable_current(void);
+
#else
static inline void kmsan_init_shadow(void)
@@ -338,6 +354,14 @@ static inline void kmsan_unpoison_entry_regs(const struct pt_regs *regs)
{
}
+static inline void kmsan_enable_current(void)
+{
+}
+
+static inline void kmsan_disable_current(void)
+{
+}
+
#endif
#endif /* _LINUX_KMSAN_H */
diff --git a/include/linux/kmsan_types.h b/include/linux/kmsan_types.h
index 929287981afe..dfc59918b3c0 100644
--- a/include/linux/kmsan_types.h
+++ b/include/linux/kmsan_types.h
@@ -31,7 +31,7 @@ struct kmsan_context_state {
struct kmsan_ctx {
struct kmsan_context_state cstate;
int kmsan_in_runtime;
- bool allow_reporting;
+ unsigned int depth;
};
#endif /* _LINUX_KMSAN_TYPES_H */
diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
index 95f859e38c53..81b22220711a 100644
--- a/mm/kmsan/core.c
+++ b/mm/kmsan/core.c
@@ -43,7 +43,6 @@ void kmsan_internal_task_create(struct task_struct *task)
struct thread_info *info = current_thread_info();
__memset(ctx, 0, sizeof(*ctx));
- ctx->allow_reporting = true;
kmsan_internal_unpoison_memory(info, sizeof(*info), false);
}
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index b408714f9ba3..267d0afa2e8b 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -39,12 +39,10 @@ void kmsan_task_create(struct task_struct *task)
void kmsan_task_exit(struct task_struct *task)
{
- struct kmsan_ctx *ctx = &task->kmsan_ctx;
-
if (!kmsan_enabled || kmsan_in_runtime())
return;
- ctx->allow_reporting = false;
+ kmsan_disable_current();
}
void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
@@ -424,3 +422,17 @@ void kmsan_check_memory(const void *addr, size_t size)
REASON_ANY);
}
EXPORT_SYMBOL(kmsan_check_memory);
+
+void kmsan_enable_current(void)
+{
+ KMSAN_WARN_ON(current->kmsan_ctx.depth == 0);
+ current->kmsan_ctx.depth--;
+}
+EXPORT_SYMBOL(kmsan_enable_current);
+
+void kmsan_disable_current(void)
+{
+ current->kmsan_ctx.depth++;
+ KMSAN_WARN_ON(current->kmsan_ctx.depth == 0);
+}
+EXPORT_SYMBOL(kmsan_disable_current);
diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c
index c79d3b0d2d0d..92e73ec61435 100644
--- a/mm/kmsan/report.c
+++ b/mm/kmsan/report.c
@@ -8,6 +8,7 @@
*/
#include <linux/console.h>
+#include <linux/kmsan.h>
#include <linux/moduleparam.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>
@@ -158,12 +159,12 @@ void kmsan_report(depot_stack_handle_t origin, void *address, int size,
if (!kmsan_enabled)
return;
- if (!current->kmsan_ctx.allow_reporting)
+ if (current->kmsan_ctx.depth)
return;
if (!origin)
return;
- current->kmsan_ctx.allow_reporting = false;
+ kmsan_disable_current();
ua_flags = user_access_save();
raw_spin_lock(&kmsan_report_lock);
pr_err("=====================================================\n");
@@ -216,5 +217,5 @@ void kmsan_report(depot_stack_handle_t origin, void *address, int size,
if (panic_on_kmsan)
panic("kmsan.panic set ...\n");
user_access_restore(ua_flags);
- current->kmsan_ctx.allow_reporting = true;
+ kmsan_enable_current();
}
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 0a33d9195b7a..01237d167223 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1202,6 +1202,8 @@ static const char *uaccess_safe_builtin[] = {
"__sanitizer_cov_trace_switch",
/* KMSAN */
"kmsan_copy_to_user",
+ "kmsan_disable_current",
+ "kmsan_enable_current",
"kmsan_report",
"kmsan_unpoison_entry_regs",
"kmsan_unpoison_memory",
--
2.45.1
stcctm() uses the "Q" constraint for dest; therefore, KMSAN does not
understand that it fills multiple doublewords pointed to by dest, not
just one. This results in false positives.
Unpoison the whole dest manually with kmsan_unpoison_memory().
Reported-by: Alexander Gordeev <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/include/asm/cpu_mf.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/s390/include/asm/cpu_mf.h b/arch/s390/include/asm/cpu_mf.h
index a0de5b9b02ea..9e4bbc3e53f8 100644
--- a/arch/s390/include/asm/cpu_mf.h
+++ b/arch/s390/include/asm/cpu_mf.h
@@ -10,6 +10,7 @@
#define _ASM_S390_CPU_MF_H
#include <linux/errno.h>
+#include <linux/kmsan-checks.h>
#include <asm/asm-extable.h>
#include <asm/facility.h>
@@ -239,6 +240,11 @@ static __always_inline int stcctm(enum stcctm_ctr_set set, u64 range, u64 *dest)
: "=d" (cc)
: "Q" (*dest), "d" (range), "i" (set)
: "cc", "memory");
+ /*
+ * If cc == 2, fewer than RANGE counters are stored, but it's not easy
+ * to tell how many. Always unpoison the whole range for simplicity.
+ */
+ kmsan_unpoison_memory(dest, range * sizeof(u64));
return cc;
}
--
2.45.1
Lockdep generates the following false positives with KMSAN on s390x:
[ 6.063666] DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
[ ...]
[ 6.577050] Call Trace:
[ 6.619637] [<000000000690d2de>] check_flags+0x1fe/0x210
[ 6.665411] ([<000000000690d2da>] check_flags+0x1fa/0x210)
[ 6.707478] [<00000000006cec1a>] lock_acquire+0x2ca/0xce0
[ 6.749959] [<00000000069820ea>] _raw_spin_lock_irqsave+0xea/0x190
[ 6.794912] [<00000000041fc988>] __stack_depot_save+0x218/0x5b0
[ 6.838420] [<000000000197affe>] __msan_poison_alloca+0xfe/0x1a0
[ 6.882985] [<0000000007c5827c>] start_kernel+0x70c/0xd50
[ 6.927454] [<0000000000100036>] startup_continue+0x36/0x40
Between trace_hardirqs_on() and `stosm __mask, 3`, lockdep thinks that
interrupts are on, but on the CPU they are still off. KMSAN
instrumentation takes spinlocks, giving lockdep a chance to see and
complain about this discrepancy.
KMSAN instrumentation is inserted in order to poison the __mask
variable. Disable instrumentation in the respective functions. They are
very small and it's easy to see that no important metadata updates are
lost because of this.
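Schematically, the instrumented code looks like this (clang
pseudo-output, not literal source; the runtime call is what ends up
taking spinlocks, as seen in the trace above):

	static unsigned long __arch_local_irq_stosm(unsigned long flags)
	{
		unsigned long __mask;

		/* inserted by KMSAN instrumentation; takes stack depot locks: */
		__msan_poison_alloca(&__mask, sizeof(__mask), "__mask");
		asm volatile("stosm %0,%1" : "=Q" (__mask) : "i" (flags) : "memory");
		return __mask;
	}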
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/include/asm/irqflags.h | 17 ++++++++++++++---
drivers/s390/char/sclp.c | 2 +-
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/arch/s390/include/asm/irqflags.h b/arch/s390/include/asm/irqflags.h
index 02427b205c11..bcab456dfb80 100644
--- a/arch/s390/include/asm/irqflags.h
+++ b/arch/s390/include/asm/irqflags.h
@@ -37,12 +37,18 @@ static __always_inline void __arch_local_irq_ssm(unsigned long flags)
asm volatile("ssm %0" : : "Q" (flags) : "memory");
}
-static __always_inline unsigned long arch_local_save_flags(void)
+#ifdef CONFIG_KMSAN
+#define arch_local_irq_attributes noinline notrace __no_sanitize_memory __maybe_unused
+#else
+#define arch_local_irq_attributes __always_inline
+#endif
+
+static arch_local_irq_attributes unsigned long arch_local_save_flags(void)
{
return __arch_local_irq_stnsm(0xff);
}
-static __always_inline unsigned long arch_local_irq_save(void)
+static arch_local_irq_attributes unsigned long arch_local_irq_save(void)
{
return __arch_local_irq_stnsm(0xfc);
}
@@ -52,7 +58,12 @@ static __always_inline void arch_local_irq_disable(void)
arch_local_irq_save();
}
-static __always_inline void arch_local_irq_enable(void)
+static arch_local_irq_attributes void arch_local_irq_enable_external(void)
+{
+ __arch_local_irq_stosm(0x01);
+}
+
+static arch_local_irq_attributes void arch_local_irq_enable(void)
{
__arch_local_irq_stosm(0x03);
}
diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
index d53ee34d398f..fb1d9949adca 100644
--- a/drivers/s390/char/sclp.c
+++ b/drivers/s390/char/sclp.c
@@ -736,7 +736,7 @@ sclp_sync_wait(void)
cr0_sync.val = cr0.val & ~CR0_IRQ_SUBCLASS_MASK;
cr0_sync.val |= 1UL << (63 - 54);
local_ctl_load(0, &cr0_sync);
- __arch_local_irq_stosm(0x01);
+ arch_local_irq_enable_external();
/* Loop until driver state indicates finished request */
while (sclp_running_state != sclp_running_state_idle) {
/* Check for expired request timer */
--
2.45.1
Prevent KMSAN from complaining about buffers filled by cpacf_trng()
being uninitialized.
Tested-by: Alexander Gordeev <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/include/asm/cpacf.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/s390/include/asm/cpacf.h b/arch/s390/include/asm/cpacf.h
index c786538e397c..dae8843b164f 100644
--- a/arch/s390/include/asm/cpacf.h
+++ b/arch/s390/include/asm/cpacf.h
@@ -12,6 +12,7 @@
#define _ASM_S390_CPACF_H
#include <asm/facility.h>
+#include <linux/kmsan-checks.h>
/*
* Instruction opcodes for the CPACF instructions
@@ -542,6 +543,8 @@ static inline void cpacf_trng(u8 *ucbuf, unsigned long ucbuf_len,
: [ucbuf] "+&d" (u.pair), [cbuf] "+&d" (c.pair)
: [fc] "K" (CPACF_PRNO_TRNG), [opc] "i" (CPACF_PRNO)
: "cc", "memory", "0");
+ kmsan_unpoison_memory(ucbuf, ucbuf_len);
+ kmsan_unpoison_memory(cbuf, cbuf_len);
}
/**
--
2.45.1
It should be possible to have inline functions in the s390 header
files that call kmsan_unpoison_memory(). The problem is that these
header files might be included by the decompressor, which does not
contain the KMSAN runtime, causing linker errors.
Not compiling these calls if __SANITIZE_MEMORY__ is not defined -
either by changing kmsan-checks.h or at the call sites - may cause
unintended side effects, since calling these functions from
uninstrumented code that is linked into the kernel is a valid use case.
One might want to explicitly distinguish between the kernel and the
decompressor. Checking for a decompressor-specific #define is quite
heavy-handed, and will have to be done at all call sites.
A more generic approach is to provide a dummy kmsan_unpoison_memory()
definition. This produces some runtime overhead, but only when building
with CONFIG_KMSAN. The benefit is that it does not disturb the existing
KMSAN build logic and call sites don't need to be changed.
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/boot/Makefile | 1 +
arch/s390/boot/kmsan.c | 6 ++++++
2 files changed, 7 insertions(+)
create mode 100644 arch/s390/boot/kmsan.c
diff --git a/arch/s390/boot/Makefile b/arch/s390/boot/Makefile
index 526ed20b9d31..e7658997452b 100644
--- a/arch/s390/boot/Makefile
+++ b/arch/s390/boot/Makefile
@@ -44,6 +44,7 @@ obj-$(findstring y, $(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) $(CONFIG_PGSTE)) +=
obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
obj-y += $(if $(CONFIG_KERNEL_UNCOMPRESSED),,decompressor.o) info.o
obj-$(CONFIG_KERNEL_ZSTD) += clz_ctz.o
+obj-$(CONFIG_KMSAN) += kmsan.o
obj-all := $(obj-y) piggy.o syms.o
targets := bzImage section_cmp.boot.data section_cmp.boot.preserved.data $(obj-y)
diff --git a/arch/s390/boot/kmsan.c b/arch/s390/boot/kmsan.c
new file mode 100644
index 000000000000..e7b3ac48143e
--- /dev/null
+++ b/arch/s390/boot/kmsan.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kmsan-checks.h>
+
+void kmsan_unpoison_memory(const void *address, size_t size)
+{
+}
--
2.45.1
Each s390 CPU has lowcore pages associated with it. Each CPU sees its
own lowcore at virtual address 0 through a hardware mechanism called
prefixing. Additionally, all lowcores are mapped to non-0 virtual
addresses stored in the lowcore_ptr[] array.
When the lowcore is accessed through virtual address 0, one needs to
resolve metadata for lowcore_ptr[raw_smp_processor_id()].
Expose kmsan_get_metadata() to make it possible to do this from the
arch code.
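A sketch of the intended arch-side caller (the actual implementation is
added by the s390 kmsan.h patch later in this series):

	static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
	{
		if (is_lowcore_addr(addr)) {
			/* resolve the prefix to get a distinct mapping */
			addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
				(void *)&S390_lowcore;
			return kmsan_get_metadata(addr, is_origin);
		}
		return NULL;
	}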
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
include/linux/kmsan.h | 9 +++++++++
mm/kmsan/instrumentation.c | 1 +
mm/kmsan/kmsan.h | 1 -
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index e0c23a32cdf0..fe6c2212bdb1 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -230,6 +230,15 @@ void kmsan_handle_urb(const struct urb *urb, bool is_out);
*/
void kmsan_unpoison_entry_regs(const struct pt_regs *regs);
+/**
+ * kmsan_get_metadata() - Return a pointer to KMSAN shadow or origins.
+ * @addr: kernel address.
+ * @is_origin: whether to return origins or shadow.
+ *
+ * Return NULL if metadata cannot be found.
+ */
+void *kmsan_get_metadata(void *addr, bool is_origin);
+
#else
static inline void kmsan_init_shadow(void)
diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c
index 8a1bbbc723ab..94b49fac9d8b 100644
--- a/mm/kmsan/instrumentation.c
+++ b/mm/kmsan/instrumentation.c
@@ -14,6 +14,7 @@
#include "kmsan.h"
#include <linux/gfp.h>
+#include <linux/kmsan.h>
#include <linux/kmsan_string.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h
index adf443bcffe8..34b83c301d57 100644
--- a/mm/kmsan/kmsan.h
+++ b/mm/kmsan/kmsan.h
@@ -66,7 +66,6 @@ struct shadow_origin_ptr {
struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size,
bool store);
-void *kmsan_get_metadata(void *addr, bool is_origin);
void __init kmsan_init_alloc_meta_for_range(void *start, void *end);
enum kmsan_bug_reason {
--
2.45.1
The pages for the KMSAN metadata associated with most kernel mappings
are taken from memblock by the common code. However, vmalloc and module
metadata needs to be defined by the architectures.
Be a little bit more careful than x86: allocate exactly MODULES_LEN
for the module shadow and origins, and then take 2/3 of vmalloc for
the vmalloc shadow and origins. This ensures that users passing small
vmalloc= values on the command line do not cause module metadata
collisions.
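The resulting layout, from low to high addresses (a sketch derived from
the macros below):

	VMALLOC_START               vmalloc area      (vmalloc_size)
	VMALLOC_END                 vmalloc shadow    (vmalloc_size)
	KMSAN_VMALLOC_ORIGIN_START  vmalloc origins   (vmalloc_size)
	KMSAN_MODULES_SHADOW_START  module shadow     (MODULES_LEN)
	KMSAN_MODULES_ORIGIN_START  module origins    (MODULES_LEN)
	MODULES_VADDR               modules           (MODULES_LEN)
	MODULES_END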
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Alexander Gordeev <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/boot/startup.c | 7 +++++++
arch/s390/include/asm/pgtable.h | 8 ++++++++
2 files changed, 15 insertions(+)
diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
index 182aac6a0f77..93775142322d 100644
--- a/arch/s390/boot/startup.c
+++ b/arch/s390/boot/startup.c
@@ -301,11 +301,18 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
MODULES_END = round_down(kernel_start, _SEGMENT_SIZE);
MODULES_VADDR = MODULES_END - MODULES_LEN;
VMALLOC_END = MODULES_VADDR;
+ if (IS_ENABLED(CONFIG_KMSAN))
+ VMALLOC_END -= MODULES_LEN * 2;
/* allow vmalloc area to occupy up to about 1/2 of the rest virtual space left */
vsize = (VMALLOC_END - FIXMAP_SIZE) / 2;
vsize = round_down(vsize, _SEGMENT_SIZE);
vmalloc_size = min(vmalloc_size, vsize);
+ if (IS_ENABLED(CONFIG_KMSAN)) {
+ /* take 2/3 of vmalloc area for KMSAN shadow and origins */
+ vmalloc_size = round_down(vmalloc_size / 3, _SEGMENT_SIZE);
+ VMALLOC_END -= vmalloc_size * 2;
+ }
VMALLOC_START = VMALLOC_END - vmalloc_size;
__memcpy_real_area = round_down(VMALLOC_START - MEMCPY_REAL_SIZE, PAGE_SIZE);
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 70b6ee557eb2..2f44c23efec0 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -107,6 +107,14 @@ static inline int is_module_addr(void *addr)
return 1;
}
+#ifdef CONFIG_KMSAN
+#define KMSAN_VMALLOC_SIZE (VMALLOC_END - VMALLOC_START)
+#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
+#define KMSAN_VMALLOC_ORIGIN_START (KMSAN_VMALLOC_SHADOW_START + KMSAN_VMALLOC_SIZE)
+#define KMSAN_MODULES_SHADOW_START (KMSAN_VMALLOC_ORIGIN_START + KMSAN_VMALLOC_SIZE)
+#define KMSAN_MODULES_ORIGIN_START (KMSAN_MODULES_SHADOW_START + MODULES_LEN)
+#endif
+
#ifdef CONFIG_RANDOMIZE_BASE
#define KASLR_LEN (1UL << 31)
#else
--
2.45.1
Unpoisoning the pt_regs is normally done by the generic entry code, but
the kernel_stack_overflow() flow bypasses it.
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/kernel/traps.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c
index 52578b5cecbd..dde69d2a64f0 100644
--- a/arch/s390/kernel/traps.c
+++ b/arch/s390/kernel/traps.c
@@ -27,6 +27,7 @@
#include <linux/uaccess.h>
#include <linux/cpu.h>
#include <linux/entry-common.h>
+#include <linux/kmsan.h>
#include <asm/asm-extable.h>
#include <asm/vtime.h>
#include <asm/fpu.h>
@@ -262,6 +263,11 @@ static void monitor_event_exception(struct pt_regs *regs)
void kernel_stack_overflow(struct pt_regs *regs)
{
+ /*
+ * Normally regs are unpoisoned by the generic entry code, but
+ * kernel_stack_overflow() is a rare case that is called bypassing it.
+ */
+ kmsan_unpoison_entry_regs(regs);
bust_spinlocks(1);
printk("Kernel stack overflow.\n");
show_regs(regs);
--
2.45.1
Add KMSAN support for the s390 implementations of the string functions.
Do this similarly to how it's already done for KASAN, except that the
optimized memset{16,32,64}() functions need to be disabled: it's
important for KMSAN to know that they initialized something.
The way the boot code is built with regard to string functions is
problematic, since most files assume that it's configured with
sanitizers, but boot/string.c doesn't. This creates various problems
with the memset64() definitions, depending on whether the code is built
with sanitizers or with FORTIFY_SOURCE. This should probably be
streamlined, but in the meantime resolve the issues by introducing the
IN_BOOT_STRING_C macro, similar to the existing IN_ARCH_STRING_C macro.
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/boot/string.c | 16 ++++++++++++++++
arch/s390/include/asm/string.h | 20 +++++++++++++++-----
2 files changed, 31 insertions(+), 5 deletions(-)
diff --git a/arch/s390/boot/string.c b/arch/s390/boot/string.c
index faccb33b462c..f6b9b1df48a8 100644
--- a/arch/s390/boot/string.c
+++ b/arch/s390/boot/string.c
@@ -1,11 +1,18 @@
// SPDX-License-Identifier: GPL-2.0
+#define IN_BOOT_STRING_C 1
#include <linux/ctype.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#undef CONFIG_KASAN
#undef CONFIG_KASAN_GENERIC
+#undef CONFIG_KMSAN
#include "../lib/string.c"
+/*
+ * Duplicate some functions from the common lib/string.c
+ * instead of fully including it.
+ */
+
int strncmp(const char *cs, const char *ct, size_t count)
{
unsigned char c1, c2;
@@ -22,6 +29,15 @@ int strncmp(const char *cs, const char *ct, size_t count)
return 0;
}
+void *memset64(uint64_t *s, uint64_t v, size_t count)
+{
+ uint64_t *xs = s;
+
+ while (count--)
+ *xs++ = v;
+ return s;
+}
+
char *skip_spaces(const char *str)
{
while (isspace(*str))
diff --git a/arch/s390/include/asm/string.h b/arch/s390/include/asm/string.h
index 351685de53d2..2ab868cbae6c 100644
--- a/arch/s390/include/asm/string.h
+++ b/arch/s390/include/asm/string.h
@@ -15,15 +15,12 @@
#define __HAVE_ARCH_MEMCPY /* gcc builtin & arch function */
#define __HAVE_ARCH_MEMMOVE /* gcc builtin & arch function */
#define __HAVE_ARCH_MEMSET /* gcc builtin & arch function */
-#define __HAVE_ARCH_MEMSET16 /* arch function */
-#define __HAVE_ARCH_MEMSET32 /* arch function */
-#define __HAVE_ARCH_MEMSET64 /* arch function */
void *memcpy(void *dest, const void *src, size_t n);
void *memset(void *s, int c, size_t n);
void *memmove(void *dest, const void *src, size_t n);
-#ifndef CONFIG_KASAN
+#if !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN)
#define __HAVE_ARCH_MEMCHR /* inline & arch function */
#define __HAVE_ARCH_MEMCMP /* arch function */
#define __HAVE_ARCH_MEMSCAN /* inline & arch function */
@@ -36,6 +33,9 @@ void *memmove(void *dest, const void *src, size_t n);
#define __HAVE_ARCH_STRNCPY /* arch function */
#define __HAVE_ARCH_STRNLEN /* inline & arch function */
#define __HAVE_ARCH_STRSTR /* arch function */
+#define __HAVE_ARCH_MEMSET16 /* arch function */
+#define __HAVE_ARCH_MEMSET32 /* arch function */
+#define __HAVE_ARCH_MEMSET64 /* arch function */
/* Prototypes for non-inlined arch strings functions. */
int memcmp(const void *s1, const void *s2, size_t n);
@@ -44,7 +44,7 @@ size_t strlcat(char *dest, const char *src, size_t n);
char *strncat(char *dest, const char *src, size_t n);
char *strncpy(char *dest, const char *src, size_t n);
char *strstr(const char *s1, const char *s2);
-#endif /* !CONFIG_KASAN */
+#endif /* !defined(CONFIG_KASAN) && !defined(CONFIG_KMSAN) */
#undef __HAVE_ARCH_STRCHR
#undef __HAVE_ARCH_STRNCHR
@@ -74,20 +74,30 @@ void *__memset16(uint16_t *s, uint16_t v, size_t count);
void *__memset32(uint32_t *s, uint32_t v, size_t count);
void *__memset64(uint64_t *s, uint64_t v, size_t count);
+#ifdef __HAVE_ARCH_MEMSET16
static inline void *memset16(uint16_t *s, uint16_t v, size_t count)
{
return __memset16(s, v, count * sizeof(v));
}
+#endif
+#ifdef __HAVE_ARCH_MEMSET32
static inline void *memset32(uint32_t *s, uint32_t v, size_t count)
{
return __memset32(s, v, count * sizeof(v));
}
+#endif
+#ifdef __HAVE_ARCH_MEMSET64
+#ifdef IN_BOOT_STRING_C
+void *memset64(uint64_t *s, uint64_t v, size_t count);
+#else
static inline void *memset64(uint64_t *s, uint64_t v, size_t count)
{
return __memset64(s, v, count * sizeof(v));
}
+#endif
+#endif
#if !defined(IN_ARCH_STRING_C) && (!defined(CONFIG_FORTIFY_SOURCE) || defined(__NO_FORTIFY))
--
2.45.1
Diagnose 224 stores 4k bytes, which currently cannot be deduced from
the inline assembly constraints. This leads to KMSAN false positives.
Fix the constraints by using a 4k-sized struct instead of a raw
pointer. While at it, prettify them too.
Suggested-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/kernel/diag.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/s390/kernel/diag.c b/arch/s390/kernel/diag.c
index 8dee9aa0ec95..8a7009618ba7 100644
--- a/arch/s390/kernel/diag.c
+++ b/arch/s390/kernel/diag.c
@@ -278,12 +278,14 @@ int diag224(void *ptr)
int rc = -EOPNOTSUPP;
diag_stat_inc(DIAG_STAT_X224);
- asm volatile(
- " diag %1,%2,0x224\n"
- "0: lhi %0,0x0\n"
+ asm volatile("\n"
+ " diag %[type],%[addr],0x224\n"
+ "0: lhi %[rc],0\n"
"1:\n"
EX_TABLE(0b,1b)
- : "+d" (rc) :"d" (0), "d" (addr) : "memory");
+ : [rc] "+d" (rc)
+ , "=m" (*(struct { char buf[PAGE_SIZE]; } *)ptr)
+ : [type] "d" (0), [addr] "d" (addr));
return rc;
}
EXPORT_SYMBOL(diag224);
--
2.45.1
put_user() uses inline assembly with precise constraints, so Clang is
in principle capable of instrumenting it automatically. Unfortunately,
one of the constraints contains a dereferenced user pointer, and Clang
does not currently distinguish user and kernel pointers. Therefore
KMSAN attempts to access shadow for user pointers, which is not the
right thing to do.
The obvious fix of adding __no_sanitize_memory to __put_user_fn() does
not work, since it's __always_inline, and __always_inline cannot be
removed due to the __put_user_bad() trick.
Another obvious fix, using the "a" constraint instead of "+Q", degrades
the code quality, which is very important here, since it's a hot path.
Instead, repurpose the __put_user_asm() macro to define
__put_user_{char,short,int,long}_noinstr() functions and mark them with
__no_sanitize_memory. For the non-KMSAN builds make them
__always_inline in order to keep the generated code quality. Also
define __put_user_{char,short,int,long}() functions, which call the
aforementioned ones and which *are* instrumented, because they call
KMSAN hooks, which may be implemented as macros.
The same applies to get_user() as well.
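For instance, DEFINE_PUT_USER(char) expands to roughly the following
(a sketch; the inline assembly body is elided):

	static noinline __maybe_unused __no_sanitize_memory int
	__put_user_char_noinstr(unsigned char __user *to, unsigned char *from,
				unsigned long size)
	{
		/* uninstrumented MVCOS store to user space */
	}

	static __always_inline int
	__put_user_char(unsigned char __user *to, unsigned char *from,
			unsigned long size)
	{
		int rc = __put_user_char_noinstr(to, from, size);

		instrument_put_user(*from, to, size);
		return rc;
	}

On non-KMSAN builds GET_PUT_USER_NOINSTR_ATTRIBUTES is simply
__always_inline, so the generated code stays the same as before.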
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/include/asm/uaccess.h | 111 +++++++++++++++++++++++---------
1 file changed, 79 insertions(+), 32 deletions(-)
diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index 81ae8a98e7ec..c3c26dd1fc04 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -78,13 +78,24 @@ union oac {
int __noreturn __put_user_bad(void);
-#define __put_user_asm(to, from, size) \
-({ \
+#ifdef CONFIG_KMSAN
+#define GET_PUT_USER_NOINSTR_ATTRIBUTES \
+ noinline __maybe_unused __no_sanitize_memory
+#else
+#define GET_PUT_USER_NOINSTR_ATTRIBUTES __always_inline
+#endif
+
+#define DEFINE_PUT_USER(type) \
+static GET_PUT_USER_NOINSTR_ATTRIBUTES int \
+__put_user_##type##_noinstr(unsigned type __user *to, \
+ unsigned type *from, \
+ unsigned long size) \
+{ \
union oac __oac_spec = { \
.oac1.as = PSW_BITS_AS_SECONDARY, \
.oac1.a = 1, \
}; \
- int __rc; \
+ int rc; \
\
asm volatile( \
" lr 0,%[spec]\n" \
@@ -93,12 +104,28 @@ int __noreturn __put_user_bad(void);
"2:\n" \
EX_TABLE_UA_STORE(0b, 2b, %[rc]) \
EX_TABLE_UA_STORE(1b, 2b, %[rc]) \
- : [rc] "=&d" (__rc), [_to] "+Q" (*(to)) \
+ : [rc] "=&d" (rc), [_to] "+Q" (*(to)) \
: [_size] "d" (size), [_from] "Q" (*(from)), \
[spec] "d" (__oac_spec.val) \
: "cc", "0"); \
- __rc; \
-})
+ return rc; \
+} \
+ \
+static __always_inline int \
+__put_user_##type(unsigned type __user *to, unsigned type *from, \
+ unsigned long size) \
+{ \
+ int rc; \
+ \
+ rc = __put_user_##type##_noinstr(to, from, size); \
+ instrument_put_user(*from, to, size); \
+ return rc; \
+}
+
+DEFINE_PUT_USER(char);
+DEFINE_PUT_USER(short);
+DEFINE_PUT_USER(int);
+DEFINE_PUT_USER(long);
static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
{
@@ -106,24 +133,24 @@ static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned lon
switch (size) {
case 1:
- rc = __put_user_asm((unsigned char __user *)ptr,
- (unsigned char *)x,
- size);
+ rc = __put_user_char((unsigned char __user *)ptr,
+ (unsigned char *)x,
+ size);
break;
case 2:
- rc = __put_user_asm((unsigned short __user *)ptr,
- (unsigned short *)x,
- size);
+ rc = __put_user_short((unsigned short __user *)ptr,
+ (unsigned short *)x,
+ size);
break;
case 4:
- rc = __put_user_asm((unsigned int __user *)ptr,
+ rc = __put_user_int((unsigned int __user *)ptr,
(unsigned int *)x,
size);
break;
case 8:
- rc = __put_user_asm((unsigned long __user *)ptr,
- (unsigned long *)x,
- size);
+ rc = __put_user_long((unsigned long __user *)ptr,
+ (unsigned long *)x,
+ size);
break;
default:
__put_user_bad();
@@ -134,13 +161,17 @@ static __always_inline int __put_user_fn(void *x, void __user *ptr, unsigned lon
int __noreturn __get_user_bad(void);
-#define __get_user_asm(to, from, size) \
-({ \
+#define DEFINE_GET_USER(type) \
+static GET_PUT_USER_NOINSTR_ATTRIBUTES int \
+__get_user_##type##_noinstr(unsigned type *to, \
+ unsigned type __user *from, \
+ unsigned long size) \
+{ \
union oac __oac_spec = { \
.oac2.as = PSW_BITS_AS_SECONDARY, \
.oac2.a = 1, \
}; \
- int __rc; \
+ int rc; \
\
asm volatile( \
" lr 0,%[spec]\n" \
@@ -149,13 +180,29 @@ int __noreturn __get_user_bad(void);
"2:\n" \
EX_TABLE_UA_LOAD_MEM(0b, 2b, %[rc], %[_to], %[_ksize]) \
EX_TABLE_UA_LOAD_MEM(1b, 2b, %[rc], %[_to], %[_ksize]) \
- : [rc] "=&d" (__rc), "=Q" (*(to)) \
+ : [rc] "=&d" (rc), "=Q" (*(to)) \
: [_size] "d" (size), [_from] "Q" (*(from)), \
[spec] "d" (__oac_spec.val), [_to] "a" (to), \
[_ksize] "K" (size) \
: "cc", "0"); \
- __rc; \
-})
+ return rc; \
+} \
+ \
+static __always_inline int \
+__get_user_##type(unsigned type *to, unsigned type __user *from, \
+ unsigned long size) \
+{ \
+ int rc; \
+ \
+ rc = __get_user_##type##_noinstr(to, from, size); \
+ instrument_get_user(*to); \
+ return rc; \
+}
+
+DEFINE_GET_USER(char);
+DEFINE_GET_USER(short);
+DEFINE_GET_USER(int);
+DEFINE_GET_USER(long);
static __always_inline int __get_user_fn(void *x, const void __user *ptr, unsigned long size)
{
@@ -163,24 +210,24 @@ static __always_inline int __get_user_fn(void *x, const void __user *ptr, unsign
switch (size) {
case 1:
- rc = __get_user_asm((unsigned char *)x,
- (unsigned char __user *)ptr,
- size);
+ rc = __get_user_char((unsigned char *)x,
+ (unsigned char __user *)ptr,
+ size);
break;
case 2:
- rc = __get_user_asm((unsigned short *)x,
- (unsigned short __user *)ptr,
- size);
+ rc = __get_user_short((unsigned short *)x,
+ (unsigned short __user *)ptr,
+ size);
break;
case 4:
- rc = __get_user_asm((unsigned int *)x,
+ rc = __get_user_int((unsigned int *)x,
(unsigned int __user *)ptr,
size);
break;
case 8:
- rc = __get_user_asm((unsigned long *)x,
- (unsigned long __user *)ptr,
- size);
+ rc = __get_user_long((unsigned long *)x,
+ (unsigned long __user *)ptr,
+ size);
break;
default:
__get_user_bad();
--
2.45.1
s390 uses assembly code to initialize ftrace_regs and call
kprobe_ftrace_handler(). Therefore, from KMSAN's point of view,
ftrace_regs is poisoned on kprobe_ftrace_handler() entry. This causes
KMSAN warnings when running the ftrace testsuite.
Fix by trusting the assembly code and always unpoisoning ftrace_regs in
kprobe_ftrace_handler().
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/kernel/ftrace.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
index ddf2ee47cb87..0bd6adc40a34 100644
--- a/arch/s390/kernel/ftrace.c
+++ b/arch/s390/kernel/ftrace.c
@@ -12,6 +12,7 @@
#include <linux/ftrace.h>
#include <linux/kernel.h>
#include <linux/types.h>
+#include <linux/kmsan-checks.h>
#include <linux/kprobes.h>
#include <linux/execmem.h>
#include <trace/syscall.h>
@@ -303,6 +304,7 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
if (bit < 0)
return;
+ kmsan_unpoison_memory(fregs, sizeof(*fregs));
regs = ftrace_get_regs(fregs);
p = get_kprobe((kprobe_opcode_t *)ip);
if (!regs || unlikely(!p) || kprobe_disabled(p))
--
2.45.1
Building the kernel with CONFIG_SLUB_DEBUG and CONFIG_KMSAN causes
KMSAN to complain about touching redzones in kfree().
Fix by extending the existing KASAN-related metadata_access_enable()
and metadata_access_disable() functions to KMSAN.
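The resulting pattern around redzone accesses in mm/slub.c then looks
roughly like this (a sketch of an existing caller such as
check_bytes_and_report()):

	metadata_access_enable();
	fault = memchr_inv(start, value, bytes);
	metadata_access_disable();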
Acked-by: Vlastimil Babka <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
mm/slub.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index 4dd55cabe701..a290f6c63e7b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -829,10 +829,12 @@ static int disable_higher_order_debug;
static inline void metadata_access_enable(void)
{
kasan_disable_current();
+ kmsan_disable_current();
}
static inline void metadata_access_disable(void)
{
+ kmsan_enable_current();
kasan_enable_current();
}
--
2.45.1
Replace the x86-specific asm/pgtable_64_types.h #include with the
linux/pgtable.h one, which all architectures have.
While at it, sort the headers alphabetically for the sake of
consistency with other KMSAN code.
Fixes: f80be4571b19 ("kmsan: add KMSAN runtime core")
Suggested-by: Heiko Carstens <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
mm/kmsan/kmsan.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h
index a14744205435..adf443bcffe8 100644
--- a/mm/kmsan/kmsan.h
+++ b/mm/kmsan/kmsan.h
@@ -10,14 +10,14 @@
#ifndef __MM_KMSAN_KMSAN_H
#define __MM_KMSAN_KMSAN_H
-#include <asm/pgtable_64_types.h>
#include <linux/irqflags.h>
+#include <linux/mm.h>
+#include <linux/nmi.h>
+#include <linux/pgtable.h>
+#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>
-#include <linux/nmi.h>
-#include <linux/mm.h>
-#include <linux/printk.h>
#define KMSAN_ALLOCA_MAGIC_ORIGIN 0xabcd0100
#define KMSAN_CHAIN_MAGIC_ORIGIN 0xabcd0200
--
2.45.1
It's useful to have both tests and kmsan.panic=1 during development,
but right now the warnings that the tests cause lead to kernel
panics.
Temporarily set kmsan.panic=0 for the duration of the KMSAN testing.
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
mm/kmsan/kmsan_test.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
index 07d3a3a5a9c5..9bfd11674fe3 100644
--- a/mm/kmsan/kmsan_test.c
+++ b/mm/kmsan/kmsan_test.c
@@ -659,9 +659,13 @@ static void test_exit(struct kunit *test)
{
}
+static int orig_panic_on_kmsan;
+
static int kmsan_suite_init(struct kunit_suite *suite)
{
register_trace_console(probe_console, NULL);
+ orig_panic_on_kmsan = panic_on_kmsan;
+ panic_on_kmsan = 0;
return 0;
}
@@ -669,6 +673,7 @@ static void kmsan_suite_exit(struct kunit_suite *suite)
{
unregister_trace_console(probe_console, NULL);
tracepoint_synchronize_unregister();
+ panic_on_kmsan = orig_panic_on_kmsan;
}
static struct kunit_suite kmsan_test_suite = {
--
2.45.1
arch_kmsan_get_meta_or_null() finds the lowcore shadow by querying the
prefix and calling kmsan_get_metadata() again.
kmsan_virt_addr_valid() delegates to virt_addr_valid().
Reviewed-by: Alexander Potapenko <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/include/asm/kmsan.h | 43 +++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
create mode 100644 arch/s390/include/asm/kmsan.h
diff --git a/arch/s390/include/asm/kmsan.h b/arch/s390/include/asm/kmsan.h
new file mode 100644
index 000000000000..e572686d340c
--- /dev/null
+++ b/arch/s390/include/asm/kmsan.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_KMSAN_H
+#define _ASM_S390_KMSAN_H
+
+#include <asm/lowcore.h>
+#include <asm/page.h>
+#include <linux/kmsan.h>
+#include <linux/mmzone.h>
+#include <linux/stddef.h>
+
+#ifndef MODULE
+
+static inline bool is_lowcore_addr(void *addr)
+{
+ return addr >= (void *)&S390_lowcore &&
+ addr < (void *)(&S390_lowcore + 1);
+}
+
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+ if (is_lowcore_addr(addr)) {
+ /*
+ * Different lowcores accessed via S390_lowcore are described
+ * by the same struct page. Resolve the prefix manually in
+ * order to get a distinct struct page.
+ */
+ addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
+ (void *)&S390_lowcore;
+ if (WARN_ON_ONCE(is_lowcore_addr(addr)))
+ return NULL;
+ return kmsan_get_metadata(addr, is_origin);
+ }
+ return NULL;
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+ return virt_addr_valid(addr);
+}
+
+#endif /* !MODULE */
+
+#endif /* _ASM_S390_KMSAN_H */
--
2.45.1
All other sanitizers are disabled for boot as well. While at it, add a
comment explaining why we need this.
Reviewed-by: Alexander Gordeev <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
arch/s390/boot/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/s390/boot/Makefile b/arch/s390/boot/Makefile
index 070c9b2e905f..526ed20b9d31 100644
--- a/arch/s390/boot/Makefile
+++ b/arch/s390/boot/Makefile
@@ -3,11 +3,13 @@
# Makefile for the linux s390-specific parts of the memory manager.
#
+# Tooling runtimes are unavailable and cannot be linked for early boot code
KCOV_INSTRUMENT := n
GCOV_PROFILE := n
UBSAN_SANITIZE := n
KASAN_SANITIZE := n
KCSAN_SANITIZE := n
+KMSAN_SANITIZE := n
KBUILD_AFLAGS := $(KBUILD_AFLAGS_DECOMPRESSOR)
KBUILD_CFLAGS := $(KBUILD_CFLAGS_DECOMPRESSOR)
--
2.45.1
On s390 the virtual address 0 is valid (the current CPU's lowcore is mapped
there), therefore KMSAN should not complain about it.
Disable the respective check on s390. There doesn't seem to be a
Kconfig option to describe this situation, so explicitly check for
s390.
Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Ilya Leoshkevich <[email protected]>
---
mm/kmsan/init.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index 9de76ac7062c..3f8b1bbb9060 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -33,7 +33,10 @@ static void __init kmsan_record_future_shadow_range(void *start, void *end)
bool merged = false;
KMSAN_WARN_ON(future_index == NUM_FUTURE_RANGES);
- KMSAN_WARN_ON((nstart >= nend) || !nstart || !nend);
+ KMSAN_WARN_ON((nstart >= nend) ||
+ /* Virtual address 0 is valid on s390. */
+ (!IS_ENABLED(CONFIG_S390) && !nstart) ||
+ !nend);
nstart = ALIGN_DOWN(nstart, PAGE_SIZE);
nend = ALIGN(nend, PAGE_SIZE);
--
2.45.1