2023-12-14 05:57:19

by Nicholas Miehlbradt

Subject: [PATCH 00/13] kmsan: Enable on powerpc

This series provides the minimal support for Kernel Memory Sanitizer on
powerpc pseries le guests. Kernel Memory Sanitizer (KMSAN) is a tool which
detects uses of uninitialized memory. Currently KMSAN is clang-only.
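
For illustration (a hypothetical, minimal example, not code from this
series), this is the kind of bug KMSAN reports:

	#include <linux/uaccess.h>

	/* 'val' is never initialized, so copying it to userspace is
	 * reported by KMSAN as a kernel-infoleak / uninit-value. */
	static int example_read(void __user *ubuf)
	{
		int val;

		if (copy_to_user(ubuf, &val, sizeof(val)))
			return -EFAULT;
		return 0;
	}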

The clang support for powerpc has not yet been merged; the pull request can
be found at [1].

In addition to this series, there are a number of changes required in
generic kmsan code. These changes are already on mailing lists as part of
the series implementing KMSAN for s390 [2]. This series is intended to be
rebased on top of the s390 series.

In addition, I found a bug in the rtc driver used on powerpc. I have sent
a fix for this in a separate series [3].

With this series and the two series mentioned above, I can successfully
boot pseries le defconfig without KMSAN warnings. I have not tested other
powerpc platforms.

[1] https://github.com/llvm/llvm-project/pull/73611
[2] https://lore.kernel.org/linux-mm/[email protected]/
[3] https://lore.kernel.org/linux-rtc/[email protected]/

Nicholas Miehlbradt (13):
kmsan: Export kmsan_handle_dma
hvc: Fix use of uninitialized array in udbg_hvc_putc
powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
powerpc: Unpoison buffers populated by hcalls
powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
powerpc/kprobes: Unpoison instruction in kprobe struct
powerpc: Unpoison pt_regs
powerpc: Disable KMSAN checks on functions which walk the stack
powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
powerpc: Implement architecture specific KMSAN interface
powerpc/string: Add KMSAN support
powerpc: Enable KMSAN on powerpc

arch/powerpc/Kconfig | 3 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 42 +++++++++++++++
arch/powerpc/include/asm/interrupt.h | 2 +
arch/powerpc/include/asm/kmsan.h | 51 +++++++++++++++++++
arch/powerpc/include/asm/string.h | 18 ++++++-
arch/powerpc/kernel/Makefile | 2 +
arch/powerpc/kernel/irq_64.c | 2 +
arch/powerpc/kernel/kprobes.c | 2 +
arch/powerpc/kernel/module.c | 2 +-
arch/powerpc/kernel/process.c | 6 +--
arch/powerpc/kernel/stacktrace.c | 10 ++--
arch/powerpc/kernel/vdso/Makefile | 1 +
arch/powerpc/lib/Makefile | 2 +
arch/powerpc/lib/mem_64.S | 5 +-
arch/powerpc/lib/memcpy_64.S | 2 +
arch/powerpc/perf/callchain.c | 2 +-
arch/powerpc/platforms/pseries/hvconsole.c | 2 +
arch/powerpc/platforms/pseries/nvram.c | 4 ++
arch/powerpc/purgatory/Makefile | 1 +
arch/powerpc/sysdev/xive/spapr.c | 3 ++
drivers/tty/hvc/hvc_vio.c | 2 +-
mm/kmsan/hooks.c | 1 +
.../selftests/powerpc/copyloops/asm/kmsan.h | 0
.../powerpc/copyloops/linux/export.h | 1 +
24 files changed, 152 insertions(+), 14 deletions(-)
create mode 100644 arch/powerpc/include/asm/kmsan.h
create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h

--
2.40.1


2023-12-14 05:57:36

by Nicholas Miehlbradt

Subject: [PATCH 06/13] powerpc/pseries/nvram: Unpoison buffer populated by rtas_call

rtas_call provides a buffer where the return data should be placed. RTAS
initializes this buffer, but that initialization is not visible to KMSAN,
so unpoison the buffer manually.

Signed-off-by: Nicholas Miehlbradt <[email protected]>
---
arch/powerpc/platforms/pseries/nvram.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/nvram.c b/arch/powerpc/platforms/pseries/nvram.c
index 8130c37962c0..21a27d459347 100644
--- a/arch/powerpc/platforms/pseries/nvram.c
+++ b/arch/powerpc/platforms/pseries/nvram.c
@@ -14,6 +14,7 @@
#include <linux/ctype.h>
#include <linux/uaccess.h>
#include <linux/of.h>
+#include <linux/kmsan-checks.h>
#include <asm/nvram.h>
#include <asm/rtas.h>
#include <asm/machdep.h>
@@ -41,6 +42,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
int done;
unsigned long flags;
char *p = buf;
+ size_t l;


if (nvram_size == 0 || nvram_fetch == RTAS_UNKNOWN_SERVICE)
@@ -53,6 +55,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
if (i + count > nvram_size)
count = nvram_size - i;

+ l = count;
spin_lock_irqsave(&nvram_lock, flags);

for (; count != 0; count -= len) {
@@ -73,6 +76,7 @@ static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index)
}

spin_unlock_irqrestore(&nvram_lock, flags);
+ kmsan_unpoison_memory(buf, l);

*index = i;
return p - buf;
--
2.40.1

2023-12-14 05:57:48

by Nicholas Miehlbradt

Subject: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap

Splits the vmalloc region into four. The first quarter is the new
vmalloc region, the second is used to store shadow metadata and the
third is used to store origin metadata. The fourth quarter is unused.

Do the same for the ioremap region.
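
As a sketch of the resulting address mapping (my reading of the generic
KMSAN logic in mm/kmsan/shadow.c, not code added by this patch), for an
address addr in [VMALLOC_START, VMALLOC_END):

	unsigned long off = addr - VMALLOC_START;
	void *shadow = (void *)(KMSAN_VMALLOC_SHADOW_START + off);
	void *origin = (void *)(KMSAN_VMALLOC_ORIGIN_START + off);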

Module data is stored in the vmalloc region so alias the modules
metadata addresses to the respective vmalloc metadata addresses. Define
MODULES_VADDR and MODULES_END to the start and end of the vmalloc
region.

Since MODULES_VADDR was previously only defined on ppc32 targets, checks
for whether this macro is defined need to be updated to also include
defined(CONFIG_PPC32).

Signed-off-by: Nicholas Miehlbradt <[email protected]>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
arch/powerpc/kernel/module.c | 2 +-
2 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..b3a02b8d96e3 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -249,7 +249,38 @@ enum pgtable_index {
extern unsigned long __vmalloc_start;
extern unsigned long __vmalloc_end;
#define VMALLOC_START __vmalloc_start
+
+#ifndef CONFIG_KMSAN
#define VMALLOC_END __vmalloc_end
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages. The memory formerly
+ * belonging to vmalloc area is now laid out as follows:
+ *
+ * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
+ * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
+ * KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
+ * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
+ * KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
+ * 4th quarter: unused
+ */
+#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
+
+#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
+#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
+
+/*
+ * Module metadata is stored in the corresponding vmalloc metadata regions
+ */
+#define KMSAN_MODULES_SHADOW_START KMSAN_VMALLOC_SHADOW_START
+#define KMSAN_MODULES_ORIGIN_START KMSAN_VMALLOC_ORIGIN_START
+#endif /* CONFIG_KMSAN */
+
+#define MODULES_VADDR VMALLOC_START
+#define MODULES_END VMALLOC_END
+#define MODULES_LEN (MODULES_END - MODULES_VADDR)

static inline unsigned int ioremap_max_order(void)
{
@@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
extern unsigned long __kernel_io_end;
#define KERN_VIRT_START __kernel_virt_start
#define KERN_IO_START __kernel_io_start
+#ifndef CONFIG_KMSAN
#define KERN_IO_END __kernel_io_end
+#else
+/*
+ * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
+ * store metadata. See comment for vmalloc regions above.
+ */
+#define KERN_IO_LEN ((__kernel_io_end - __kernel_io_start) >> 2)
+#define KERN_IO_END (KERN_IO_START + KERN_IO_LEN)
+#define KERN_IO_SHADOW_START KERN_IO_END
+#define KERN_IO_ORIGIN_START (KERN_IO_SHADOW_START + KERN_IO_LEN)
+#endif /* !CONFIG_KMSAN */

extern struct page *vmemmap;
extern unsigned long pci_io_base;
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index f6d6ae0a1692..5043b959ad4d 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool

void *module_alloc(unsigned long size)
{
-#ifdef MODULES_VADDR
+#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
unsigned long limit = (unsigned long)_etext - SZ_32M;
void *ptr = NULL;

--
2.40.1

2023-12-14 05:57:55

by Nicholas Miehlbradt

Subject: [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface

arch_kmsan_get_meta_or_null finds the metadata addresses for addresses
in the ioremap region which is mapped separately on powerpc.

kmsan_virt_addr_valid is the same as virt_addr_valid except that it omits
the check that addr is less than high_memory, since this function can be
called on addresses higher than this.

Signed-off-by: Nicholas Miehlbradt <[email protected]>
---
arch/powerpc/include/asm/kmsan.h | 44 ++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
create mode 100644 arch/powerpc/include/asm/kmsan.h

diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
new file mode 100644
index 000000000000..bc84f6ff2ee9
--- /dev/null
+++ b/arch/powerpc/include/asm/kmsan.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * powerpc KMSAN support.
+ *
+ */
+
+#ifndef _ASM_POWERPC_KMSAN_H
+#define _ASM_POWERPC_KMSAN_H
+
+#ifndef __ASSEMBLY__
+#ifndef MODULE
+
+#include <linux/mmzone.h>
+#include <asm/page.h>
+#include <asm/book3s/64/pgtable.h>
+
+/*
+ * Functions below are declared in the header to make sure they are inlined.
+ * They all are called from kmsan_get_metadata() for every memory access in
+ * the kernel, so speed is important here.
+ */
+
+/*
+ * No powerpc specific metadata locations
+ */
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+ unsigned long addr64 = (unsigned long)addr, off;
+ if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {
+ off = addr64 - KERN_IO_START;
+ return (void *)off + (is_origin ? KERN_IO_ORIGIN_START : KERN_IO_SHADOW_START);
+ } else {
+ return 0;
+ }
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+ return (unsigned long)addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(addr));
+}
+
+#endif /* !MODULE */
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_POWERPC_KMSAN_H */
--
2.40.1

2023-12-14 05:58:00

by Nicholas Miehlbradt

Subject: [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct

KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
Manually unpoison it to prevent false positives.

Signed-off-by: Nicholas Miehlbradt <[email protected]>
---
arch/powerpc/kernel/kprobes.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index b20ee72e873a..1cbec54f2b6a 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -27,6 +27,7 @@
#include <asm/sections.h>
#include <asm/inst.h>
#include <linux/uaccess.h>
+#include <linux/kmsan-checks.h>

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -179,6 +180,7 @@ int arch_prepare_kprobe(struct kprobe *p)

if (!ret) {
patch_instruction(p->ainsn.insn, insn);
+ kmsan_unpoison_memory(p->ainsn.insn, sizeof(kprobe_opcode_t));
p->opcode = ppc_inst_val(insn);
}

--
2.40.1

2023-12-14 05:59:31

by Nicholas Miehlbradt

Subject: [PATCH 05/13] powerpc: Unpoison buffers populated by hcalls

plpar_hcall provides the hypervisor with a buffer where return data should
be placed. The hypervisor initializes these buffers, but the writes are not
visible to KMSAN, so unpoison them manually.

Signed-off-by: Nicholas Miehlbradt <[email protected]>
---
arch/powerpc/platforms/pseries/hvconsole.c | 2 ++
arch/powerpc/sysdev/xive/spapr.c | 3 +++
2 files changed, 5 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index 1ac52963e08b..7ad66acd5db8 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/errno.h>
+#include <linux/kmsan-checks.h>
#include <asm/hvcall.h>
#include <asm/hvconsole.h>
#include <asm/plpar_wrappers.h>
@@ -32,6 +33,7 @@ int hvc_get_chars(uint32_t vtermno, char *buf, int count)
unsigned long *lbuf = (unsigned long *)buf;

ret = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
+ kmsan_unpoison_memory(retbuf, sizeof(retbuf));
lbuf[0] = be64_to_cpu(retbuf[1]);
lbuf[1] = be64_to_cpu(retbuf[2]);

diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
index e45419264391..a9f48a336e4d 100644
--- a/arch/powerpc/sysdev/xive/spapr.c
+++ b/arch/powerpc/sysdev/xive/spapr.c
@@ -20,6 +20,7 @@
#include <linux/mm.h>
#include <linux/delay.h>
#include <linux/libfdt.h>
+#include <linux/kmsan-checks.h>

#include <asm/machdep.h>
#include <asm/prom.h>
@@ -191,6 +192,8 @@ static long plpar_int_get_source_info(unsigned long flags,
return rc;
}

+ kmsan_unpoison_memory(retbuf, sizeof(retbuf));
+
*src_flags = retbuf[0];
*eoi_page = retbuf[1];
*trig_page = retbuf[2];
--
2.40.1

2023-12-14 06:09:59

by Nicholas Miehlbradt

Subject: [PATCH 01/13] kmsan: Export kmsan_handle_dma

kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
so that the drivers can be compiled as modules.

Signed-off-by: Nicholas Miehlbradt <[email protected]>
---
mm/kmsan/hooks.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 7a30274b893c..3532d9275ca5 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
size -= to_go;
}
}
+EXPORT_SYMBOL(kmsan_handle_dma);

void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
enum dma_data_direction dir)
--
2.40.1

2023-12-14 09:17:45

by Christophe Leroy

Subject: Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :
> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
>
> Do the same for the ioremap region.
>
> Module data is stored in the vmalloc region so alias the modules
> metadata addresses to the respective vmalloc metadata addresses. Define
> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
> region.
>
> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
> for whether this macro is defined need to be updated to also include
> defined(CONFIG_PPC32).

Why?

In your case MODULES_VADDR is above PAGE_OFFSET so there should be no
difference.

Christophe

>
> Signed-off-by: Nicholas Miehlbradt <[email protected]>
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
> arch/powerpc/kernel/module.c | 2 +-
> 2 files changed, 43 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index cb77eddca54b..b3a02b8d96e3 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -249,7 +249,38 @@ enum pgtable_index {
> extern unsigned long __vmalloc_start;
> extern unsigned long __vmalloc_end;
> #define VMALLOC_START __vmalloc_start
> +
> +#ifndef CONFIG_KMSAN
> #define VMALLOC_END __vmalloc_end
> +#else
> +/*
> + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
> + * are used to keep the metadata for virtual pages. The memory formerly
> + * belonging to vmalloc area is now laid out as follows:
> + *
> + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
> + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
> + * KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
> + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
> + * KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
> + * 4th quarter: unused
> + */
> +#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
> +#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
> +
> +#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
> +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
> +
> +/*
> + * Module metadata is stored in the corresponding vmalloc metadata regions
> + */
> +#define KMSAN_MODULES_SHADOW_START KMSAN_VMALLOC_SHADOW_START
> +#define KMSAN_MODULES_ORIGIN_START KMSAN_VMALLOC_ORIGIN_START
> +#endif /* CONFIG_KMSAN */
> +
> +#define MODULES_VADDR VMALLOC_START
> +#define MODULES_END VMALLOC_END
> +#define MODULES_LEN (MODULES_END - MODULES_VADDR)
>
> static inline unsigned int ioremap_max_order(void)
> {
> @@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
> extern unsigned long __kernel_io_end;
> #define KERN_VIRT_START __kernel_virt_start
> #define KERN_IO_START __kernel_io_start
> +#ifndef CONFIG_KMSAN
> #define KERN_IO_END __kernel_io_end
> +#else
> +/*
> + * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
> + * store metadata. See comment for vmalloc regions above.
> + */
> +#define KERN_IO_LEN ((__kernel_io_end - __kernel_io_start) >> 2)
> +#define KERN_IO_END (KERN_IO_START + KERN_IO_LEN)
> +#define KERN_IO_SHADOW_START KERN_IO_END
> +#define KERN_IO_ORIGIN_START (KERN_IO_SHADOW_START + KERN_IO_LEN)
> +#endif /* !CONFIG_KMSAN */
>
> extern struct page *vmemmap;
> extern unsigned long pci_io_base;
> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
> index f6d6ae0a1692..5043b959ad4d 100644
> --- a/arch/powerpc/kernel/module.c
> +++ b/arch/powerpc/kernel/module.c
> @@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
>
> void *module_alloc(unsigned long size)
> {
> -#ifdef MODULES_VADDR
> +#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
> unsigned long limit = (unsigned long)_etext - SZ_32M;
> void *ptr = NULL;
>

2023-12-14 09:21:12

by Christophe Leroy

Subject: Re: [PATCH 11/13] powerpc: Implement architecture specific KMSAN interface



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :
> arch_kmsan_get_meta_or_null finds the metadata addresses for addresses
> in the ioremap region which is mapped separately on powerpc.
>
> kmsan_virt_addr_valid is the same as virt_addr_valid except that it omits
> the check that addr is less than high_memory, since this function can be
> called on addresses higher than this.
>
> Signed-off-by: Nicholas Miehlbradt <[email protected]>
> ---
> arch/powerpc/include/asm/kmsan.h | 44 ++++++++++++++++++++++++++++++++
> 1 file changed, 44 insertions(+)
> create mode 100644 arch/powerpc/include/asm/kmsan.h
>
> diff --git a/arch/powerpc/include/asm/kmsan.h b/arch/powerpc/include/asm/kmsan.h
> new file mode 100644
> index 000000000000..bc84f6ff2ee9
> --- /dev/null
> +++ b/arch/powerpc/include/asm/kmsan.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * powerpc KMSAN support.
> + *
> + */
> +
> +#ifndef _ASM_POWERPC_KMSAN_H
> +#define _ASM_POWERPC_KMSAN_H
> +
> +#ifndef __ASSEMBLY__
> +#ifndef MODULE
> +
> +#include <linux/mmzone.h>
> +#include <asm/page.h>
> +#include <asm/book3s/64/pgtable.h>
> +
> +/*
> + * Functions below are declared in the header to make sure they are inlined.
> + * They all are called from kmsan_get_metadata() for every memory access in
> + * the kernel, so speed is important here.
> + */
> +
> +/*
> + * No powerpc specific metadata locations
> + */
> +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> +{
> + unsigned long addr64 = (unsigned long)addr, off;

Missing blank line.

> + if (KERN_IO_START <= addr64 && addr64 < KERN_IO_END) {

off is only used in that block, so it should be declared there; this can be
done as a single line (followed by a blank line too):

unsigned long off = addr64 - KERN_IO_START;

> + off = addr64 - KERN_IO_START;
> + return (void *)off + (is_origin ? KERN_IO_ORIGIN_START : KERN_IO_SHADOW_START);
> + } else {
> + return 0;
> + }
> +}
> +
> +static inline bool kmsan_virt_addr_valid(void *addr)
> +{
> + return (unsigned long)addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(addr));
> +}
> +
> +#endif /* !MODULE */
> +#endif /* !__ASSEMBLY__ */
> +#endif /* _ASM_POWERPC_KMSAN_H */

2023-12-15 07:59:27

by Naveen N Rao

Subject: Re: [PATCH 07/13] powerpc/kprobes: Unpoison instruction in kprobe struct

On Thu, Dec 14, 2023 at 05:55:33AM +0000, Nicholas Miehlbradt wrote:
> KMSAN does not unpoison the ainsn field of a kprobe struct correctly.
> Manually unpoison it to prevent false positives.
>
> Signed-off-by: Nicholas Miehlbradt <[email protected]>
> ---
> arch/powerpc/kernel/kprobes.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index b20ee72e873a..1cbec54f2b6a 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -27,6 +27,7 @@
> #include <asm/sections.h>
> #include <asm/inst.h>
> #include <linux/uaccess.h>
> +#include <linux/kmsan-checks.h>
>
> DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -179,6 +180,7 @@ int arch_prepare_kprobe(struct kprobe *p)
>
> if (!ret) {
> patch_instruction(p->ainsn.insn, insn);
> + kmsan_unpoison_memory(p->ainsn.insn, sizeof(kprobe_opcode_t));

kprobe_opcode_t is u32, but we could be probing a prefixed instruction.
You can pass the instruction length through ppc_inst_len(insn).
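
i.e. something along the lines of (untested):

	kmsan_unpoison_memory(p->ainsn.insn, ppc_inst_len(insn));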


- Naveen

2023-12-15 09:27:57

by Aneesh Kumar K.V

Subject: Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap

Nicholas Miehlbradt <[email protected]> writes:

> Splits the vmalloc region into four. The first quarter is the new
> vmalloc region, the second is used to store shadow metadata and the
> third is used to store origin metadata. The fourth quarter is unused.
>

Do we support KMSAN for both hash and radix? If hash is not supported,
can we then use radix.h for these changes?

> Do the same for the ioremap region.
>
> Module data is stored in the vmalloc region so alias the modules
> metadata addresses to the respective vmalloc metadata addresses. Define
> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
> region.
>
> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
> for whether this macro is defined need to be updated to also include
> defined(CONFIG_PPC32).

-aneesh

2024-01-10 03:55:14

by Nicholas Miehlbradt

Subject: Re: [PATCH 10/13] powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap



On 14/12/2023 8:17 pm, Christophe Leroy wrote:
>
>
> Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :
>> Splits the vmalloc region into four. The first quarter is the new
>> vmalloc region, the second is used to store shadow metadata and the
>> third is used to store origin metadata. The fourth quarter is unused.
>>
>> Do the same for the ioremap region.
>>
>> Module data is stored in the vmalloc region so alias the modules
>> metadata addresses to the respective vmalloc metadata addresses. Define
>> MODULES_VADDR and MODULES_END to the start and end of the vmalloc
>> region.
>>
>> Since MODULES_VADDR was previously only defined on ppc32 targets, checks
>> for whether this macro is defined need to be updated to also include
>> defined(CONFIG_PPC32).
>
> Why?
>
> In your case MODULES_VADDR is above PAGE_OFFSET so there should be no
> difference.
>
> Christophe
>
On 64-bit builds the BUILD_BUG always triggers since MODULES_VADDR
expands to __vmalloc_start, which is defined in a different translation
unit. I can restrict the #ifdef CONFIG_PPC32 to just around the
BUILD_BUG since, as you pointed out, there is no difference otherwise.
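
Roughly (a sketch only; the elided lines and the BUILD_BUG_ON condition
are my reading of module_alloc(), not part of this series):

	#ifdef MODULES_VADDR
		...
	#ifdef CONFIG_PPC32
		/* MODULES_VADDR is only a compile-time constant on ppc32 */
		BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
	#endif
		...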
>>
>> Signed-off-by: Nicholas Miehlbradt <[email protected]>
>> ---
>> arch/powerpc/include/asm/book3s/64/pgtable.h | 42 ++++++++++++++++++++
>> arch/powerpc/kernel/module.c | 2 +-
>> 2 files changed, 43 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index cb77eddca54b..b3a02b8d96e3 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -249,7 +249,38 @@ enum pgtable_index {
>> extern unsigned long __vmalloc_start;
>> extern unsigned long __vmalloc_end;
>> #define VMALLOC_START __vmalloc_start
>> +
>> +#ifndef CONFIG_KMSAN
>> #define VMALLOC_END __vmalloc_end
>> +#else
>> +/*
>> + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
>> + * are used to keep the metadata for virtual pages. The memory formerly
>> + * belonging to vmalloc area is now laid out as follows:
>> + *
>> + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
>> + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
>> + * KMSAN_VMALLOC_SHADOW_START+VMALLOC_LEN - vmalloc area shadow
>> + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
>> + * KMSAN_VMALLOC_ORIGIN_START+VMALLOC_LEN - vmalloc area origins
>> + * 4th quarter: unused
>> + */
>> +#define VMALLOC_LEN ((__vmalloc_end - __vmalloc_start) >> 2)
>> +#define VMALLOC_END (VMALLOC_START + VMALLOC_LEN)
>> +
>> +#define KMSAN_VMALLOC_SHADOW_START VMALLOC_END
>> +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_END + VMALLOC_LEN)
>> +
>> +/*
>> + * Module metadata is stored in the corresponding vmalloc metadata regions
>> + */
>> +#define KMSAN_MODULES_SHADOW_START KMSAN_VMALLOC_SHADOW_START
>> +#define KMSAN_MODULES_ORIGIN_START KMSAN_VMALLOC_ORIGIN_START
>> +#endif /* CONFIG_KMSAN */
>> +
>> +#define MODULES_VADDR VMALLOC_START
>> +#define MODULES_END VMALLOC_END
>> +#define MODULES_LEN (MODULES_END - MODULES_VADDR)
>>
>> static inline unsigned int ioremap_max_order(void)
>> {
>> @@ -264,7 +295,18 @@ extern unsigned long __kernel_io_start;
>> extern unsigned long __kernel_io_end;
>> #define KERN_VIRT_START __kernel_virt_start
>> #define KERN_IO_START __kernel_io_start
>> +#ifndef CONFIG_KMSAN
>> #define KERN_IO_END __kernel_io_end
>> +#else
>> +/*
>> + * In KMSAN builds IO space is 4 times smaller, the remaining space is used to
>> + * store metadata. See comment for vmalloc regions above.
>> + */
>> +#define KERN_IO_LEN ((__kernel_io_end - __kernel_io_start) >> 2)
>> +#define KERN_IO_END (KERN_IO_START + KERN_IO_LEN)
>> +#define KERN_IO_SHADOW_START KERN_IO_END
>> +#define KERN_IO_ORIGIN_START (KERN_IO_SHADOW_START + KERN_IO_LEN)
>> +#endif /* !CONFIG_KMSAN */
>>
>> extern struct page *vmemmap;
>> extern unsigned long pci_io_base;
>> diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
>> index f6d6ae0a1692..5043b959ad4d 100644
>> --- a/arch/powerpc/kernel/module.c
>> +++ b/arch/powerpc/kernel/module.c
>> @@ -107,7 +107,7 @@ __module_alloc(unsigned long size, unsigned long start, unsigned long end, bool
>>
>> void *module_alloc(unsigned long size)
>> {
>> -#ifdef MODULES_VADDR
>> +#if defined(MODULES_VADDR) && defined(CONFIG_PPC32)
>> unsigned long limit = (unsigned long)_etext - SZ_32M;
>> void *ptr = NULL;
>>

2024-02-19 19:37:29

by Christophe Leroy

Subject: Re: [PATCH 01/13] kmsan: Export kmsan_handle_dma



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :
> kmsan_handle_dma is required by virtio drivers. Export kmsan_handle_dma
> so that the drivers can be compiled as modules.
>
> Signed-off-by: Nicholas Miehlbradt <[email protected]>
> ---
> mm/kmsan/hooks.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
> index 7a30274b893c..3532d9275ca5 100644
> --- a/mm/kmsan/hooks.c
> +++ b/mm/kmsan/hooks.c
> @@ -358,6 +358,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
> size -= to_go;
> }
> }
> +EXPORT_SYMBOL(kmsan_handle_dma);

virtio is GPL and all exports inside virtio are EXPORT_SYMBOL_GPL().
Should this one be _GPL as well?
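
i.e.:

	EXPORT_SYMBOL_GPL(kmsan_handle_dma);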

>
> void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
> enum dma_data_direction dir)

2024-02-20 06:39:44

by Christophe Leroy

Subject: Re: [PATCH 00/13] kmsan: Enable on powerpc



Le 14/12/2023 à 06:55, Nicholas Miehlbradt a écrit :
> This series provides the minimal support for Kernel Memory Sanitizer on
> powerpc pseries le guests. Kernel Memory Sanitizer (KMSAN) is a tool which
> detects uses of uninitialized memory. Currently KMSAN is clang-only.
>
> The clang support for powerpc has not yet been merged; the pull request can
> be found at [1].

As clang doesn't support it yet, it is probably premature to merge
this into the kernel.

I have opened https://github.com/linuxppc/issues/issues/475 to follow up.

In the meantime I am flagging this series as "changes requested" so it can
be revisited when the time comes.

Christophe

>
> In addition to this series, there are a number of changes required in
> generic kmsan code. These changes are already on mailing lists as part of
> the series implementing KMSAN for s390 [2]. This series is intended to be
> rebased on top of the s390 series.
>
> In addition, I found a bug in the rtc driver used on powerpc. I have sent
> a fix for this in a separate series [3].
>
> With this series and the two series mentioned above, I can successfully
> boot pseries le defconfig without KMSAN warnings. I have not tested other
> powerpc platforms.
>
> [1] https://github.com/llvm/llvm-project/pull/73611
> [2] https://lore.kernel.org/linux-mm/[email protected]/
> [3] https://lore.kernel.org/linux-rtc/[email protected]/
>
> Nicholas Miehlbradt (13):
> kmsan: Export kmsan_handle_dma
> hvc: Fix use of uninitialized array in udbg_hvc_putc
> powerpc: Disable KMSAN sanitization for prom_init, vdso and purgatory
> powerpc: Disable CONFIG_DCACHE_WORD_ACCESS when KMSAN is enabled
> powerpc: Unpoison buffers populated by hcalls
> powerpc/pseries/nvram: Unpoison buffer populated by rtas_call
> powerpc/kprobes: Unpoison instruction in kprobe struct
> powerpc: Unpoison pt_regs
> powerpc: Disable KMSAN checks on functions which walk the stack
> powerpc: Define KMSAN metadata address ranges for vmalloc and ioremap
> powerpc: Implement architecture specific KMSAN interface
> powerpc/string: Add KMSAN support
> powerpc: Enable KMSAN on powerpc
>
> arch/powerpc/Kconfig | 3 +-
> arch/powerpc/include/asm/book3s/64/pgtable.h | 42 +++++++++++++++
> arch/powerpc/include/asm/interrupt.h | 2 +
> arch/powerpc/include/asm/kmsan.h | 51 +++++++++++++++++++
> arch/powerpc/include/asm/string.h | 18 ++++++-
> arch/powerpc/kernel/Makefile | 2 +
> arch/powerpc/kernel/irq_64.c | 2 +
> arch/powerpc/kernel/kprobes.c | 2 +
> arch/powerpc/kernel/module.c | 2 +-
> arch/powerpc/kernel/process.c | 6 +--
> arch/powerpc/kernel/stacktrace.c | 10 ++--
> arch/powerpc/kernel/vdso/Makefile | 1 +
> arch/powerpc/lib/Makefile | 2 +
> arch/powerpc/lib/mem_64.S | 5 +-
> arch/powerpc/lib/memcpy_64.S | 2 +
> arch/powerpc/perf/callchain.c | 2 +-
> arch/powerpc/platforms/pseries/hvconsole.c | 2 +
> arch/powerpc/platforms/pseries/nvram.c | 4 ++
> arch/powerpc/purgatory/Makefile | 1 +
> arch/powerpc/sysdev/xive/spapr.c | 3 ++
> drivers/tty/hvc/hvc_vio.c | 2 +-
> mm/kmsan/hooks.c | 1 +
> .../selftests/powerpc/copyloops/asm/kmsan.h | 0
> .../powerpc/copyloops/linux/export.h | 1 +
> 24 files changed, 152 insertions(+), 14 deletions(-)
> create mode 100644 arch/powerpc/include/asm/kmsan.h
> create mode 100644 tools/testing/selftests/powerpc/copyloops/asm/kmsan.h
>