Hi,
This patchset implements the kernel address sanitizer for ppc64.
Since the ppc64 virtual address range is divided into different regions,
we can't have one contiguous area for the kasan shadow range. Hence
we don't support the INLINE kasan instrumentation. With OUTLINE
instrumentation, we override the shadow_to_mem and mem_to_shadow
callbacks, so that we map only the kernel linear range (i.e.,
the region with ID 0xc). For the regions with IDs 0xd and 0xf (vmalloc
and vmemmap) we return the address of the zero page. This
works because kasan doesn't track the vmemmap and vmalloc addresses.
Known issues:
* Kasan is not yet enabled for arch/powerpc/kvm
* kexec hang
* outline stack and global support
Once we fix the kexec hang, we can look at merging the ppc64 patch.
IMHO the kasan changes can be reviewed/merged earlier.
Aneesh Kumar K.V (10):
powerpc/mm: Add virt_to_pfn and use this instead of opencoding
kasan: MODULES_VADDR is not available on all archs
kasan: Rename kasan_enabled to kasan_report_enabled
kasan: Don't use kasan shadow pointer in generic functions
kasan: Enable arch to hook into kasan callbacks.
kasan: Allow arch to override kasan shadow offsets
kasan: Make INLINE KASan support arch selectable
kasan: Update feature support file
kasan: Prevent deadlock in kasan reporting
powerpc/mm: kasan: Add kasan support for ppc64
.../debug/KASAN/KASAN_INLINE/arch-support.txt | 40 ++++++++++++
.../KASAN/{ => KASAN_OUTLINE}/arch-support.txt | 0
arch/powerpc/include/asm/kasan.h | 74 ++++++++++++++++++++++
arch/powerpc/include/asm/page.h | 5 +-
arch/powerpc/include/asm/pgtable-ppc64.h | 1 +
arch/powerpc/include/asm/ppc_asm.h | 10 +++
arch/powerpc/include/asm/string.h | 13 ++++
arch/powerpc/kernel/Makefile | 5 ++
arch/powerpc/kernel/prom_init_check.sh | 2 +-
arch/powerpc/kernel/setup_64.c | 3 +
arch/powerpc/kvm/Makefile | 1 +
arch/powerpc/lib/mem_64.S | 6 +-
arch/powerpc/lib/memcpy_64.S | 3 +-
arch/powerpc/lib/ppc_ksyms.c | 10 +++
arch/powerpc/mm/Makefile | 7 ++
arch/powerpc/mm/kasan_init.c | 44 +++++++++++++
arch/powerpc/mm/slb_low.S | 4 ++
arch/powerpc/platforms/Kconfig.cputype | 1 +
arch/x86/Kconfig | 1 +
include/linux/kasan.h | 3 +
lib/Kconfig.kasan | 2 +
mm/kasan/kasan.c | 9 +++
mm/kasan/kasan.h | 20 +++++-
mm/kasan/report.c | 29 ++++++---
scripts/Makefile.kasan | 28 ++++----
25 files changed, 290 insertions(+), 31 deletions(-)
create mode 100644 Documentation/features/debug/KASAN/KASAN_INLINE/arch-support.txt
rename Documentation/features/debug/KASAN/{ => KASAN_OUTLINE}/arch-support.txt (100%)
create mode 100644 arch/powerpc/include/asm/kasan.h
create mode 100644 arch/powerpc/mm/kasan_init.c
--
2.5.0
This adds a virt_to_pfn helper and removes the open-coded usage of the
same.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
arch/powerpc/include/asm/page.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 71294a6e976e..168ca67e39b3 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -127,9 +127,10 @@ extern long long virt_phys_offset;
#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && (pfn) < max_mapnr)
#endif
-#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
+#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
-#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
/*
* On Book-E parts we need __va to parse the device tree and we can't
--
2.5.0
Conditionalize the check using #ifdef
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
mm/kasan/report.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index e07c94fbd0ac..71ce7548d914 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -85,9 +85,14 @@ static void print_error_description(struct kasan_access_info *info)
static inline bool kernel_or_module_addr(const void *addr)
{
- return (addr >= (void *)_stext && addr < (void *)_end)
- || (addr >= (void *)MODULES_VADDR
- && addr < (void *)MODULES_END);
+ if (addr >= (void *)_stext && addr < (void *)_end)
+ return true;
+#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
+ if (addr >= (void *)MODULES_VADDR
+ && addr < (void *)MODULES_END)
+ return true;
+#endif
+ return false;
}
static inline bool init_task_stack_addr(const void *addr)
--
2.5.0
The function only disables/enables reporting. In a later patch
we will be adding a kasan early enable/disable. Rename kasan_enabled
to properly reflect its function.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
mm/kasan/kasan.h | 2 +-
mm/kasan/report.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index c242adf6bc85..a6b46cc94907 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -63,7 +63,7 @@ static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
<< KASAN_SHADOW_SCALE_SHIFT);
}
-static inline bool kasan_enabled(void)
+static inline bool kasan_report_enabled(void)
{
return !current->kasan_depth;
}
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 71ce7548d914..d19d01823a68 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -225,7 +225,7 @@ void kasan_report(unsigned long addr, size_t size,
{
struct kasan_access_info info;
- if (likely(!kasan_enabled()))
+ if (likely(!kasan_report_enabled()))
return;
info.access_addr = (void *)addr;
--
2.5.0
We can't use generic functions like print_hex_dump to access the kasan
shadow region. Doing so would require us to set up another kasan shadow
region for the address passed (the kasan shadow address), which most
architectures won't be able to do. Hence remove the kasan shadow region
dump. If we really want to keep it, we will need a kasan-internal
implementation of print_hex_dump for which we disable address sanitizer
operation.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
mm/kasan/report.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index d19d01823a68..79fbc5d14bd2 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -170,12 +170,6 @@ static void print_shadow_for_address(const void *addr)
snprintf(buffer, sizeof(buffer),
(i == 0) ? ">%p: " : " %p: ", kaddr);
- kasan_disable_current();
- print_hex_dump(KERN_ERR, buffer,
- DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
- shadow_row, SHADOW_BYTES_PER_ROW, 0);
- kasan_enable_current();
-
if (row_is_guilty(shadow_row, shadow))
pr_err("%*c\n",
shadow_pointer_offset(shadow_row, shadow),
--
2.5.0
We add enable/disable callbacks in this patch which architectures
can implement. We will use this in later patches for architectures
like ppc64 that cannot have an early zero-page-based kasan shadow
region for the entire virtual address space. Such architectures also
cannot use inline kasan support.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
mm/kasan/kasan.c | 9 +++++++++
mm/kasan/kasan.h | 15 +++++++++++++++
2 files changed, 24 insertions(+)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 7b28e9cdf1c7..e4d33afd0eaf 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -43,6 +43,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
{
void *shadow_start, *shadow_end;
+ if (!kasan_enabled())
+ return;
+
shadow_start = kasan_mem_to_shadow(address);
shadow_end = kasan_mem_to_shadow(address + size);
@@ -51,6 +54,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
void kasan_unpoison_shadow(const void *address, size_t size)
{
+ if (!kasan_enabled())
+ return;
+
kasan_poison_shadow(address, size, 0);
if (size & KASAN_SHADOW_MASK) {
@@ -238,6 +244,9 @@ static __always_inline void check_memory_region(unsigned long addr,
{
struct kasan_access_info info;
+ if (!kasan_enabled())
+ return;
+
if (unlikely(size == 0))
return;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index a6b46cc94907..deb547d5a916 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -68,6 +68,21 @@ static inline bool kasan_report_enabled(void)
return !current->kasan_depth;
}
+#ifndef kasan_enabled
+/*
+ * Some archs may want to disable kasan callbacks.
+ */
+static inline bool kasan_enabled(void)
+{
+ return true;
+}
+#define kasan_enabled kasan_enabled
+#else
+#ifdef CONFIG_KASAN_INLINE
+#error "Kasan inline support cannot work with KASAN arch hooks"
+#endif
+#endif
+
void kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
--
2.5.0
Some archs may want to provide the kasan shadow memory as a constant
offset from the address. Such archs, even though they cannot use inline
kasan support, can work with outline kasan support.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
include/linux/kasan.h | 3 +++
mm/kasan/kasan.h | 3 +++
2 files changed, 6 insertions(+)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5486d777b706..e458ca64cdaf 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -15,11 +15,14 @@ struct vm_struct;
#include <asm/kasan.h>
#include <linux/sched.h>
+#ifndef kasan_mem_to_shadow
static inline void *kasan_mem_to_shadow(const void *addr)
{
return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ KASAN_SHADOW_OFFSET;
}
+#define kasan_mem_to_shadow kasan_mem_to_shadow
+#endif
/* Enable reporting bugs after kasan_disable_current() */
static inline void kasan_enable_current(void)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index deb547d5a916..c0686f2b1224 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -57,11 +57,14 @@ struct kasan_global {
void kasan_report_error(struct kasan_access_info *info);
void kasan_report_user_access(struct kasan_access_info *info);
+#ifndef kasan_shadow_to_mem
static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
{
return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
<< KASAN_SHADOW_SCALE_SHIFT);
}
+#define kasan_shadow_to_mem kasan_shadow_to_mem
+#endif
static inline bool kasan_report_enabled(void)
{
--
2.5.0
Some archs may find it difficult to support the inline KASan
mode. Add HAVE_ARCH_KASAN_INLINE so that we can disable inline
support at config time.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
arch/x86/Kconfig | 1 +
lib/Kconfig.kasan | 2 ++
scripts/Makefile.kasan | 28 ++++++++++++++--------------
3 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b3a1a5d77d92..4416f80580fb 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -78,6 +78,7 @@ config X86
select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
+ select HAVE_ARCH_KASAN_INLINE if X86_64 && SPARSEMEM_VMEMMAP
select HAVE_ARCH_KGDB
select HAVE_ARCH_KMEMCHECK
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 39f24d6721e5..e9d1bb1175b8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -32,6 +32,7 @@ config KASAN_OUTLINE
however it doesn't bloat size of kernel's .text section so
much as inline does.
+if HAVE_ARCH_KASAN_INLINE
config KASAN_INLINE
bool "Inline instrumentation"
help
@@ -40,6 +41,7 @@ config KASAN_INLINE
it gives about x2 boost over outline instrumentation), but
make kernel's .text size much bigger.
This requires a gcc version of 5.0 or later.
+endif
endchoice
diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 3f874d24234f..c1c06e9e107a 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -1,29 +1,29 @@
ifdef CONFIG_KASAN
-ifdef CONFIG_KASAN_INLINE
- call_threshold := 10000
-else
- call_threshold := 0
-endif
-
-CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
- -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
- --param asan-stack=1 --param asan-globals=1 \
- --param asan-instrumentation-with-call-threshold=$(call_threshold))
-
-ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),)
+ --param asan-instrumentation-with-call-threshold=0)
+ifeq ($(CFLAGS_KASAN),)
ifneq ($(CONFIG_COMPILE_TEST),y)
$(warning Cannot use CONFIG_KASAN: \
-fsanitize=kernel-address is not supported by compiler)
endif
else
- ifeq ($(CFLAGS_KASAN),)
+
+ ifdef CONFIG_KASAN_INLINE
+ CFLAGS_KASAN_INLINE := $(call cc-option, -fsanitize=kernel-address \
+ -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
+ --param asan-stack=1 --param asan-globals=1 \
+ --param asan-instrumentation-with-call-threshold=10000)
+
+ ifeq ($(CFLAGS_KASAN_INLINE),)
ifneq ($(CONFIG_COMPILE_TEST),y)
$(warning CONFIG_KASAN: compiler does not support all options.\
Trying minimal configuration)
endif
- CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
+ else
+ CFLAGS_KASAN := $(CFLAGS_KASAN_INLINE)
endif
+ endif
+
endif
endif
--
2.5.0
Now that we have two features, KASAN and KASAN_INLINE, add a new
feature support file for the latter.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
.../debug/KASAN/KASAN_INLINE/arch-support.txt | 40 ++++++++++++++++++++++
.../KASAN/{ => KASAN_OUTLINE}/arch-support.txt | 0
2 files changed, 40 insertions(+)
create mode 100644 Documentation/features/debug/KASAN/KASAN_INLINE/arch-support.txt
rename Documentation/features/debug/KASAN/{ => KASAN_OUTLINE}/arch-support.txt (100%)
diff --git a/Documentation/features/debug/KASAN/KASAN_INLINE/arch-support.txt b/Documentation/features/debug/KASAN/KASAN_INLINE/arch-support.txt
new file mode 100644
index 000000000000..834818fba910
--- /dev/null
+++ b/Documentation/features/debug/KASAN/KASAN_INLINE/arch-support.txt
@@ -0,0 +1,40 @@
+#
+# Feature name: KASAN_INLINE
+# Kconfig: HAVE_ARCH_KASAN_INLINE
+# description: arch supports the KASAN runtime memory checker
+#
+ -----------------------
+ | arch |status|
+ -----------------------
+ | alpha: | TODO |
+ | arc: | TODO |
+ | arm: | TODO |
+ | arm64: | TODO |
+ | avr32: | TODO |
+ | blackfin: | TODO |
+ | c6x: | TODO |
+ | cris: | TODO |
+ | frv: | TODO |
+ | h8300: | TODO |
+ | hexagon: | TODO |
+ | ia64: | TODO |
+ | m32r: | TODO |
+ | m68k: | TODO |
+ | metag: | TODO |
+ | microblaze: | TODO |
+ | mips: | TODO |
+ | mn10300: | TODO |
+ | nios2: | TODO |
+ | openrisc: | TODO |
+ | parisc: | TODO |
+ | powerpc: | TODO |
+ | s390: | TODO |
+ | score: | TODO |
+ | sh: | TODO |
+ | sparc: | TODO |
+ | tile: | TODO |
+ | um: | TODO |
+ | unicore32: | TODO |
+ | x86: | ok |
+ | xtensa: | TODO |
+ -----------------------
diff --git a/Documentation/features/debug/KASAN/arch-support.txt b/Documentation/features/debug/KASAN/KASAN_OUTLINE/arch-support.txt
similarity index 100%
rename from Documentation/features/debug/KASAN/arch-support.txt
rename to Documentation/features/debug/KASAN/KASAN_OUTLINE/arch-support.txt
--
2.5.0
If we end up calling kasan_report in real mode, even the shadow mapping
for the spinlock variable will show as poisoned. This will result
in us calling kasan_report_error with the report_lock spinlock held.
To prevent this, disable kasan reporting while we are printing a
kasan error.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
mm/kasan/report.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 79fbc5d14bd2..82b41eb83e43 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -185,6 +185,10 @@ void kasan_report_error(struct kasan_access_info *info)
{
unsigned long flags;
+ /*
+ * Make sure we don't end up in loop.
+ */
+ kasan_disable_current();
spin_lock_irqsave(&report_lock, flags);
pr_err("================================="
"=================================\n");
@@ -194,12 +198,17 @@ void kasan_report_error(struct kasan_access_info *info)
pr_err("================================="
"=================================\n");
spin_unlock_irqrestore(&report_lock, flags);
+ kasan_enable_current();
}
void kasan_report_user_access(struct kasan_access_info *info)
{
unsigned long flags;
+ /*
+ * Make sure we don't end up in loop.
+ */
+ kasan_disable_current();
spin_lock_irqsave(&report_lock, flags);
pr_err("================================="
"=================================\n");
@@ -212,6 +221,7 @@ void kasan_report_user_access(struct kasan_access_info *info)
pr_err("================================="
"=================================\n");
spin_unlock_irqrestore(&report_lock, flags);
+ kasan_enable_current();
}
void kasan_report(unsigned long addr, size_t size,
--
2.5.0
We use the region with region ID 0xe as the kasan shadow region. Since
we use the hash page table, we can't have the early zero page based shadow
region support. Hence we disable kasan in the early code and enable it at
runtime. We could improve the condition using static keys (but
that is for a later patch). We also can't support inline instrumentation
because our kernel mapping doesn't give us a large enough free window
to map the entire range. For the VMALLOC and VMEMMAP regions we just
return a zero page instead of having a translation bolted into the
htab. This simplifies handling the VMALLOC and VMEMMAP areas. Kasan does
not track either region as of now.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
---
arch/powerpc/include/asm/kasan.h | 74 ++++++++++++++++++++++++++++++++
arch/powerpc/include/asm/pgtable-ppc64.h | 1 +
arch/powerpc/include/asm/ppc_asm.h | 10 +++++
arch/powerpc/include/asm/string.h | 13 ++++++
arch/powerpc/kernel/Makefile | 5 +++
arch/powerpc/kernel/prom_init_check.sh | 2 +-
arch/powerpc/kernel/setup_64.c | 3 ++
arch/powerpc/kvm/Makefile | 1 +
arch/powerpc/lib/mem_64.S | 6 ++-
arch/powerpc/lib/memcpy_64.S | 3 +-
arch/powerpc/lib/ppc_ksyms.c | 10 +++++
arch/powerpc/mm/Makefile | 7 +++
arch/powerpc/mm/kasan_init.c | 44 +++++++++++++++++++
arch/powerpc/mm/slb_low.S | 4 ++
arch/powerpc/platforms/Kconfig.cputype | 1 +
15 files changed, 180 insertions(+), 4 deletions(-)
create mode 100644 arch/powerpc/include/asm/kasan.h
create mode 100644 arch/powerpc/mm/kasan_init.c
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
new file mode 100644
index 000000000000..51e76e698bb9
--- /dev/null
+++ b/arch/powerpc/include/asm/kasan.h
@@ -0,0 +1,74 @@
+#ifndef __ASM_KASAN_H
+#define __ASM_KASAN_H
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_KASAN
+/*
+ * KASAN_SHADOW_START: We use a new region for kasan mapping
+ * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
+ */
+#define KASAN_SHADOW_START (KASAN_REGION_ID << REGION_SHIFT)
+#define KASAN_SHADOW_END (KASAN_SHADOW_START + (1UL << (PGTABLE_RANGE - 3)))
+/*
+ * This value is used to map an address to the corresponding shadow
+ * address by the following formula:
+ * shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
+ *
+ * This applies to the linear mapping.
+ * Hence 0xc000000000000000 -> 0xe000000000000000
+ * We use an internal zero page as the shadow address for the vmalloc
+ * and vmemmap regions, since we don't track either of them now.
+ *
+ */
+#define KASAN_SHADOW_KERNEL_OFFSET ((KASAN_REGION_ID << REGION_SHIFT) - \
+ (KERNEL_REGION_ID << (REGION_SHIFT - 3)))
+
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+#define kasan_mem_to_shadow kasan_mem_to_shadow
+static inline void *kasan_mem_to_shadow(const void *addr)
+{
+ unsigned long offset = 0;
+
+ switch (REGION_ID(addr)) {
+ case KERNEL_REGION_ID:
+ offset = KASAN_SHADOW_KERNEL_OFFSET;
+ break;
+ default:
+ return (void *)kasan_zero_page;
+ }
+ return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + offset;
+}
+
+#define kasan_shadow_to_mem kasan_shadow_to_mem
+static inline void *kasan_shadow_to_mem(const void *shadow_addr)
+{
+ unsigned long offset = 0;
+
+ switch (REGION_ID(shadow_addr)) {
+ case KASAN_REGION_ID:
+ offset = KASAN_SHADOW_KERNEL_OFFSET;
+ break;
+ default:
+ pr_err("Shadow memory whose origin not found %p\n", shadow_addr);
+ BUG();
+ }
+ return (void *)(((unsigned long)shadow_addr - offset)
+ << KASAN_SHADOW_SCALE_SHIFT);
+}
+
+#define kasan_enabled kasan_enabled
+extern bool __kasan_enabled;
+static inline bool kasan_enabled(void)
+{
+ return __kasan_enabled;
+}
+
+void kasan_init(void);
+#else
+static inline void kasan_init(void) { }
+#endif
+
+#endif
+#endif
diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 3bb7488bd24b..369ce5442aa6 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -80,6 +80,7 @@
#define KERNEL_REGION_ID (REGION_ID(PAGE_OFFSET))
#define VMEMMAP_REGION_ID (0xfUL) /* Server only */
#define USER_REGION_ID (0UL)
+#define KASAN_REGION_ID (0xeUL) /* Server only */
/*
* Defines the address of the vmemap area, in its own region on
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index dd0fc18d8103..e75ae67e804e 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -226,6 +226,11 @@ name:
#define DOTSYM(a) a
+#define KASAN_OVERRIDE(x, y) \
+ .weak x; \
+ .set x, y
+
+
#else
#define XGLUE(a,b) a##b
@@ -263,6 +268,11 @@ GLUE(.,name):
#define DOTSYM(a) GLUE(.,a)
+#define KASAN_OVERRIDE(x, y) \
+ .weak x; \
+ .set x, y; \
+ .weak DOTSYM(x); \
+ .set DOTSYM(x), DOTSYM(y)
#endif
#else /* 32-bit */
diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index e40010abcaf1..b10a4c01cdbf 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -27,6 +27,19 @@ extern void * memmove(void *,const void *,__kernel_size_t);
extern int memcmp(const void *,const void *,__kernel_size_t);
extern void * memchr(const void *,int,__kernel_size_t);
+extern void * __memset(void *, int, __kernel_size_t);
+extern void * __memcpy(void *, const void *, __kernel_size_t);
+extern void * __memmove(void *, const void *, __kernel_size_t);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_STRING_H */
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 12868b1c4e05..7b205628fd1b 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -26,6 +26,11 @@ CFLAGS_REMOVE_ftrace.o = -pg -mno-sched-epilog
CFLAGS_REMOVE_time.o = -pg -mno-sched-epilog
endif
+KASAN_SANITIZE_prom_init.o := n
+KASAN_SANITIZE_align.o := n
+KASAN_SANITIZE_dbell.o := n
+KASAN_SANITIZE_setup_64.o := n
+
obj-y := cputable.o ptrace.o syscalls.o \
irq.o align.o signal_32.o pmc.o vdso.o \
process.o systbl.o idle.o \
diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh
index 12640f7e726b..e25777956123 100644
--- a/arch/powerpc/kernel/prom_init_check.sh
+++ b/arch/powerpc/kernel/prom_init_check.sh
@@ -17,7 +17,7 @@
# it to the list below:
WHITELIST="add_reloc_offset __bss_start __bss_stop copy_and_flush
-_end enter_prom memcpy memset reloc_offset __secondary_hold
+_end enter_prom __memcpy __memset memcpy memset reloc_offset __secondary_hold
__secondary_hold_acknowledge __secondary_hold_spinloop __start
strcmp strcpy strlcpy strlen strncmp strstr logo_linux_clut224
reloc_got2 kernstart_addr memstart_addr linux_banner _stext
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index bdcbb716f4d6..4b766638ead9 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -69,6 +69,7 @@
#include <asm/kvm_ppc.h>
#include <asm/hugetlb.h>
#include <asm/epapr_hcalls.h>
+#include <asm/kasan.h>
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
@@ -708,6 +709,8 @@ void __init setup_arch(char **cmdline_p)
/* Initialize the MMU context management stuff */
mmu_context_init();
+ kasan_init();
+
/* Interrupt code needs to be 64K-aligned */
if ((unsigned long)_stext & 0xffff)
panic("Kernelbase not 64K-aligned (0x%lx)!\n",
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index 0570eef83fba..26288d16899e 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -3,6 +3,7 @@
#
subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
+KASAN_SANITIZE :=n
ccflags-y := -Ivirt/kvm -Iarch/powerpc/kvm
KVM := ../../../virt/kvm
diff --git a/arch/powerpc/lib/mem_64.S b/arch/powerpc/lib/mem_64.S
index 43435c6892fb..0e1f811babdd 100644
--- a/arch/powerpc/lib/mem_64.S
+++ b/arch/powerpc/lib/mem_64.S
@@ -12,7 +12,8 @@
#include <asm/errno.h>
#include <asm/ppc_asm.h>
-_GLOBAL(memset)
+KASAN_OVERRIDE(memset,__memset)
+_GLOBAL(__memset)
neg r0,r3
rlwimi r4,r4,8,16,23
andi. r0,r0,7 /* # bytes to be 8-byte aligned */
@@ -77,7 +78,8 @@ _GLOBAL(memset)
stb r4,0(r6)
blr
-_GLOBAL_TOC(memmove)
+KASAN_OVERRIDE(memmove,__memmove)
+_GLOBAL_TOC(__memmove)
cmplw 0,r3,r4
bgt backwards_memcpy
b memcpy
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index 32a06ec395d2..396b44181ec1 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -10,7 +10,8 @@
#include <asm/ppc_asm.h>
.align 7
-_GLOBAL_TOC(memcpy)
+KASAN_OVERRIDE(memcpy,__memcpy)
+_GLOBAL_TOC(__memcpy)
BEGIN_FTR_SECTION
#ifdef __LITTLE_ENDIAN__
cmpdi cr7,r5,0
diff --git a/arch/powerpc/lib/ppc_ksyms.c b/arch/powerpc/lib/ppc_ksyms.c
index c7f8e9586316..3a27b08bee26 100644
--- a/arch/powerpc/lib/ppc_ksyms.c
+++ b/arch/powerpc/lib/ppc_ksyms.c
@@ -9,6 +9,16 @@ EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL(memchr);
+#ifdef CONFIG_PPC64
+/*
+ * These symbols are needed with kasan. We only
+ * have that enabled for ppc64 now.
+ */
+EXPORT_SYMBOL(__memcpy);
+EXPORT_SYMBOL(__memset);
+EXPORT_SYMBOL(__memmove);
+#endif
+
EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strncpy);
EXPORT_SYMBOL(strcat);
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 3eb73a38220d..ad7d589c7e44 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -6,6 +6,11 @@ subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
+KASAN_SANITIZE_kasan_init.o := n
+KASAN_SANITIZE_hash_utils_64.o := n
+KASAN_SANITIZE_hugetlbpage.o := n
+KASAN_SANITIZE_slb.o := n
+
obj-y := fault.o mem.o pgtable.o mmap.o \
init_$(CONFIG_WORD_SIZE).o \
pgtable_$(CONFIG_WORD_SIZE).o
@@ -37,3 +42,5 @@ obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
obj-$(CONFIG_HIGHMEM) += highmem.o
obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
obj-$(CONFIG_SPAPR_TCE_IOMMU) += mmu_context_iommu.o
+
+obj-$(CONFIG_KASAN) += kasan_init.o
diff --git a/arch/powerpc/mm/kasan_init.c b/arch/powerpc/mm/kasan_init.c
new file mode 100644
index 000000000000..9deba6019fbf
--- /dev/null
+++ b/arch/powerpc/mm/kasan_init.c
@@ -0,0 +1,44 @@
+#define pr_fmt(fmt) "kasan: " fmt
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/kasan.h>
+
+bool __kasan_enabled = false;
+unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
+void __init kasan_init(void)
+{
+ unsigned long k_start, k_end;
+ struct memblock_region *reg;
+ unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
+
+
+ for_each_memblock(memory, reg) {
+ void *p;
+ void *start = __va(reg->base);
+ void *end = __va(reg->base + reg->size);
+ int node = pfn_to_nid(virt_to_pfn(start));
+
+ if (start >= end)
+ break;
+
+ k_start = (unsigned long)kasan_mem_to_shadow(start);
+ k_end = (unsigned long)kasan_mem_to_shadow(end);
+ for (; k_start < k_end; k_start += page_size) {
+ p = vmemmap_alloc_block(page_size, node);
+ if (!p) {
+ pr_info("Disabled Kasan, for lack of free mem\n");
+ /* Free the stuff or panic ? */
+ return;
+ }
+ htab_bolt_mapping(k_start, k_start + page_size,
+ __pa(p), pgprot_val(PAGE_KERNEL),
+ mmu_vmemmap_psize, mmu_kernel_ssize);
+ }
+ }
+ /*
+ * At this point kasan is fully initialized. Enable error messages
+ */
+ init_task.kasan_depth = 0;
+ __kasan_enabled = true;
+ pr_info("Kernel address sanitizer initialized\n");
+}
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index 736d18b3cefd..154bd8a0b437 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -80,11 +80,15 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
/* Check virtual memmap region. To be patches at kernel boot */
cmpldi cr0,r9,0xf
bne 1f
+2:
.globl slb_miss_kernel_load_vmemmap
slb_miss_kernel_load_vmemmap:
li r11,0
b 6f
1:
+ /* Kasan region same as vmemmap mapping */
+ cmpldi cr0,r9,0xe
+ beq 2b
#endif /* CONFIG_SPARSEMEM_VMEMMAP */
/* vmalloc mapping gets the encoding from the PACA as the mapping
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index c140e94c7c72..7a7c9d54f80e 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -75,6 +75,7 @@ config PPC_BOOK3S_64
select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
select ARCH_SUPPORTS_NUMA_BALANCING
select IRQ_WORK
+ select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP
config PPC_BOOK3E_64
bool "Embedded processors"
--
2.5.0
I missed cherry-picking the updated version of this patch before sending
the series out.
commit aeb324e09d95c189eda4ce03790da94b535d1dfc
Author: Aneesh Kumar K.V <[email protected]>
Date: Fri Aug 14 12:28:58 2015 +0530
kasan: Don't use kasan shadow pointer in generic functions
We can't use generic functions like print_hex_dump to access the kasan
shadow region. Doing so would require us to set up another kasan shadow
region for the address passed (the kasan shadow address), which most
architectures won't be able to do. Hence make a copy of the shadow
region row and pass that to the generic functions.
Signed-off-by: Aneesh Kumar K.V <[email protected]>
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index d19d01823a68..60fdb0413f3b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -166,14 +166,20 @@ static void print_shadow_for_address(const void *addr)
for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
const void *kaddr = kasan_shadow_to_mem(shadow_row);
char buffer[4 + (BITS_PER_LONG/8)*2];
+ char shadow_buf[SHADOW_BYTES_PER_ROW];
snprintf(buffer, sizeof(buffer),
(i == 0) ? ">%p: " : " %p: ", kaddr);
-
+ /*
+ * We should not pass a shadow pointer to generic
+ * functions, because generic functions may try to
+ * access the kasan mapping for the passed address.
+ */
+ memcpy(shadow_buf, shadow_row, SHADOW_BYTES_PER_ROW);
kasan_disable_current();
print_hex_dump(KERN_ERR, buffer,
DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
- shadow_row, SHADOW_BYTES_PER_ROW, 0);
+ shadow_buf, SHADOW_BYTES_PER_ROW, 0);
kasan_enable_current();
if (row_is_guilty(shadow_row, shadow))
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> Hi,
>
> This patchset implements kernel address sanitizer for ppc64.
> Since ppc64 virtual address range is divided into different regions,
> we can't have one contigous area for the kasan shadow range. Hence
> we don't support the INLINE kasan instrumentation. With Outline
> instrumentation, we override the shadow_to_mem and mem_to_shadow
> callbacks, so that we map only the kernel linear range (ie,
> region with ID 0xc). For region with ID 0xd and 0xf (vmalloc
> and vmemmap ) we return the address of the zero page. This
> works because kasan doesn't track both vmemmap and vmalloc address.
>
> Known issues:
> * Kasan is not yet enabled for arch/powerpc/kvm
> * kexec hang
> * outline stack and global support
>
Is there any problem with globals, or did you just not try it yet?
I think it should just work. You only need to add --param
asan-globals=0 to KBUILD_CFLAGS_MODULE
to disable it for modules.
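Andrey's suggestion, sketched in the style of the kernel's kbuild files
(untested; exactly where this line should live in the kernel Makefiles is
an assumption):

```make
# Untested sketch: keep asan-globals for the kernel image, but turn it
# off for modules, which live outside the bolted shadow mapping.
KBUILD_CFLAGS_MODULE += $(call cc-option, --param asan-globals=0)
```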
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> Conditionalize the check using #ifdef
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
> ---
> mm/kasan/report.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index e07c94fbd0ac..71ce7548d914 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -85,9 +85,14 @@ static void print_error_description(struct kasan_access_info *info)
>
> static inline bool kernel_or_module_addr(const void *addr)
> {
> - return (addr >= (void *)_stext && addr < (void *)_end)
> - || (addr >= (void *)MODULES_VADDR
> - && addr < (void *)MODULES_END);
> + if (addr >= (void *)_stext && addr < (void *)_end)
> + return true;
> +#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
> + if (addr >= (void *)MODULES_VADDR
> + && addr < (void *)MODULES_END)
> + return true;
> +#endif
I don't think that this is the correct change.
On ppc64 modules are in VMALLOC, so you should check for this.
Yes, we don't handle VMALLOC now, but we will at some point.
So I think we should use is_module_address() here.
It will be slower, but we don't care about performance in the error reporting path.
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> The function only disables/enables reporting. In a later patch
> we will be adding a kasan early enable/disable. Rename kasan_enabled
> to properly reflect its function.
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Andrey Ryabinin <[email protected]> writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>> Hi,
>>
>> This patchset implements kernel address sanitizer for ppc64.
>> Since ppc64 virtual address range is divided into different regions,
>> we can't have one contiguous area for the kasan shadow range. Hence
>> we don't support the INLINE kasan instrumentation. With Outline
>> instrumentation, we override the shadow_to_mem and mem_to_shadow
>> callbacks, so that we map only the kernel linear range (i.e., the
>> region with ID 0xc). For regions with ID 0xd and 0xf (vmalloc
>> and vmemmap) we return the address of the zero page. This
>> works because kasan doesn't track either vmemmap or vmalloc addresses.
>>
>> Known issues:
>> * Kasan is not yet enabled for arch/powerpc/kvm
>> * kexec hang
>> * outline stack and global support
>>
>
> Is there any problem with globals, or did you just not try it yet?
> I think it should just work. You only need to add --param
> asan-globals=0 to KBUILD_CFLAGS_MODULE
> to disable it for modules.
I am hitting BUG_ON in early vmalloc code. I still haven't got time to
debug it further. Should get to that soon.
-aneesh
2015-08-26 11:54 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>
> I missed cherry-picking the updated version of this patch before sending
> the series out.
>
> commit aeb324e09d95c189eda4ce03790da94b535d1dfc
> Author: Aneesh Kumar K.V <[email protected]>
> Date: Fri Aug 14 12:28:58 2015 +0530
>
> kasan: Don't use kasan shadow pointer in generic functions
>
> We can't use generic functions like print_hex_dump to access kasan
> shadow region. This requires us to set up another kasan shadow region
> for the address passed (kasan shadow address). Most architectures won't
> be able to do that. Hence make a copy of the shadow region row and
> pass that to generic functions.
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index d19d01823a68..60fdb0413f3b 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -166,14 +166,20 @@ static void print_shadow_for_address(const void *addr)
> for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
> const void *kaddr = kasan_shadow_to_mem(shadow_row);
> char buffer[4 + (BITS_PER_LONG/8)*2];
> + char shadow_buf[SHADOW_BYTES_PER_ROW];
>
> snprintf(buffer, sizeof(buffer),
> (i == 0) ? ">%p: " : " %p: ", kaddr);
> -
> + /*
> + * We should not pass a shadow pointer to generic
> + * function, because generic functions may try to
> + * kasan mapping for the passed address.
may try to *access* kasan mapping?
> + */
> + memcpy(shadow_buf, shadow_row, SHADOW_BYTES_PER_ROW);
> kasan_disable_current();
> print_hex_dump(KERN_ERR, buffer,
> DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
> - shadow_row, SHADOW_BYTES_PER_ROW, 0);
> + shadow_buf, SHADOW_BYTES_PER_ROW, 0);
> kasan_enable_current();
>
> if (row_is_guilty(shadow_row, shadow))
>
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> We add enable/disable callbacks in this patch which architectures
> can implement. We will use this in later patches for architectures
> like ppc64, that cannot have early zero page kasan shadow region for the
> entire virtual address space. Such architectures also cannot use
> inline kasan support.
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
> ---
> mm/kasan/kasan.c | 9 +++++++++
> mm/kasan/kasan.h | 15 +++++++++++++++
> 2 files changed, 24 insertions(+)
>
> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
> index 7b28e9cdf1c7..e4d33afd0eaf 100644
> --- a/mm/kasan/kasan.c
> +++ b/mm/kasan/kasan.c
> @@ -43,6 +43,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
> {
> void *shadow_start, *shadow_end;
>
> + if (!kasan_enabled())
> + return;
> +
By the time this function is called we should already have shadow,
so it should be safe to remove this check.
> shadow_start = kasan_mem_to_shadow(address);
> shadow_end = kasan_mem_to_shadow(address + size);
>
> @@ -51,6 +54,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
>
> void kasan_unpoison_shadow(const void *address, size_t size)
> {
> + if (!kasan_enabled())
> + return;
> +
Ditto.
> kasan_poison_shadow(address, size, 0);
>
> if (size & KASAN_SHADOW_MASK) {
> @@ -238,6 +244,9 @@ static __always_inline void check_memory_region(unsigned long addr,
> {
> struct kasan_access_info info;
>
> + if (!kasan_enabled())
> + return;
> +
> if (unlikely(size == 0))
> return;
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index a6b46cc94907..deb547d5a916 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -68,6 +68,21 @@ static inline bool kasan_report_enabled(void)
> return !current->kasan_depth;
> }
>
> +#ifndef kasan_enabled
> +/*
> + * Some archs may want to disable kasan callbacks.
> + */
> +static inline bool kasan_enabled(void)
> +{
> + return true;
> +}
> +#define kasan_enabled kasan_enabled
Why do we need this define?
> +#else
> +#ifdef CONFIG_KASAN_INLINE
> +#error "Kasan inline support cannot work with KASAN arch hooks"
> +#endif
> +#endif
> +
> void kasan_report(unsigned long addr, size_t size,
> bool is_write, unsigned long ip);
>
> --
> 2.5.0
>
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> Some archs may want to provide kasan shadow memory at a constant
> offset from the address. Even though such archs cannot use inline kasan
> support, they can work with out-of-line kasan support.
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
> ---
> include/linux/kasan.h | 3 +++
> mm/kasan/kasan.h | 3 +++
> 2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 5486d777b706..e458ca64cdaf 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -15,11 +15,14 @@ struct vm_struct;
> #include <asm/kasan.h>
> #include <linux/sched.h>
>
> +#ifndef kasan_mem_to_shadow
> static inline void *kasan_mem_to_shadow(const void *addr)
> {
> return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
> + KASAN_SHADOW_OFFSET;
> }
> +#define kasan_mem_to_shadow kasan_mem_to_shadow
Again, why do we need this define here? I think it should be safe to remove it.
> +#endif
>
> /* Enable reporting bugs after kasan_disable_current() */
> static inline void kasan_enable_current(void)
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index deb547d5a916..c0686f2b1224 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -57,11 +57,14 @@ struct kasan_global {
> void kasan_report_error(struct kasan_access_info *info);
> void kasan_report_user_access(struct kasan_access_info *info);
>
> +#ifndef kasan_shadow_to_mem
> static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
> {
> return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
> << KASAN_SHADOW_SCALE_SHIFT);
> }
> +#define kasan_shadow_to_mem kasan_shadow_to_mem
ditto
> +#endif
>
> static inline bool kasan_report_enabled(void)
> {
> --
> 2.5.0
>
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> Some of the archs may find it difficult to support inline KASan
> mode. Add HAVE_ARCH_KASAN_INLINE so that we can disable inline
> support at config time.
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
> ---
> arch/x86/Kconfig | 1 +
> lib/Kconfig.kasan | 2 ++
> scripts/Makefile.kasan | 28 ++++++++++++++--------------
> 3 files changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index b3a1a5d77d92..4416f80580fb 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -78,6 +78,7 @@ config X86
> select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
> select HAVE_ARCH_JUMP_LABEL
> select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
> + select HAVE_ARCH_KASAN_INLINE if X86_64 && SPARSEMEM_VMEMMAP
This will not work because config HAVE_ARCH_KASAN_INLINE is not defined.
Instead you can just add the following in this file:
config HAVE_ARCH_KASAN_INLINE
def_bool y
depends on KASAN
> select HAVE_ARCH_KGDB
> select HAVE_ARCH_KMEMCHECK
> select HAVE_ARCH_SECCOMP_FILTER
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 39f24d6721e5..e9d1bb1175b8 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -32,6 +32,7 @@ config KASAN_OUTLINE
> however it doesn't bloat size of kernel's .text section so
> much as inline does.
>
> +if HAVE_ARCH_KASAN_INLINE
> config KASAN_INLINE
> bool "Inline instrumentation"
depends on HAVE_ARCH_KASAN_INLINE
> help
> @@ -40,6 +41,7 @@ config KASAN_INLINE
> it gives about x2 boost over outline instrumentation), but
> make kernel's .text size much bigger.
> This requires a gcc version of 5.0 or later.
> +endif
>
> endchoice
>
> diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
> index 3f874d24234f..c1c06e9e107a 100644
> --- a/scripts/Makefile.kasan
> +++ b/scripts/Makefile.kasan
> @@ -1,29 +1,29 @@
> ifdef CONFIG_KASAN
> -ifdef CONFIG_KASAN_INLINE
> - call_threshold := 10000
> -else
> - call_threshold := 0
> -endif
> -
> -CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
>
> CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
> - -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
> - --param asan-stack=1 --param asan-globals=1 \
> - --param asan-instrumentation-with-call-threshold=$(call_threshold))
> -
> -ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),)
> + --param asan-instrumentation-with-call-threshold=0)
> +ifeq ($(CFLAGS_KASAN),)
> ifneq ($(CONFIG_COMPILE_TEST),y)
> $(warning Cannot use CONFIG_KASAN: \
> -fsanitize=kernel-address is not supported by compiler)
> endif
> else
> - ifeq ($(CFLAGS_KASAN),)
> +
> + ifdef CONFIG_KASAN_INLINE
> + CFLAGS_KASAN_INLINE := $(call cc-option, -fsanitize=kernel-address \
> + -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
> + --param asan-stack=1 --param asan-globals=1 \
> + --param asan-instrumentation-with-call-threshold=10000)
> +
> + ifeq ($(CFLAGS_KASAN_INLINE),)
> ifneq ($(CONFIG_COMPILE_TEST),y)
> $(warning CONFIG_KASAN: compiler does not support all options.\
> Trying minimal configuration)
> endif
> - CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
> + else
> + CFLAGS_KASAN := $(CFLAGS_KASAN_INLINE)
> endif
> + endif
> +
This removes stack and globals instrumentation for CONFIG_KASAN_OUTLINE=y. Why?
Those are completely separate features, so this patch shouldn't touch
this Makefile at all.
A 'depends on HAVE_ARCH_KASAN_INLINE' in CONFIG_KASAN_INLINE should be enough.
But you need to disable 'asan-stack' and 'asan-globals' for ppc64.
I'd suggest to introduce CFLAGS_ARCH_KASAN.
Define it in ppc64 Makefile:
CFLAGS_ARCH_KASAN := --param asan-globals=0 --param asan-stack=0
and add these flags to CFLAGS_KASAN_MINIMAL and CFLAGS_KASAN in Makefile.kasan.
> endif
> endif
> --
> 2.5.0
>
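Concretely, the suggestion above might look like this. This is only a sketch: CFLAGS_ARCH_KASAN is the hypothetical variable name proposed in the review, and the Makefile.kasan lines follow the pre-patch structure:

```make
# arch/powerpc/Makefile (sketch): disable the features outline
# instrumentation can't support yet on ppc64.
CFLAGS_ARCH_KASAN := --param asan-globals=0 --param asan-stack=0

# scripts/Makefile.kasan (sketch): fold the arch overrides into both
# the minimal and the full flag sets, leaving the existing
# stack/globals handling for CONFIG_KASAN_OUTLINE=y untouched.
CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address $(CFLAGS_ARCH_KASAN)
CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
	-fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
	--param asan-stack=1 --param asan-globals=1 \
	--param asan-instrumentation-with-call-threshold=$(call_threshold)) \
	$(CFLAGS_ARCH_KASAN)
```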
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> If we end up calling kasan_report in real mode, our shadow mapping
> for even a spinlock variable will show as poisoned.
Generally I agree with this patch. We should disable reports as early
as possible when we print a report, to prevent recursion in case of a
bug in spinlock or printk etc.
But I don't understand what is the problem that you observing.
How we ended up with shadow poisoned for a valid spinlock struct?
And since shadow poisoned for some valid memory we should get
enormous amount of false positive reports.
> This will result
> in us calling kasan_report_error with the report_lock spinlock held.
> To prevent this, disable kasan reporting while we are printing a
> kasan error.
>
> Signed-off-by: Aneesh Kumar K.V <[email protected]>
> ---
> mm/kasan/report.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 79fbc5d14bd2..82b41eb83e43 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -185,6 +185,10 @@ void kasan_report_error(struct kasan_access_info *info)
> {
> unsigned long flags;
>
> + /*
> + * Make sure we don't end up in a loop.
> + */
> + kasan_disable_current();
> spin_lock_irqsave(&report_lock, flags);
> pr_err("================================="
> "=================================\n");
> @@ -194,12 +198,17 @@ void kasan_report_error(struct kasan_access_info *info)
> pr_err("================================="
> "=================================\n");
> spin_unlock_irqrestore(&report_lock, flags);
> + kasan_enable_current();
> }
>
> void kasan_report_user_access(struct kasan_access_info *info)
> {
> unsigned long flags;
>
> + /*
> + * Make sure we don't end up in a loop.
> + */
> + kasan_disable_current();
> spin_lock_irqsave(&report_lock, flags);
> pr_err("================================="
> "=================================\n");
> @@ -212,6 +221,7 @@ void kasan_report_user_access(struct kasan_access_info *info)
> pr_err("================================="
> "=================================\n");
> spin_unlock_irqrestore(&report_lock, flags);
> + kasan_enable_current();
> }
>
> void kasan_report(unsigned long addr, size_t size,
> --
> 2.5.0
>
2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
> + k_start = (unsigned long)kasan_mem_to_shadow(start);
> + k_end = (unsigned long)kasan_mem_to_shadow(end);
> + for (; k_start < k_end; k_start += page_size) {
> + p = vmemmap_alloc_block(page_size, node);
> + if (!p) {
> + pr_info("Disabled Kasan, for lack of free mem\n");
> + /* Free the stuff or panic ? */
vmemmap_alloc_block() panics on allocation failure, so you don't need
this if block.
You could replace this with memblock_virt_alloc_try_nid_nopanic(), but
note that if/when we have working asan-stack=1 there will be no way to
fall back.
> + return;
> + }
> + htab_bolt_mapping(k_start, k_start + page_size,
> + __pa(p), pgprot_val(PAGE_KERNEL),
> + mmu_vmemmap_psize, mmu_kernel_ssize);
> + }
> + }
> + /*
> + * At this point kasan is fully initialized. Enable error messages
> + */
> + init_task.kasan_depth = 0;
> + __kasan_enabled = true;
> + pr_info("Kernel address sanitizer initialized\n");
> +}
> diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
> index 736d18b3cefd..154bd8a0b437 100644
> --- a/arch/powerpc/mm/slb_low.S
> +++ b/arch/powerpc/mm/slb_low.S
> @@ -80,11 +80,15 @@ END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
> /* Check virtual memmap region. To be patched at kernel boot */
> cmpldi cr0,r9,0xf
> bne 1f
> +2:
> .globl slb_miss_kernel_load_vmemmap
> slb_miss_kernel_load_vmemmap:
> li r11,0
> b 6f
> 1:
> + /* Kasan region same as vmemmap mapping */
> + cmpldi cr0,r9,0xe
> + beq 2b
> #endif /* CONFIG_SPARSEMEM_VMEMMAP */
>
> /* vmalloc mapping gets the encoding from the PACA as the mapping
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index c140e94c7c72..7a7c9d54f80e 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -75,6 +75,7 @@ config PPC_BOOK3S_64
> select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
> select ARCH_SUPPORTS_NUMA_BALANCING
> select IRQ_WORK
> + select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP
>
> config PPC_BOOK3E_64
> bool "Embedded processors"
> --
> 2.5.0
>
Andrey Ryabinin <[email protected]> writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>> Conditionalize the check using #ifdef
>>
>> Signed-off-by: Aneesh Kumar K.V <[email protected]>
>> ---
>> mm/kasan/report.c | 11 ++++++++---
>> 1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index e07c94fbd0ac..71ce7548d914 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -85,9 +85,14 @@ static void print_error_description(struct kasan_access_info *info)
>>
>> static inline bool kernel_or_module_addr(const void *addr)
>> {
>> - return (addr >= (void *)_stext && addr < (void *)_end)
>> - || (addr >= (void *)MODULES_VADDR
>> - && addr < (void *)MODULES_END);
>> + if (addr >= (void *)_stext && addr < (void *)_end)
>> + return true;
>> +#if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
>> + if (addr >= (void *)MODULES_VADDR
>> + && addr < (void *)MODULES_END)
>> + return true;
>> +#endif
>
> I don't think that this is the correct change.
> On ppc64 modules are in VMALLOC, so you should check for this.
> Yes, we don't handle VMALLOC now, but we will at some point.
>
> So I think we should use is_module_address() here.
> It will be slower, but we don't care about performance in the error reporting path.
Will fix in the next update.
-aneesh
Andrey Ryabinin <[email protected]> writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>> We add enable/disable callbacks in this patch which architectures
>> can implement. We will use this in later patches for architectures
>> like ppc64, that cannot have early zero page kasan shadow region for the
>> entire virtual address space. Such architectures also cannot use
>> inline kasan support.
>>
>> Signed-off-by: Aneesh Kumar K.V <[email protected]>
>> ---
>> mm/kasan/kasan.c | 9 +++++++++
>> mm/kasan/kasan.h | 15 +++++++++++++++
>> 2 files changed, 24 insertions(+)
>>
>> diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
>> index 7b28e9cdf1c7..e4d33afd0eaf 100644
>> --- a/mm/kasan/kasan.c
>> +++ b/mm/kasan/kasan.c
>> @@ -43,6 +43,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
>> {
>> void *shadow_start, *shadow_end;
>>
>> + if (!kasan_enabled())
>> + return;
>> +
>
> By the time this function is called we should already have shadow,
> so it should be safe to remove this check.
>
I remember hitting that call before enabling kasan completely. Will
check that again.
>
>> shadow_start = kasan_mem_to_shadow(address);
>> shadow_end = kasan_mem_to_shadow(address + size);
>>
>> @@ -51,6 +54,9 @@ static void kasan_poison_shadow(const void *address, size_t size, u8 value)
>>
>> void kasan_unpoison_shadow(const void *address, size_t size)
>> {
>> + if (!kasan_enabled())
>> + return;
>> +
>
> Ditto.
>
>> kasan_poison_shadow(address, size, 0);
>>
>> if (size & KASAN_SHADOW_MASK) {
>> @@ -238,6 +244,9 @@ static __always_inline void check_memory_region(unsigned long addr,
>> {
>> struct kasan_access_info info;
>>
>> + if (!kasan_enabled())
>> + return;
>> +
>> if (unlikely(size == 0))
>> return;
>>
>> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
>> index a6b46cc94907..deb547d5a916 100644
>> --- a/mm/kasan/kasan.h
>> +++ b/mm/kasan/kasan.h
>> @@ -68,6 +68,21 @@ static inline bool kasan_report_enabled(void)
>> return !current->kasan_depth;
>> }
>>
>> +#ifndef kasan_enabled
>> +/*
>> + * Some archs may want to disable kasan callbacks.
>> + */
>> +static inline bool kasan_enabled(void)
>> +{
>> + return true;
>> +}
>> +#define kasan_enabled kasan_enabled
>
> Why do we need this define?
That is to make sure that we don't end up with a different definition of
kasan_enabled due to header include ordering. Once we select a
particular definition of the overloaded function, it is good to mark
the function as defined, so that a later #ifndef kasan_enabled check
will always fail and the fallback is never picked up.
>
>> +#else
>> +#ifdef CONFIG_KASAN_INLINE
>> +#error "Kasan inline support cannot work with KASAN arch hooks"
>> +#endif
>> +#endif
>> +
>> void kasan_report(unsigned long addr, size_t size,
>> bool is_write, unsigned long ip);
>>
>> --
>> 2.5.0
>>
-aneesh
Andrey Ryabinin <[email protected]> writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>> Some of the archs may find it difficult to support inline KASan
>> mode. Add HAVE_ARCH_KASAN_INLINE so that we can disable inline
>> support at config time.
>>
>> Signed-off-by: Aneesh Kumar K.V <[email protected]>
>> ---
>> arch/x86/Kconfig | 1 +
>> lib/Kconfig.kasan | 2 ++
>> scripts/Makefile.kasan | 28 ++++++++++++++--------------
>> 3 files changed, 17 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index b3a1a5d77d92..4416f80580fb 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -78,6 +78,7 @@ config X86
>> select HAVE_ARCH_HUGE_VMAP if X86_64 || X86_PAE
>> select HAVE_ARCH_JUMP_LABEL
>> select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP
>> + select HAVE_ARCH_KASAN_INLINE if X86_64 && SPARSEMEM_VMEMMAP
>
> This will not work because config HAVE_ARCH_KASAN_INLINE is not defined.
> Instead you can just add the following in this file:
>
> config HAVE_ARCH_KASAN_INLINE
> def_bool y
> depends on KASAN
>
Missed that in the cherry-pick; here is the missing hunk:
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index e9d1bb1175b8..5dba03bc3f01 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -1,6 +1,9 @@
config HAVE_ARCH_KASAN
bool
+config HAVE_ARCH_KASAN_INLINE
+ bool
+
if HAVE_ARCH_KASAN
config KASAN
>
>> select HAVE_ARCH_KGDB
>> select HAVE_ARCH_KMEMCHECK
>> select HAVE_ARCH_SECCOMP_FILTER
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index 39f24d6721e5..e9d1bb1175b8 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -32,6 +32,7 @@ config KASAN_OUTLINE
>> however it doesn't bloat size of kernel's .text section so
>> much as inline does.
>>
>> +if HAVE_ARCH_KASAN_INLINE
>> config KASAN_INLINE
>> bool "Inline instrumentation"
>
> depends on HAVE_ARCH_KASAN_INLINE
>
>> help
>> @@ -40,6 +41,7 @@ config KASAN_INLINE
>> it gives about x2 boost over outline instrumentation), but
>> make kernel's .text size much bigger.
>> This requires a gcc version of 5.0 or later.
>> +endif
>>
>> endchoice
>>
>> diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
>> index 3f874d24234f..c1c06e9e107a 100644
>> --- a/scripts/Makefile.kasan
>> +++ b/scripts/Makefile.kasan
>> @@ -1,29 +1,29 @@
>> ifdef CONFIG_KASAN
>> -ifdef CONFIG_KASAN_INLINE
>> - call_threshold := 10000
>> -else
>> - call_threshold := 0
>> -endif
>> -
>> -CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
>>
>> CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \
>> - -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
>> - --param asan-stack=1 --param asan-globals=1 \
>> - --param asan-instrumentation-with-call-threshold=$(call_threshold))
>> -
>> -ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),)
>> + --param asan-instrumentation-with-call-threshold=0)
>> +ifeq ($(CFLAGS_KASAN),)
>> ifneq ($(CONFIG_COMPILE_TEST),y)
>> $(warning Cannot use CONFIG_KASAN: \
>> -fsanitize=kernel-address is not supported by compiler)
>> endif
>> else
>> - ifeq ($(CFLAGS_KASAN),)
>> +
>> + ifdef CONFIG_KASAN_INLINE
>> + CFLAGS_KASAN_INLINE := $(call cc-option, -fsanitize=kernel-address \
>> + -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \
>> + --param asan-stack=1 --param asan-globals=1 \
>> + --param asan-instrumentation-with-call-threshold=10000)
>> +
>> + ifeq ($(CFLAGS_KASAN_INLINE),)
>> ifneq ($(CONFIG_COMPILE_TEST),y)
>> $(warning CONFIG_KASAN: compiler does not support all options.\
>> Trying minimal configuration)
>> endif
>> - CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL)
>> + else
>> + CFLAGS_KASAN := $(CFLAGS_KASAN_INLINE)
>> endif
>> + endif
>> +
>
> This removes stack and globals instrumentation for CONFIG_KASAN_OUTLINE=y. Why?
> Those are completely separate features, so this patch shouldn't touch
> this Makefile at all.
> A 'depends on HAVE_ARCH_KASAN_INLINE' in CONFIG_KASAN_INLINE should be enough.
Correct. I have changes that will make kasan_global and kasan_stack as
config items. I didn't add it as part of this series because I still
haven't got outline global and stack work on ppc64. Right now the split
config only applies to ppc64. Hence didn't want to make the patch more
complicated.
>
> But you need to disable 'asan-stack' and 'asan-globals' for ppc64.
> I'd suggest to introduce CFLAGS_ARCH_KASAN.
> Define it in ppc64 Makefile:
>
> CFLAGS_ARCH_KASAN := --param asan-globals=0 --param asan-stack=0
>
> and add these flags to CFLAGS_KASAN_MINIMAL and CFLAGS_KASAN in Makefile.kasan.
>
>
>> endif
>> endif
>>
-aneesh
Andrey Ryabinin <[email protected]> writes:
> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>> If we end up calling kasan_report in real mode, our shadow mapping
>> for even a spinlock variable will show as poisoned.
>
> Generally I agree with this patch. We should disable reports as early
> as possible when we print a report, to prevent recursion in case of a
> bug in spinlock or printk etc.
>
> But I don't understand what is the problem that you observing.
> How we ended up with shadow poisoned for a valid spinlock struct?
> And since shadow poisoned for some valid memory we should get
> enormous amount of false positive reports.
>
I still haven't fully isolated all the .c files which should not be
kasan instrumented. That means in the case of ppc64 I ended up calling
kasan _load/_store in real mode. That will result in failure w.r.t.
the above spin_lock code.
-aneesh
Hi Aneesh,
Are you still working on support for KASan for ppc64?
Thanks,
Christophe
On 26/08/2015 at 19:14, Aneesh Kumar K.V wrote:
> Andrey Ryabinin <[email protected]> writes:
>
>> 2015-08-26 11:26 GMT+03:00 Aneesh Kumar K.V <[email protected]>:
>>> Hi,
>>>
>>> This patchset implements kernel address sanitizer for ppc64.
>>> Since ppc64 virtual address range is divided into different regions,
>>> we can't have one contiguous area for the kasan shadow range. Hence
>>> we don't support the INLINE kasan instrumentation. With Outline
>>> instrumentation, we override the shadow_to_mem and mem_to_shadow
>>> callbacks, so that we map only the kernel linear range (i.e., the
>>> region with ID 0xc). For regions with ID 0xd and 0xf (vmalloc
>>> and vmemmap) we return the address of the zero page. This
>>> works because kasan doesn't track either vmemmap or vmalloc addresses.
>>>
>>> Known issues:
>>> * Kasan is not yet enabled for arch/powerpc/kvm
>>> * kexec hang
>>> * outline stack and global support
>>>
>>
>> Is there any problem with globals, or did you just not try it yet?
>> I think it should just work. You only need to add --param
>> asan-globals=0 to KBUILD_CFLAGS_MODULE
>> to disable it for modules.
>
> I am hitting BUG_ON in early vmalloc code. I still haven't got time to
> debug it further. Should get to that soon.
>
> -aneesh
>
> _______________________________________________
> Linuxppc-dev mailing list
> [email protected]
> https://lists.ozlabs.org/listinfo/linuxppc-dev
>
Christophe LEROY <[email protected]> writes:
> Hi Aneesh,
>
> Are you still working on support for KASan for ppc64?
>
Haven't got time to work on this. The hash memory layout makes it
a bit complicated to implement this.
-aneesh
On 06/07/2018 at 16:11, Aneesh Kumar K.V wrote:
> Christophe LEROY <[email protected]> writes:
>
>> Hi Aneesh,
>>
>> Are you still working on support for KASan for ppc64?
>>
>
> Haven't got time to work on this. The hash memory layout makes it
> a bit complicated to implement this.
>
Ok, maybe it would be easier to start with nohash.
Is there some literature somewhere about what an arch has to implement
to use KASan?
Thanks,
Christophe