Hi all.
This patch set introduces an address sanitizer for the Linux kernel (kasan).
Address sanitizer is a dynamic memory error detector. It detects:
- Use after free bugs.
- Out-of-bounds reads/writes in kmalloc-allocated memory
The following are possible, but not implemented yet or not included in this patch series:
- Global buffer overflow
- Stack buffer overflow
- Use after return
These patches contain kasan for the x86/x86_64/arm architectures, for the buddy and SLUB allocators.
The patches are based on next-20140704 and are also available in git:
git://github.com/aryabinin/linux.git --branch=kasan/kasan_v1
The main idea was borrowed from AddressSanitizerForKernel [1].
The original implementation (x86_64 only and only for SLAB) by Andrey Konovalov can be
found here: http://github.com/xairy/linux. Some of the code in these patches was taken from there.
To use this feature you need a pretty fresh GCC (revision r211699 from 2014-06-16 or
later).
To enable kasan, configure the kernel with:
CONFIG_KASAN=y
and
CONFIG_KASAN_SANITIZE_ALL=y
Currently KASAN works only with the SLUB allocator. It is highly recommended to run KASAN with
CONFIG_SLUB_DEBUG=y and to use 'slub_debug=U' in the boot cmdline to enable user tracking
(free and alloc stacktraces).
Basic concept of kasan:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and to use the compiler's instrumentation to check the shadow
memory on each memory access.
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.
Here is the function that translates an address to its corresponding shadow address:

unsigned long kasan_mem_to_shadow(unsigned long addr)
{
	return ((addr) >> KASAN_SHADOW_SCALE_SHIFT)
		+ kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
}
where KASAN_SHADOW_SCALE_SHIFT = 3.
So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
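For illustration, here is a minimal sketch of how a 1-byte access check could look
under this encoding (illustrative only; the actual checking code lives in mm/kasan/kasan.c):

	/*
	 * Sketch: returns true if a 1-byte access to addr is bad, assuming
	 * the shadow encoding described above. KASAN_SHADOW_MASK is
	 * KASAN_SHADOW_SCALE_SIZE - 1, i.e. 7.
	 */
	static bool memory_is_poisoned_1(unsigned long addr)
	{
		s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

		if (unlikely(shadow_value)) {
			/* offset of the accessed byte within its 8-byte granule */
			s8 accessed_byte = addr & KASAN_SHADOW_MASK;

			/* shadow k: bytes 0..k-1 are valid; negative: none are */
			return unlikely(accessed_byte >= shadow_value);
		}
		return false;
	}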
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether the memory region is valid to access by checking the
corresponding shadow memory. If the access is not valid, an error is printed.
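Conceptually, for code like

	p->field = 1;	/* 4-byte store */
	val = *q;	/* 8-byte load */

the instrumented compiler emits something equivalent to

	__asan_store4((unsigned long)&p->field);
	p->field = 1;
	__asan_load8((unsigned long)q);
	val = *q;

(illustrative pseudo-output; the exact calls and calling convention are up to GCC).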
TODO:
- Optimizations: __asan_load*/__asan_store* are called for every memory access, so it's
important to make them as fast as possible.
This patch set introduces only a reference design of the memory checking algorithm. It's
slow but very simple, so anyone can easily understand the basic concept.
In future versions I'll try to bring optimized versions with some numbers.
- It seems like the guard pages introduced in commit c0a32f (mm: more intensive memory corruption debugging)
could easily be reused for kasan as well.
- Get rid of the kasan_disable_local()/kasan_enable_local() functions. kasan_enable/kasan_disable are
used in some rare cases when we need to validly access poisoned areas. These functions might be an
obstacle for inline instrumentation (see below).
TODO probably not for these series:
- Quarantine for SLUB. For stronger use-after-free detection we need to delay reuse of freed
slabs. So we need something similar to guard pages in the buddy allocator. Such a quarantine might
be useful even without kasan.
- Inline instrumentation. Inline instrumentation means that the fast path of the __asan_load*/__asan_store*
calls will be implemented in the compiler: instead of inserting function calls, the compiler will actually
insert this fast path directly (see the sketch after this list). To be able to do this we need (at least) to:
a) get rid of kasan_disable()/kasan_enable() (see above)
b) get rid of the kasan_initialized flag. The main reason why we have this flag now is that we don't
have any shadow during the early stages of boot.
Konstantin Khlebnikov suggested a way to solve this issue:
we could reserve virtual address space for the shadow and map pages at a very early stage of the
boot process (for x86_64 I think it should be done somewhere in x86_64_start_kernel).
Then we will have shadow all the time and the kasan_initialized flag will no longer be required.
- Stack instrumentation (currently not supported in mainline GCC, though it is possible)
- Global variables instrumentation
- Use after return
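As a rough illustration of the inline fast path mentioned above, for an aligned 8-byte
access the compiler could emit something like the following (a sketch; shadow_offset
stands for the scaled shadow base, and the report call is hypothetical):

	s8 shadow = *(s8 *)((addr >> KASAN_SHADOW_SCALE_SHIFT) + shadow_offset);
	if (unlikely(shadow))
		__asan_report_load8(addr);	/* slow path: print the error */

An aligned 8-byte access covers a whole granule, so any non-zero shadow byte means
the access is bad.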
[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
List of already fixed bugs found by address sanitizer:
aab515d (fib_trie: remove potential out of bound access)
984f173 ([SCSI] sd: Fix potential out-of-bounds access)
5e9ae2e (aio: fix use-after-free in aio_migratepage)
2811eba (ipv6: udp packets following an UFO enqueued packet need also be handled by UFO)
057db84 (tracing: Fix potential out-of-bounds in trace_get_user())
9709674 (ipv4: fix a race in ip4_datagram_release_cb())
4e8d213 (ext4: fix use-after-free in ext4_mb_new_blocks)
624483f (mm: rmap: fix use-after-free in __put_anon_vma)
Cc: Dmitry Vyukov <[email protected]>
Cc: Konstantin Serebryany <[email protected]>
Cc: Alexey Preobrazhensky <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Yuri Gribov <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Sasha Levin <[email protected]>
Cc: Michal Marek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Andrey Ryabinin (21):
Add kernel address sanitizer infrastructure.
init: main: initialize kasan's shadow area on boot
x86: add kasan hooks for memcpy/memmove/memset functions
x86: boot: vdso: disable instrumentation for code not linked with
kernel
x86: cpu: don't sanitize early stages of a secondary CPU boot
x86: mm: init: allocate shadow memory for kasan
x86: Kconfig: enable kernel address sanitizer
mm: page_alloc: add kasan hooks on alloc and free paths
mm: Makefile: kasan: don't instrument slub.c and slab_common.c files
mm: slab: share virt_to_cache() between slab and slub
mm: slub: share slab_err and object_err functions
mm: util: move krealloc/kzfree to slab_common.c
mm: slub: add allocation size field to struct kmem_cache
mm: slub: kasan: disable kasan when touching inaccessible memory
mm: slub: add kernel address sanitizer hooks to slub allocator
arm: boot: compressed: disable kasan's instrumentation
arm: add kasan hooks for memcpy/memmove/memset functions
arm: mm: reserve shadow memory for kasan
arm: Kconfig: enable kernel address sanitizer
fs: dcache: manually unpoison dname after allocation to shut up
kasan's reports
lib: add kmalloc_bug_test module
Documentation/kasan.txt | 224 ++++++++++++++++++++
Makefile | 8 +-
arch/arm/Kconfig | 1 +
arch/arm/boot/compressed/Makefile | 2 +
arch/arm/include/asm/string.h | 30 +++
arch/arm/mm/init.c | 3 +
arch/x86/Kconfig | 1 +
arch/x86/boot/Makefile | 2 +
arch/x86/boot/compressed/Makefile | 2 +
arch/x86/include/asm/string_32.h | 28 +++
arch/x86/include/asm/string_64.h | 24 +++
arch/x86/kernel/cpu/Makefile | 3 +
arch/x86/lib/Makefile | 2 +
arch/x86/mm/init.c | 3 +
arch/x86/realmode/Makefile | 2 +-
arch/x86/realmode/rm/Makefile | 1 +
arch/x86/vdso/Makefile | 1 +
commit | 3 +
fs/dcache.c | 3 +
include/linux/kasan.h | 61 ++++++
include/linux/sched.h | 4 +
include/linux/slab.h | 19 +-
include/linux/slub_def.h | 5 +
init/main.c | 3 +-
lib/Kconfig.debug | 10 +
lib/Kconfig.kasan | 22 ++
lib/Makefile | 1 +
lib/test_kmalloc_bugs.c | 254 +++++++++++++++++++++++
mm/Makefile | 5 +
mm/kasan/Makefile | 3 +
mm/kasan/kasan.c | 420 ++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 42 ++++
mm/kasan/report.c | 187 +++++++++++++++++
mm/page_alloc.c | 4 +
mm/slab.c | 6 -
mm/slab.h | 25 ++-
mm/slab_common.c | 96 +++++++++
mm/slub.c | 50 ++++-
mm/util.c | 91 ---------
scripts/Makefile.lib | 10 +
40 files changed, 1550 insertions(+), 111 deletions(-)
create mode 100644 Documentation/kasan.txt
create mode 100644 commit
create mode 100644 include/linux/kasan.h
create mode 100644 lib/Kconfig.kasan
create mode 100644 lib/test_kmalloc_bugs.c
create mode 100644 mm/kasan/Makefile
create mode 100644 mm/kasan/kasan.c
create mode 100644 mm/kasan/kasan.h
create mode 100644 mm/kasan/report.c
--
1.8.5.5
Instrumentation of these files may result in an unbootable machine.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/x86/kernel/cpu/Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 7fd54f0..a7bb360 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -8,6 +8,9 @@ CFLAGS_REMOVE_common.o = -pg
CFLAGS_REMOVE_perf_event.o = -pg
endif
+KASAN_SANITIZE_common.o := n
+KASAN_SANITIZE_perf_event.o := n
+
# Make sure load_percpu_segment has no stackprotector
nostackp := $(call cc-option, -fno-stack-protector)
CFLAGS_common.o := $(nostackp)
--
1.8.5.5
Some code in SLUB can validly touch memory marked by kasan as inaccessible.
Even though slub.c is not instrumented, functions called from it are instrumented,
so to avoid false positive reports such places are protected by
kasan_disable_local()/kasan_enable_local() calls.
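A minimal sketch of what these helpers could look like, assuming a per-task nesting
counter (the series adds a few lines to include/linux/sched.h, so something like a
kasan_depth field in task_struct); the shadow checker would skip reports while the
counter is non-zero. Illustrative, not necessarily the exact v1 code:

	void kasan_disable_local(void)
	{
		/* enter a region where touching poisoned memory is legitimate */
		current->kasan_depth++;
	}

	void kasan_enable_local(void)
	{
		/* leave the region; reports become active again */
		current->kasan_depth--;
	}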
Signed-off-by: Andrey Ryabinin <[email protected]>
---
mm/slub.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 6ddedf9..c8dbea7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -560,8 +560,10 @@ static void print_tracking(struct kmem_cache *s, void *object)
if (!(s->flags & SLAB_STORE_USER))
return;
+ kasan_disable_local();
print_track("Allocated", get_track(s, object, TRACK_ALLOC));
print_track("Freed", get_track(s, object, TRACK_FREE));
+ kasan_enable_local();
}
static void print_page_info(struct page *page)
@@ -604,6 +606,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
unsigned int off; /* Offset of last byte */
u8 *addr = page_address(page);
+ kasan_disable_local();
+
print_tracking(s, p);
print_page_info(page);
@@ -632,6 +636,8 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
/* Beginning of the filler is the free pointer */
print_section("Padding ", p + off, s->size - off);
+ kasan_enable_local();
+
dump_stack();
}
@@ -1012,6 +1018,8 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
struct page *page,
void *object, unsigned long addr)
{
+
+ kasan_disable_local();
if (!check_slab(s, page))
goto bad;
@@ -1028,6 +1036,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
set_track(s, object, TRACK_ALLOC, addr);
trace(s, page, object, 1);
init_object(s, object, SLUB_RED_ACTIVE);
+ kasan_enable_local();
return 1;
bad:
@@ -1041,6 +1050,7 @@ bad:
page->inuse = page->objects;
page->freelist = NULL;
}
+ kasan_enable_local();
return 0;
}
@@ -1052,6 +1062,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
spin_lock_irqsave(&n->list_lock, *flags);
slab_lock(page);
+ kasan_disable_local();
if (!check_slab(s, page))
goto fail;
@@ -1088,6 +1099,7 @@ static noinline struct kmem_cache_node *free_debug_processing(
trace(s, page, object, 0);
init_object(s, object, SLUB_RED_INACTIVE);
out:
+ kasan_enable_local();
slab_unlock(page);
/*
* Keep node_lock to preserve integrity
@@ -1096,6 +1108,7 @@ out:
return n;
fail:
+ kasan_enable_local();
slab_unlock(page);
spin_unlock_irqrestore(&n->list_lock, *flags);
slab_fix(s, "Object at 0x%p not freed", object);
@@ -1371,8 +1384,11 @@ static void setup_object(struct kmem_cache *s, struct page *page,
void *object)
{
setup_object_debug(s, page, object);
- if (unlikely(s->ctor))
+ if (unlikely(s->ctor)) {
+ kasan_disable_local();
s->ctor(object);
+ kasan_enable_local();
+ }
}
static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
@@ -1425,11 +1441,12 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
if (kmem_cache_debug(s)) {
void *p;
-
+ kasan_disable_local();
slab_pad_check(s, page);
for_each_object(p, s, page_address(page),
page->objects)
check_object(s, page, p, SLUB_RED_INACTIVE);
+ kasan_enable_local();
}
kmemcheck_free_shadow(page, compound_order(page));
--
1.8.5.5
To avoid build errors, the compiler's instrumentation used for the kernel
address sanitizer must be disabled for code not linked with the kernel image.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/arm/boot/compressed/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile
index 76a50ec..03f2976 100644
--- a/arch/arm/boot/compressed/Makefile
+++ b/arch/arm/boot/compressed/Makefile
@@ -4,6 +4,8 @@
# create a compressed vmlinuz image from the original vmlinux
#
+KASAN_SANITIZE := n
+
OBJS =
# Ensure that MMCIF loader code appears early in the image
--
1.8.5.5
We need to manually unpoison the rounded-up allocation size for dname
to avoid kasan reports in __d_lookup_rcu.
__d_lookup_rcu may validly read a little beyond the allocated size.
Reported-by: Dmitry Vyukov <[email protected]>
Signed-off-by: Andrey Ryabinin <[email protected]>
---
fs/dcache.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/dcache.c b/fs/dcache.c
index b7e8b20..dff64f2 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -38,6 +38,7 @@
#include <linux/prefetch.h>
#include <linux/ratelimit.h>
#include <linux/list_lru.h>
+#include <linux/kasan.h>
#include "internal.h"
#include "mount.h"
@@ -1412,6 +1413,8 @@ struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
kmem_cache_free(dentry_cache, dentry);
return NULL;
}
+ unpoison_shadow(dname,
+ roundup(name->len + 1, sizeof(unsigned long)));
} else {
dname = dentry->d_iname;
}
--
1.8.5.5
This patch shares virt_to_cache() between slab and slub;
it is now used in cache_from_obj().
Later virt_to_cache() will also be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
mm/slab.c | 6 ------
mm/slab.h | 10 +++++++---
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index e7763db..fa4f840 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -433,12 +433,6 @@ static inline void set_obj_status(struct page *page, int idx, int val) {}
static int slab_max_order = SLAB_MAX_ORDER_LO;
static bool slab_max_order_set __initdata;
-static inline struct kmem_cache *virt_to_cache(const void *obj)
-{
- struct page *page = virt_to_head_page(obj);
- return page->slab_cache;
-}
-
static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
unsigned int idx)
{
diff --git a/mm/slab.h b/mm/slab.h
index 84c160a..1257ade 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,10 +260,15 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
}
#endif
+static inline struct kmem_cache *virt_to_cache(const void *obj)
+{
+ struct page *page = virt_to_head_page(obj);
+ return page->slab_cache;
+}
+
static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
{
struct kmem_cache *cachep;
- struct page *page;
/*
* When kmemcg is not being used, both assignments should return the
@@ -275,8 +280,7 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
if (!memcg_kmem_enabled() && !unlikely(s->flags & SLAB_DEBUG_FREE))
return s;
- page = virt_to_head_page(x);
- cachep = page->slab_cache;
+ cachep = virt_to_cache(x);
if (slab_equal_or_root(cachep, s))
return cachep;
--
1.8.5.5
This is a test module doing various nasty things like
out-of-bounds accesses and use-after-free. It is useful for testing
kernel debugging features like the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
lib/Kconfig.debug | 8 ++
lib/Makefile | 1 +
lib/test_kmalloc_bugs.c | 254 ++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 263 insertions(+)
create mode 100644 lib/test_kmalloc_bugs.c
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 67a4dfc..64fd9e6 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -609,6 +609,14 @@ config DEBUG_STACKOVERFLOW
If in doubt, say "N".
+config KMALLOC_BUG_TEST
+ tristate "Module for testing bugs detection in sl[auo]b"
+ default n
+ help
+	  This is a test module doing various nasty things like
+	  out of bounds accesses, use after free. It is useful for testing
+	  kernel debugging features like kernel address sanitizer.
+
source "lib/Kconfig.kmemcheck"
source "lib/Kconfig.kasan"
diff --git a/lib/Makefile b/lib/Makefile
index e48067c..af68259 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -34,6 +34,7 @@ obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o
obj-$(CONFIG_TEST_MODULE) += test_module.o
obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
obj-$(CONFIG_TEST_BPF) += test_bpf.o
+obj-$(CONFIG_KMALLOC_BUG_TEST) += test_kmalloc_bugs.o
ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/test_kmalloc_bugs.c b/lib/test_kmalloc_bugs.c
new file mode 100644
index 0000000..04cd11b
--- /dev/null
+++ b/lib/test_kmalloc_bugs.c
@@ -0,0 +1,254 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) "kmalloc bug test: " fmt
+
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+
+void __init kmalloc_oob_right(void)
+{
+ char *ptr;
+ size_t size = 123;
+
+ pr_info("out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 'x';
+ kfree(ptr);
+}
+
+void __init kmalloc_oob_left(void)
+{
+ char *ptr;
+ size_t size = 15;
+
+ pr_info("out-of-bounds to left\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ *ptr = *(ptr - 1);
+ kfree(ptr);
+}
+
+void __init kmalloc_node_oob_right(void)
+{
+ char *ptr;
+ size_t size = 4096;
+
+ pr_info("kmalloc_node(): out-of-bounds to right\n");
+	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 0;
+ kfree(ptr);
+}
+
+void __init kmalloc_large_oob_right(void)
+{
+ char *ptr;
+ size_t size = PAGE_SIZE*3 - 10;
+
+ pr_info("kmalloc large allocation: out-of-bounds to right\n");
+	ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr[size] = 0;
+ kfree(ptr);
+}
+
+void __init kmalloc_oob_krealloc_more(void)
+{
+ char *ptr1, *ptr2;
+ size_t size1 = 17;
+ size_t size2 = 19;
+
+ pr_info("out-of-bounds after krealloc more\n");
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ return;
+ }
+
+ ptr2[size2] = 'x';
+ kfree(ptr2);
+}
+
+void __init kmalloc_oob_krealloc_less(void)
+{
+ char *ptr1, *ptr2;
+ size_t size1 = 17;
+ size_t size2 = 15;
+
+ pr_info("out-of-bounds after krealloc less\n");
+ ptr1 = kmalloc(size1, GFP_KERNEL);
+ ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ return;
+ }
+ ptr2[size1] = 'x';
+ kfree(ptr2);
+}
+
+void __init kmalloc_oob_16(void)
+{
+ struct {
+ u64 words[2];
+ } *ptr1, *ptr2;
+
+ pr_info("kmalloc out-of-bounds for 16-bytes access\n");
+ ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+ ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
+ if (!ptr1 || !ptr2) {
+ pr_err("Allocation failed\n");
+ kfree(ptr1);
+ kfree(ptr2);
+ return;
+ }
+ *ptr1 = *ptr2;
+ kfree(ptr1);
+ kfree(ptr2);
+}
+
+void __init kmalloc_oob_in_memset(void)
+{
+ char *ptr;
+ size_t size = 666;
+
+ pr_info("out-of-bounds in memset\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ memset(ptr, 0, size+5);
+ kfree(ptr);
+}
+
+void __init kmalloc_uaf(void)
+{
+ char *ptr;
+ size_t size = 10;
+
+ pr_info("use-after-free\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr);
+ *ptr = 'x';
+}
+
+void __init kmalloc_uaf_memset(void)
+{
+ char *ptr;
+ size_t size = 33;
+
+ pr_info("use-after-free in memset\n");
+ ptr = kmalloc(size, GFP_KERNEL);
+ if (!ptr) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr);
+ memset(ptr, 0, size);
+}
+
+void __init kmalloc_uaf2(void)
+{
+ char *ptr1, *ptr2;
+ size_t size = 43;
+
+ pr_info("use-after-free after another kmalloc\n");
+ ptr1 = kmalloc(size, GFP_KERNEL);
+ if (!ptr1) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ kfree(ptr1);
+ ptr2 = kmalloc(size, GFP_KERNEL);
+ if (!ptr2) {
+ pr_err("Allocation failed\n");
+ return;
+ }
+
+ ptr1[0] = 'x';
+ kfree(ptr2);
+}
+
+void __init kmem_cache_oob(void)
+{
+ char *p;
+ size_t size = 200;
+ struct kmem_cache *cache = kmem_cache_create("test_cache",
+ size, 0,
+ 0, NULL);
+ if (!cache) {
+ pr_err("Cache allocation failed\n");
+ return;
+ }
+ pr_info("out-of-bounds in kmem_cache_alloc\n");
+ p = kmem_cache_alloc(cache, GFP_KERNEL);
+ if (!p) {
+ pr_err("Allocation failed\n");
+ kmem_cache_destroy(cache);
+ return;
+ }
+
+ *p = p[size];
+ kmem_cache_free(cache, p);
+ kmem_cache_destroy(cache);
+}
+
+int __init kmalloc_tests_init(void)
+{
+	kmalloc_oob_right();
+ kmalloc_oob_left();
+ kmalloc_node_oob_right();
+	kmalloc_large_oob_right();
+ kmalloc_oob_krealloc_more();
+ kmalloc_oob_krealloc_less();
+ kmalloc_oob_16();
+ kmalloc_oob_in_memset();
+ kmalloc_uaf();
+ kmalloc_uaf_memset();
+ kmalloc_uaf2();
+ kmem_cache_oob();
+ return 0;
+}
+
+module_init(kmalloc_tests_init);
+MODULE_LICENSE("GPL");
--
1.8.5.5
Now everything in arm code is ready for kasan. Enable it.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/arm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c52d1ca..c62db6c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -26,6 +26,7 @@ config ARM
select HARDIRQS_SW_RESEND
select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
+ select HAVE_ARCH_KASAN
select HAVE_ARCH_KGDB
select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_TRACEHOOK
--
1.8.5.5
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/arm/mm/init.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 659c75d..02fce2c 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
#include <linux/memblock.h>
#include <linux/dma-contiguous.h>
#include <linux/sizes.h>
+#include <linux/kasan.h>
#include <asm/cp15.h>
#include <asm/mach-types.h>
@@ -324,6 +325,8 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
*/
dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));
+ kasan_alloc_shadow();
+
arm_memblock_steal_permitted = false;
memblock_dump_all();
}
--
1.8.5.5
With this patch kasan will be able to catch bugs in memory allocated
by SLUB.
When a slab page is allocated, the whole page is marked as inaccessible
in the corresponding shadow memory.
On allocation of a SLUB object the requested allocation size is marked as
accessible, and the rest of the object (including SLUB's metadata) is
marked as a redzone (inaccessible); see the sketch below.
We also mark an object as accessible if ksize was called for this object.
There are some places in the kernel where the ksize function is called to inquire
the size of the really allocated area. Such callers can validly access the whole
allocated memory, so it should be marked as accessible by a kasan_krealloc call.
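Schematically, after kasan_slab_alloc()/kasan_kmalloc() the shadow of an object
looks like this (shadow values from mm/kasan/kasan.h in this series):

	object start                                       object end (s->size)
	|<-- alloc_size bytes: accessible -->|<-- rest: KASAN_KMALLOC_REDZONE -->|

and after kmem_cache_free()/kfree() the whole object is poisoned with
KASAN_KMALLOC_FREE, which is how use-after-free gets reported.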
Signed-off-by: Andrey Ryabinin <[email protected]>
---
include/linux/kasan.h | 22 ++++++++++
include/linux/slab.h | 19 +++++++--
lib/Kconfig.kasan | 2 +
mm/kasan/kasan.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 5 +++
mm/kasan/report.c | 23 +++++++++++
mm/slab.h | 2 +-
mm/slab_common.c | 9 +++--
mm/slub.c | 24 ++++++++++-
9 files changed, 208 insertions(+), 8 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 4adc0a1..583c011 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -20,6 +20,17 @@ void kasan_init_shadow(void);
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);
+void kasan_kmalloc_large(const void *ptr, size_t size);
+void kasan_kfree_large(const void *ptr);
+void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size);
+void kasan_krealloc(const void *object, size_t new_size);
+
+void kasan_slab_alloc(struct kmem_cache *s, void *object);
+void kasan_slab_free(struct kmem_cache *s, void *object);
+
+void kasan_alloc_slab_pages(struct page *page, int order);
+void kasan_free_slab_pages(struct page *page, int order);
+
#else /* CONFIG_KASAN */
static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -34,6 +45,17 @@ static inline void kasan_alloc_shadow(void) {}
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_kmalloc_large(void *ptr, size_t size) {}
+static inline void kasan_kfree_large(const void *ptr) {}
+static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size) {}
+static inline void kasan_krealloc(const void *object, size_t new_size) {}
+
+static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {}
+static inline void kasan_slab_free(struct kmem_cache *s, void *object) {}
+
+static inline void kasan_alloc_slab_pages(struct page *page, int order) {}
+static inline void kasan_free_slab_pages(struct page *page, int order) {}
+
#endif /* CONFIG_KASAN */
#endif /* LINUX_KASAN_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 68b1feab..a9513e9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -104,6 +104,7 @@
(unsigned long)ZERO_SIZE_PTR)
#include <linux/kmemleak.h>
+#include <linux/kasan.h>
struct mem_cgroup;
/*
@@ -444,6 +445,8 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
*/
static __always_inline void *kmalloc(size_t size, gfp_t flags)
{
+ void *ret;
+
if (__builtin_constant_p(size)) {
if (size > KMALLOC_MAX_CACHE_SIZE)
return kmalloc_large(size, flags);
@@ -454,8 +457,12 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
if (!index)
return ZERO_SIZE_PTR;
- return kmem_cache_alloc_trace(kmalloc_caches[index],
+ ret = kmem_cache_alloc_trace(kmalloc_caches[index],
flags, size);
+
+ kasan_kmalloc(kmalloc_caches[index], ret, size);
+
+ return ret;
}
#endif
}
@@ -485,6 +492,8 @@ static __always_inline int kmalloc_size(int n)
static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
{
#ifndef CONFIG_SLOB
+ void *ret;
+
if (__builtin_constant_p(size) &&
size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
int i = kmalloc_index(size);
@@ -492,8 +501,12 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
if (!i)
return ZERO_SIZE_PTR;
- return kmem_cache_alloc_node_trace(kmalloc_caches[i],
- flags, node, size);
+ ret = kmem_cache_alloc_node_trace(kmalloc_caches[i],
+ flags, node, size);
+
+ kasan_kmalloc(kmalloc_caches[i], ret, size);
+
+ return ret;
}
#endif
return __kmalloc_node(size, flags, node);
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 2bfff78..289a624 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -5,6 +5,8 @@ if HAVE_ARCH_KASAN
config KASAN
bool "AddressSanitizer: dynamic memory error detector"
+ depends on SLUB
+ select STACKTRACE
default n
help
Enables AddressSanitizer - dynamic memory error detector,
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 109478e..9b5182a 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,116 @@ void __init kasan_init_shadow(void)
}
}
+void kasan_alloc_slab_pages(struct page *page, int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_REDZONE);
+}
+
+void kasan_free_slab_pages(struct page *page, int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_SLAB_FREE);
+}
+
+void kasan_slab_alloc(struct kmem_cache *cache, void *object)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (unlikely(object == NULL))
+ return;
+
+ poison_shadow(object, cache->size, KASAN_KMALLOC_REDZONE);
+ unpoison_shadow(object, cache->alloc_size);
+}
+
+void kasan_slab_free(struct kmem_cache *cache, void *object)
+{
+ unsigned long size = cache->size;
+ unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
+}
+
+void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
+{
+ unsigned long redzone_start;
+ unsigned long redzone_end;
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (unlikely(object == NULL))
+ return;
+
+ redzone_start = round_up((unsigned long)(object + size),
+ KASAN_SHADOW_SCALE_SIZE);
+ redzone_end = (unsigned long)object + cache->size;
+
+ unpoison_shadow(object, size);
+ poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+ KASAN_KMALLOC_REDZONE);
+
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_kmalloc_large(const void *ptr, size_t size)
+{
+ struct page *page;
+ unsigned long redzone_start;
+ unsigned long redzone_end;
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (unlikely(ptr == NULL))
+ return;
+
+ page = virt_to_page(ptr);
+ redzone_start = round_up((unsigned long)(ptr + size),
+ KASAN_SHADOW_SCALE_SIZE);
+ redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page));
+
+ unpoison_shadow(ptr, size);
+ poison_shadow((void *)redzone_start, redzone_end - redzone_start,
+ KASAN_PAGE_REDZONE);
+}
+EXPORT_SYMBOL(kasan_kmalloc_large);
+
+void kasan_krealloc(const void *object, size_t size)
+{
+ struct page *page;
+
+ if (unlikely(object == ZERO_SIZE_PTR))
+ return;
+
+ page = virt_to_head_page(object);
+
+ if (unlikely(!PageSlab(page)))
+ kasan_kmalloc_large(object, size);
+ else
+ kasan_kmalloc(page->slab_cache, object, size);
+}
+
+void kasan_kfree_large(const void *ptr)
+{
+ struct page *page;
+
+ if (unlikely(!kasan_initialized))
+ return;
+
+ page = virt_to_page(ptr);
+ poison_shadow(ptr, PAGE_SIZE << compound_order(page), KASAN_FREE_PAGE);
+}
+
void kasan_alloc_pages(struct page *page, unsigned int order)
{
if (unlikely(!kasan_initialized))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index be9597e..f925d03 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,11 @@
#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
#define KASAN_FREE_PAGE 0xFF /* page was freed */
+#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE 0xFA /* free slab page */
#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 6ef9e57..6d829af 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -43,10 +43,15 @@ static void print_error_description(struct access_info *info)
u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
switch (shadow_val) {
+ case KASAN_PAGE_REDZONE:
+ case KASAN_SLAB_REDZONE:
+ case KASAN_KMALLOC_REDZONE:
case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
bug_type = "buffer overflow";
break;
case KASAN_FREE_PAGE:
+ case KASAN_SLAB_FREE:
+ case KASAN_KMALLOC_FREE:
bug_type = "use after free";
break;
case KASAN_SHADOW_GAP:
@@ -70,7 +75,25 @@ static void print_address_description(struct access_info *info)
page = virt_to_page(info->access_addr);
switch (shadow_val) {
+ case KASAN_SLAB_REDZONE:
+ cache = virt_to_cache((void *)info->access_addr);
+ slab_err(cache, page, "access to slab redzone");
+ dump_stack();
+ break;
+ case KASAN_KMALLOC_FREE:
+ case KASAN_KMALLOC_REDZONE:
+ case 1 ... KASAN_SHADOW_SCALE_SIZE - 1:
+ if (PageSlab(page)) {
+ cache = virt_to_cache((void *)info->access_addr);
+ slab_start = page_address(virt_to_head_page((void *)info->access_addr));
+ object = virt_to_obj(cache, slab_start,
+ (void *)info->access_addr);
+ object_err(cache, page, object, "kasan error");
+ break;
+ }
+ case KASAN_PAGE_REDZONE:
case KASAN_FREE_PAGE:
+ case KASAN_SLAB_FREE:
dump_page(page, "kasan error");
dump_stack();
break;
diff --git a/mm/slab.h b/mm/slab.h
index cb2e776..b22ed8b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -353,6 +353,6 @@ void slab_err(struct kmem_cache *s, struct page *page,
const char *fmt, ...);
void object_err(struct kmem_cache *s, struct page *page,
u8 *object, char *reason);
-
+size_t __ksize(const void *obj);
#endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f5b52f0..313e270 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -625,6 +625,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
page = alloc_kmem_pages(flags, order);
ret = page ? page_address(page) : NULL;
kmemleak_alloc(ret, size, 1, flags);
+ kasan_kmalloc_large(ret, size);
return ret;
}
EXPORT_SYMBOL(kmalloc_order);
@@ -797,10 +798,12 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
size_t ks = 0;
if (p)
- ks = ksize(p);
+ ks = __ksize(p);
- if (ks >= new_size)
+ if (ks >= new_size) {
+ kasan_krealloc((void *)p, new_size);
return (void *)p;
+ }
ret = kmalloc_track_caller(new_size, flags);
if (ret && p)
@@ -875,7 +878,7 @@ void kzfree(const void *p)
if (unlikely(ZERO_OR_NULL_PTR(mem)))
return;
- ks = ksize(mem);
+ ks = __ksize(mem);
memset(mem, 0, ks);
kfree(mem);
}
diff --git a/mm/slub.c b/mm/slub.c
index c8dbea7..87d2198 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -33,6 +33,7 @@
#include <linux/stacktrace.h>
#include <linux/prefetch.h>
#include <linux/memcontrol.h>
+#include <linux/kasan.h>
#include <trace/events/kmem.h>
@@ -1245,11 +1246,13 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
{
kmemleak_alloc(ptr, size, 1, flags);
+ kasan_kmalloc_large(ptr, size);
}
static inline void kfree_hook(const void *x)
{
kmemleak_free(x);
+ kasan_kfree_large(x);
}
static inline int slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -1267,11 +1270,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
flags &= gfp_allowed_mask;
kmemcheck_slab_alloc(s, flags, object, slab_ksize(s));
kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags);
+ kasan_slab_alloc(s, object);
}
static inline void slab_free_hook(struct kmem_cache *s, void *x)
{
kmemleak_free_recursive(x, s->flags);
+ kasan_slab_free(s, x);
/*
* Trouble is that we may no longer disable interrupts in the fast path
@@ -1371,6 +1376,8 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
if (!page)
return NULL;
+ kasan_alloc_slab_pages(page, oo_order(oo));
+
page->objects = oo_objects(oo);
mod_zone_page_state(page_zone(page),
(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -1450,6 +1457,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
}
kmemcheck_free_shadow(page, compound_order(page));
+ kasan_free_slab_pages(page, compound_order(page));
mod_zone_page_state(page_zone(page),
(s->flags & SLAB_RECLAIM_ACCOUNT) ?
@@ -2907,6 +2915,7 @@ static void early_kmem_cache_node_alloc(int node)
init_object(kmem_cache_node, n, SLUB_RED_ACTIVE);
init_tracking(kmem_cache_node, n);
#endif
+ kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node));
init_kmem_cache_node(n);
inc_slabs_node(kmem_cache_node, node, page->objects);
@@ -3289,6 +3298,8 @@ void *__kmalloc(size_t size, gfp_t flags)
trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
+ kasan_kmalloc(s, ret, size);
+
return ret;
}
EXPORT_SYMBOL(__kmalloc);
@@ -3332,12 +3343,14 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
+ kasan_kmalloc(s, ret, size);
+
return ret;
}
EXPORT_SYMBOL(__kmalloc_node);
#endif
-size_t ksize(const void *object)
+size_t __ksize(const void *object)
{
struct page *page;
@@ -3353,6 +3366,15 @@ size_t ksize(const void *object)
return slab_ksize(page->slab_cache);
}
+
+size_t ksize(const void *object)
+{
+ size_t size = __ksize(object);
+ /* We assume that ksize callers could use whole allocated area,
+ so we need unpoison this area. */
+ kasan_krealloc(object, size);
+ return size;
+}
EXPORT_SYMBOL(ksize);
void kfree(const void *x)
--
1.8.5.5
Since the memset, memmove and memcpy functions are written in assembly,
the compiler can't instrument memory accesses inside them.
This patch replaces these functions with our own instrumented
functions (kasan_mem*) for CONFIG_KASAN=y.
In rare circumstances you may need to use the original functions;
in such a case put #undef KASAN_HOOKS before the includes (see the example below).
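For example, a file that must use the raw, uninstrumented mem*() routines could
start with the following (an illustrative sketch, relying on the
#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS) guard in the header below):

	#undef KASAN_HOOKS		/* ensure KASAN_HOOKS is not defined here */
	#include <linux/string.h>	/* resolves to the original memset/memcpy/memmove */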
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/arm/include/asm/string.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index cf4f3aa..3cbe47f 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -38,4 +38,34 @@ extern void __memzero(void *ptr, __kernel_size_t n);
(__p); \
})
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In case any of the mem*() functions are written in C we use our instrumented
+ * functions for performance reasons. It should be faster to check the whole
+ * accessed memory range at once than to do a lot of checks at each memory access.
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such case #undef KASAN_HOOKS before includes.
+ */
+#undef memset
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
#endif
--
1.8.5.5
When a caller creates a new kmem_cache, the requested size of the kmem_cache
will be stored in alloc_size. Later alloc_size will be used by the
kernel address sanitizer to mark alloc_size bytes of a slab object as
accessible and the rest of its size as a redzone.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
include/linux/slub_def.h | 5 +++++
mm/slab.h | 10 ++++++++++
mm/slab_common.c | 2 ++
mm/slub.c | 1 +
4 files changed, 18 insertions(+)
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d82abd4..b8b8154 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -68,6 +68,11 @@ struct kmem_cache {
int object_size; /* The size of an object without meta data */
int offset; /* Free pointer offset. */
int cpu_partial; /* Number of per cpu partial objects to keep around */
+
+#ifdef CONFIG_KASAN
+ int alloc_size; /* actual allocation size kmem_cache_create */
+#endif
+
struct kmem_cache_order_objects oo;
/* Allocation and freeing of slabs */
diff --git a/mm/slab.h b/mm/slab.h
index 912af7f..cb2e776 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -260,6 +260,16 @@ static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
}
#endif
+#ifdef CONFIG_KASAN
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size)
+{
+ s->alloc_size = size;
+}
+#else
+static inline void kasan_set_alloc_size(struct kmem_cache *s, size_t size) { }
+#endif
+
+
static inline struct kmem_cache *virt_to_cache(const void *obj)
{
struct page *page = virt_to_head_page(obj);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8df59b09..f5b52f0 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -147,6 +147,7 @@ do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align,
s->name = name;
s->object_size = object_size;
s->size = size;
+ kasan_set_alloc_size(s, object_size);
s->align = align;
s->ctor = ctor;
@@ -409,6 +410,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
s->name = name;
s->size = s->object_size = size;
+ kasan_set_alloc_size(s, size);
s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
err = __kmem_cache_create(s, flags);
diff --git a/mm/slub.c b/mm/slub.c
index 3bdd9ac..6ddedf9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3724,6 +3724,7 @@ __kmem_cache_alias(const char *name, size_t size, size_t align,
* the complete object on kzalloc.
*/
s->object_size = max(s->object_size, (int)size);
+ kasan_set_alloc_size(s, max(s->alloc_size, (int)size));
s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
for_each_memcg_cache_index(i) {
--
1.8.5.5
To avoid false positive reports in the kernel address sanitizer, the krealloc/kzfree
functions shouldn't be instrumented. Since we want to instrument other
functions in mm/util.c, krealloc/kzfree are moved to slab_common.c, which is not
instrumented.
Unfortunately we can't completely disable instrumentation for just one function.
We could disable the compiler's instrumentation for one function by using
__attribute__((no_sanitize_address)).
But the problem here is that the memset call would still be replaced by the instrumented
version kasan_memset, since currently it's implemented as a define:

	#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
Signed-off-by: Andrey Ryabinin <[email protected]>
---
mm/slab_common.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mm/util.c | 91 --------------------------------------------------------
2 files changed, 91 insertions(+), 91 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d31c4ba..8df59b09 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -787,3 +787,94 @@ static int __init slab_proc_init(void)
}
module_init(slab_proc_init);
#endif /* CONFIG_SLABINFO */
+
+static __always_inline void *__do_krealloc(const void *p, size_t new_size,
+ gfp_t flags)
+{
+ void *ret;
+ size_t ks = 0;
+
+ if (p)
+ ks = ksize(p);
+
+ if (ks >= new_size)
+ return (void *)p;
+
+ ret = kmalloc_track_caller(new_size, flags);
+ if (ret && p)
+ memcpy(ret, p, ks);
+
+ return ret;
+}
+
+/**
+ * __krealloc - like krealloc() but don't free @p.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * This function is like krealloc() except it never frees the originally
+ * allocated buffer. Use this if you don't want to free the buffer immediately
+ * like, for example, with RCU.
+ */
+void *__krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ if (unlikely(!new_size))
+ return ZERO_SIZE_PTR;
+
+ return __do_krealloc(p, new_size, flags);
+
+}
+EXPORT_SYMBOL(__krealloc);
+
+/**
+ * krealloc - reallocate memory. The contents will remain unchanged.
+ * @p: object to reallocate memory for.
+ * @new_size: how many bytes of memory are required.
+ * @flags: the type of memory to allocate.
+ *
+ * The contents of the object pointed to are preserved up to the
+ * lesser of the new and old sizes. If @p is %NULL, krealloc()
+ * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
+ * %NULL pointer, the object pointed to is freed.
+ */
+void *krealloc(const void *p, size_t new_size, gfp_t flags)
+{
+ void *ret;
+
+ if (unlikely(!new_size)) {
+ kfree(p);
+ return ZERO_SIZE_PTR;
+ }
+
+ ret = __do_krealloc(p, new_size, flags);
+ if (ret && p != ret)
+ kfree(p);
+
+ return ret;
+}
+EXPORT_SYMBOL(krealloc);
+
+/**
+ * kzfree - like kfree but zero memory
+ * @p: object to free memory of
+ *
+ * The memory of the object @p points to is zeroed before freed.
+ * If @p is %NULL, kzfree() does nothing.
+ *
+ * Note: this function zeroes the whole allocated buffer which can be a good
+ * deal bigger than the requested buffer size passed to kmalloc(). So be
+ * careful when using this function in performance sensitive code.
+ */
+void kzfree(const void *p)
+{
+ size_t ks;
+ void *mem = (void *)p;
+
+ if (unlikely(ZERO_OR_NULL_PTR(mem)))
+ return;
+ ks = ksize(mem);
+ memset(mem, 0, ks);
+ kfree(mem);
+}
+EXPORT_SYMBOL(kzfree);
diff --git a/mm/util.c b/mm/util.c
index 8f326ed..2992e16 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -142,97 +142,6 @@ void *memdup_user(const void __user *src, size_t len)
}
EXPORT_SYMBOL(memdup_user);
-static __always_inline void *__do_krealloc(const void *p, size_t new_size,
- gfp_t flags)
-{
- void *ret;
- size_t ks = 0;
-
- if (p)
- ks = ksize(p);
-
- if (ks >= new_size)
- return (void *)p;
-
- ret = kmalloc_track_caller(new_size, flags);
- if (ret && p)
- memcpy(ret, p, ks);
-
- return ret;
-}
-
-/**
- * __krealloc - like krealloc() but don't free @p.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * This function is like krealloc() except it never frees the originally
- * allocated buffer. Use this if you don't want to free the buffer immediately
- * like, for example, with RCU.
- */
-void *__krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- if (unlikely(!new_size))
- return ZERO_SIZE_PTR;
-
- return __do_krealloc(p, new_size, flags);
-
-}
-EXPORT_SYMBOL(__krealloc);
-
-/**
- * krealloc - reallocate memory. The contents will remain unchanged.
- * @p: object to reallocate memory for.
- * @new_size: how many bytes of memory are required.
- * @flags: the type of memory to allocate.
- *
- * The contents of the object pointed to are preserved up to the
- * lesser of the new and old sizes. If @p is %NULL, krealloc()
- * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
- * %NULL pointer, the object pointed to is freed.
- */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
-{
- void *ret;
-
- if (unlikely(!new_size)) {
- kfree(p);
- return ZERO_SIZE_PTR;
- }
-
- ret = __do_krealloc(p, new_size, flags);
- if (ret && p != ret)
- kfree(p);
-
- return ret;
-}
-EXPORT_SYMBOL(krealloc);
-
-/**
- * kzfree - like kfree but zero memory
- * @p: object to free memory of
- *
- * The memory of the object @p points to is zeroed before freed.
- * If @p is %NULL, kzfree() does nothing.
- *
- * Note: this function zeroes the whole allocated buffer which can be a good
- * deal bigger than the requested buffer size passed to kmalloc(). So be
- * careful when using this function in performance sensitive code.
- */
-void kzfree(const void *p)
-{
- size_t ks;
- void *mem = (void *)p;
-
- if (unlikely(ZERO_OR_NULL_PTR(mem)))
- return;
- ks = ksize(mem);
- memset(mem, 0, ks);
- kfree(mem);
-}
-EXPORT_SYMBOL(kzfree);
-
/*
* strndup_user - duplicate an existing string from user space
* @s: The string to duplicate
--
1.8.5.5
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/x86/mm/init.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f971306..d9925ee 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -4,6 +4,7 @@
#include <linux/swap.h>
#include <linux/memblock.h>
#include <linux/bootmem.h> /* for max_low_pfn */
+#include <linux/kasan.h>
#include <asm/cacheflush.h>
#include <asm/e820.h>
@@ -678,5 +679,7 @@ void __init zone_sizes_init(void)
#endif
free_area_init_nodes(max_zone_pfns);
+
+ kasan_alloc_shadow();
}
--
1.8.5.5
Remove static and add function declarations to mm/slab.h so they
can be used by the kernel address sanitizer.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
mm/slab.h | 5 +++++
mm/slub.c | 4 ++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 1257ade..912af7f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -339,5 +339,10 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
void *slab_next(struct seq_file *m, void *p, loff_t *pos);
void slab_stop(struct seq_file *m, void *p);
+void slab_err(struct kmem_cache *s, struct page *page,
+ const char *fmt, ...);
+void object_err(struct kmem_cache *s, struct page *page,
+ u8 *object, char *reason);
+
#endif /* MM_SLAB_H */
diff --git a/mm/slub.c b/mm/slub.c
index 6641a8f..3bdd9ac 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -635,14 +635,14 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
dump_stack();
}
-static void object_err(struct kmem_cache *s, struct page *page,
+void object_err(struct kmem_cache *s, struct page *page,
u8 *object, char *reason)
{
slab_bug(s, "%s", reason);
print_trailer(s, page, object);
}
-static void slab_err(struct kmem_cache *s, struct page *page,
+void slab_err(struct kmem_cache *s, struct page *page,
const char *fmt, ...)
{
va_list args;
--
1.8.5.5
To avoid build errors, the compiler's instrumentation must be disabled
for code not linked with the kernel image.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/x86/boot/Makefile | 2 ++
arch/x86/boot/compressed/Makefile | 2 ++
arch/x86/realmode/Makefile | 2 +-
arch/x86/realmode/rm/Makefile | 1 +
arch/x86/vdso/Makefile | 1 +
5 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index dbe8dd2..9204cc0 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -14,6 +14,8 @@
# Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
# The number is the same as you would ordinarily press at bootup.
+KASAN_SANITIZE := n
+
SVGA_MODE := -DSVGA_MODE=NORMAL_VGA
targets := vmlinux.bin setup.bin setup.elf bzImage
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 0fcd913..64a92b3 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -4,6 +4,8 @@
# create a compressed vmlinux image from the original vmlinux
#
+KASAN_SANITIZE := n
+
targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index 94f7fbe..e02c2c6 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -6,7 +6,7 @@
# for more details.
#
#
-
+KASAN_SANITIZE := n
subdir- := rm
obj-y += init.o
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 7c0d7be..2730d77 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -6,6 +6,7 @@
# for more details.
#
#
+KASAN_SANITIZE := n
always := realmode.bin realmode.relocs
diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 61b04fe..90daad6 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -3,6 +3,7 @@
#
KBUILD_CFLAGS += $(DISABLE_LTO)
+KASAN_SANITIZE := n
VDSO64-$(CONFIG_X86_64) := y
VDSOX32-$(CONFIG_X86_X32_ABI) := y
--
1.8.5.5
Code in the slub.c and slab_common.c files can validly access an object's
redzones, so these files should not be instrumented.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
mm/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/Makefile b/mm/Makefile
index 6a9c3f8..59cc184 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -3,6 +3,8 @@
#
KASAN_SANITIZE_page_alloc.o := n
+KASAN_SANITIZE_slab_common.o := n
+KASAN_SANITIZE_slub.o := n
mmu-y := nommu.o
mmu-$(CONFIG_MMU) := gup.o highmem.o madvise.o memory.o mincore.o \
--
1.8.5.5
Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region.
Mark freed pages as inaccessible.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
include/linux/kasan.h | 6 ++++++
mm/Makefile | 2 ++
mm/kasan/kasan.c | 18 ++++++++++++++++++
mm/kasan/kasan.h | 1 +
mm/kasan/report.c | 7 +++++++
mm/page_alloc.c | 4 ++++
6 files changed, 38 insertions(+)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7efc3eb..4adc0a1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -17,6 +17,9 @@ void kasan_disable_local(void);
void kasan_alloc_shadow(void);
void kasan_init_shadow(void);
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
#else /* CONFIG_KASAN */
static inline void unpoison_shadow(const void *address, size_t size) {}
@@ -28,6 +31,9 @@ static inline void kasan_disable_local(void) {}
static inline void kasan_init_shadow(void) {}
static inline void kasan_alloc_shadow(void) {}
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
#endif /* CONFIG_KASAN */
#endif /* LINUX_KASAN_H */
diff --git a/mm/Makefile b/mm/Makefile
index dbe9a22..6a9c3f8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -2,6 +2,8 @@
# Makefile for the linux memory manager.
#
+KASAN_SANITIZE_page_alloc.o := n
+
mmu-y := nommu.o
mmu-$(CONFIG_MMU) := gup.o highmem.o madvise.o memory.o mincore.o \
mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e2cd345..109478e 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -177,6 +177,24 @@ void __init kasan_init_shadow(void)
}
}
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (likely(page && !PageHighMem(page)))
+ unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+ if (unlikely(!kasan_initialized))
+ return;
+
+ if (likely(!PageHighMem(page)))
+ poison_shadow(page_address(page), PAGE_SIZE << order, KASAN_FREE_PAGE);
+}
+
void *kasan_memcpy(void *dst, const void *src, size_t len)
{
if (unlikely(len == 0))
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 711ae4f..be9597e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,7 @@
#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 2430e05..6ef9e57 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -46,6 +46,9 @@ static void print_error_description(struct access_info *info)
case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
bug_type = "buffer overflow";
break;
+ case KASAN_FREE_PAGE:
+ bug_type = "use after free";
+ break;
case KASAN_SHADOW_GAP:
bug_type = "wild memory access";
break;
@@ -67,6 +70,10 @@ static void print_address_description(struct access_info *info)
page = virt_to_page(info->access_addr);
switch (shadow_val) {
+ case KASAN_FREE_PAGE:
+ dump_page(page, "kasan error");
+ dump_stack();
+ break;
case KASAN_SHADOW_GAP:
pr_err("No metainfo is available for this access.\n");
dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c9eeec..67833d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -61,6 +61,7 @@
#include <linux/page-debug-flags.h>
#include <linux/hugetlb.h>
#include <linux/sched/rt.h>
+#include <linux/kasan.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
@@ -747,6 +748,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
trace_mm_page_free(page, order);
kmemcheck_free_shadow(page, order);
+ kasan_free_pages(page, order);
if (PageAnon(page))
page->mapping = NULL;
@@ -2807,6 +2809,7 @@ out:
if (unlikely(!page && read_mems_allowed_retry(cpuset_mems_cookie)))
goto retry_cpuset;
+ kasan_alloc_pages(page, order);
return page;
}
EXPORT_SYMBOL(__alloc_pages_nodemask);
@@ -6415,6 +6418,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
if (end != outer_end)
free_contig_range(end, outer_end - end);
+ kasan_alloc_pages(pfn_to_page(start), end - start);
done:
undo_isolate_page_range(pfn_max_align_down(start),
pfn_max_align_up(end), migratetype);
--
1.8.5.5
This patch initializes the shadow area after it has been allocated by arch code.
All low memory is marked as accessible except the shadow area itself (sketched below).
Later free_all_bootmem() will release pages to the buddy allocator
and these pages will be marked as inaccessible, until somebody
allocates them.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
init/main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/init/main.c b/init/main.c
index bb1aed9..d06a636 100644
--- a/init/main.c
+++ b/init/main.c
@@ -78,6 +78,7 @@
#include <linux/context_tracking.h>
#include <linux/random.h>
#include <linux/list.h>
+#include <linux/kasan.h>
#include <asm/io.h>
#include <asm/bugs.h>
@@ -549,7 +550,7 @@ asmlinkage __visible void __init start_kernel(void)
set_init_arg);
jump_label_init();
-
+ kasan_init_shadow();
/*
* These use large bootmem allocations and must precede
* kmem_cache_init()
--
1.8.5.5
Address sanitizer for kernel (kasan) is a dynamic memory error detector.
The main features of kasan are:
- it is based on compiler instrumentation (fast),
- it detects out-of-bounds accesses for both writes and reads,
- it provides use-after-free detection.
This patch only adds the infrastructure for the kernel address sanitizer; it's not
available for use yet. The idea and some of the code were borrowed from [1].
This feature requires a pretty fresh GCC (revision r211699 from 2014-06-16 or
later).
Implementation details:
The main idea of KASAN is to use shadow memory to record whether each byte of memory
is safe to access or not, and use compiler's instrumentation to check the shadow memory
on each memory access.
Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
mapping with a scale and offset to translate a memory address to its corresponding
shadow address.
Here is the function that translates an address to its corresponding shadow address:
unsigned long kasan_mem_to_shadow(unsigned long addr)
{
return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT)
+ kasan_shadow_start;
}
where KASAN_SHADOW_SCALE_SHIFT = 3.
So for every 8 bytes of low memory there is one corresponding byte of shadow memory.
The following encoding is used for each shadow byte: 0 means that all 8 bytes of the
corresponding memory region are valid for access; k (1 <= k <= 7) means that
the first k bytes are valid for access, and the other (8 - k) bytes are not;
any negative value indicates that the entire 8-byte region is inaccessible.
Different negative values are used to distinguish between different kinds of
inaccessible memory (redzones, freed memory) (see mm/kasan/kasan.h).
To be able to detect accesses to bad memory we need a special compiler.
Such a compiler inserts specific function calls (__asan_load*(addr), __asan_store*(addr))
before each memory access of size 1, 2, 4, 8 or 16.
These functions check whether the memory region is valid to access by consulting the
corresponding shadow memory. If the access is not valid, an error is printed.
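To illustrate, a store like the following (a conceptual sketch, not the exact
output of any particular GCC revision):

	p[i] = 0x42;

is compiled roughly as:

	__asan_store1((unsigned long)&p[i]);
	p[i] = 0x42;

so the shadow check runs before the real access.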
[1] https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerForKernel
Signed-off-by: Andrey Ryabinin <[email protected]>
---
Documentation/kasan.txt | 224 +++++++++++++++++++++++++++++++++++++
Makefile | 8 +-
commit | 3 +
include/linux/kasan.h | 33 ++++++
include/linux/sched.h | 4 +
lib/Kconfig.debug | 2 +
lib/Kconfig.kasan | 20 ++++
mm/Makefile | 1 +
mm/kasan/Makefile | 3 +
mm/kasan/kasan.c | 292 ++++++++++++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 36 ++++++
mm/kasan/report.c | 157 ++++++++++++++++++++++++++
scripts/Makefile.lib | 10 ++
13 files changed, 792 insertions(+), 1 deletion(-)
create mode 100644 Documentation/kasan.txt
create mode 100644 commit
create mode 100644 include/linux/kasan.h
create mode 100644 lib/Kconfig.kasan
create mode 100644 mm/kasan/Makefile
create mode 100644 mm/kasan/kasan.c
create mode 100644 mm/kasan/kasan.h
create mode 100644 mm/kasan/report.c
diff --git a/Documentation/kasan.txt b/Documentation/kasan.txt
new file mode 100644
index 0000000..141391ba
--- /dev/null
+++ b/Documentation/kasan.txt
@@ -0,0 +1,224 @@
+Kernel address sanitizer
+========================
+
+0. Overview
+===========
+
+Address sanitizer for kernel (KASAN) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs.
+
+KASAN is better than CONFIG_DEBUG_PAGEALLOC because it:
+ - is based on compiler instrumentation (fast),
+ - detects OOB for both writes and reads,
+ - provides UAF detection,
+ - prints informative reports.
+
+KASAN uses compiler instrumentation to check every memory access; therefore you
+will need a special compiler: GCC >= 4.10.0.
+
+Currently KASAN is supported on the x86/x86_64/arm architectures and requires the
+kernel to be built with the SLUB allocator.
+
+1. Usage
+=========
+
+KASAN requires the kernel to be built with a special compiler (GCC >= 4.10.0).
+
+To enable KASAN, configure the kernel with:
+
+	CONFIG_KASAN=y
+
+and, to instrument the entire kernel:
+
+	CONFIG_KASAN_SANITIZE_ALL=y
+
+Currently KASAN works only with SLUB. It is highly recommended to run KASAN with
+CONFIG_SLUB_DEBUG=y and 'slub_debug=U'. This enables user tracking (free and alloc traces).
+There is no need to enable redzoning since KASAN detects accesses to the user
+tracking structs, so they effectively act as redzones.
+
+To enable instrumentation for only specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+ For a single file (e.g. main.o):
+ KASAN_SANITIZE_main.o := y
+
+ For all files in one directory:
+ KASAN_SANITIZE := y
+
+To exclude files from being instrumented even when CONFIG_KASAN_SANITIZE_ALL
+is specified, use:
+
+ KASAN_SANITIZE_main.o := n
+ and:
+ KASAN_SANITIZE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+1.1 Error reports
+=================
+
+A typical buffer overflow report looks like this:
+
+==================================================================
+AddressSanitizer: buffer overflow in kasan_kmalloc_oob_rigth+0x6a/0x7a at addr c6006f1b
+=============================================================================
+BUG kmalloc-128 (Not tainted): kasan error
+-----------------------------------------------------------------------------
+
+Disabling lock debugging due to kernel taint
+INFO: Allocated in kasan_kmalloc_oob_rigth+0x2c/0x7a age=5 cpu=0 pid=1
+ __slab_alloc.constprop.72+0x64f/0x680
+ kmem_cache_alloc+0xa8/0xe0
+ kasan_kmalloc_oob_rigth+0x2c/0x7a
+ kasan_tests_init+0x8/0xc
+ do_one_initcall+0x85/0x1a0
+ kernel_init_freeable+0x1f1/0x279
+ kernel_init+0x8/0xd0
+ ret_from_kernel_thread+0x21/0x30
+INFO: Slab 0xc7f3d0c0 objects=14 used=2 fp=0xc6006120 flags=0x5000080
+INFO: Object 0xc6006ea0 @offset=3744 fp=0xc6006d80
+
+Bytes b4 c6006e90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ea0: 80 6d 00 c6 00 00 00 00 00 00 00 00 00 00 00 00 .m..............
+Object c6006eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ec0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ed0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ee0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006ef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+Object c6006f10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
+CPU: 0 PID: 1 Comm: swapper/0 Tainted: G B 3.16.0-rc3-next-20140704+ #216
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
+ 00000000 00000000 c6006ea0 c6889e30 c1c4446f c6801b40 c6889e48 c11c3f32
+ c6006000 c6801b40 c7f3d0c0 c6006ea0 c6889e68 c11c4ff5 c6801b40 c1e44906
+ c1e11352 c7f3d0c0 c6889efc c6801b40 c6889ef4 c11ccb78 c1e11352 00000286
+Call Trace:
+ [<c1c4446f>] dump_stack+0x4b/0x75
+ [<c11c3f32>] print_trailer+0xf2/0x180
+ [<c11c4ff5>] object_err+0x25/0x30
+ [<c11ccb78>] kasan_report_error+0xf8/0x380
+ [<c1c57940>] ? need_resched+0x21/0x25
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c11cb92b>] ? poison_shadow+0x2b/0x30
+ [<c1f82763>] ? kasan_kmalloc_oob_rigth+0x7a/0x7a
+ [<c11cbacc>] __asan_store1+0x9c/0xa0
+ [<c1f82753>] ? kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f82753>] kasan_kmalloc_oob_rigth+0x6a/0x7a
+ [<c1f8276b>] kasan_tests_init+0x8/0xc
+ [<c1000435>] do_one_initcall+0x85/0x1a0
+ [<c1f6f508>] ? repair_env_string+0x23/0x66
+ [<c1f6f4e5>] ? initcall_blacklist+0x85/0x85
+ [<c10c9883>] ? parse_args+0x33/0x450
+ [<c1f6fdb7>] kernel_init_freeable+0x1f1/0x279
+ [<c1000558>] kernel_init+0x8/0xd0
+ [<c1c578c1>] ret_from_kernel_thread+0x21/0x30
+ [<c1000550>] ? do_one_initcall+0x1a0/0x1a0
+Write of size 1 by thread T1:
+Memory state around the buggy address:
+ c6006c80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006d80: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e00: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
+ c6006e80: fd fd fd fd 00 00 00 00 00 00 00 00 00 00 00 00
+>c6006f00: 00 00 00 03 fc fc fc fc fc fc fc fc fc fc fc fc
+ ^
+ c6006f80: fc fc fc fc fc fc fc fc fd fd fd fd fd fd fd fd
+ c6007000: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
+ c6007080: fc fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00
+ c6007100: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
+ c6007180: fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 00
+==================================================================
+
+In the last section the report shows the memory state around the accessed address.
+Reading this part requires some more understanding of how KASAN works.
+
+Each KASAN_SHADOW_SCALE_SIZE bytes of memory can be marked as addressable,
+partially addressable, freed or they can be part of a redzone.
+If bytes are marked as addressable that means that they belong to some
+allocated memory block and it is possible to read or modify any of these
+bytes. Addressable KASAN_SHADOW_SCALE_SIZE bytes are marked by 0 in the report.
+When only the first N bytes of KASAN_SHADOW_SCALE_SIZE belong to an allocated
+memory block, these bytes are partially addressable and marked by 'N'.
+
+The markers of inaccessible bytes can be found in the mm/kasan/kasan.h header:
+
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
+#define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */
+#define KASAN_SLAB_REDZONE 0xFD /* Slab page redzone, does not belong to any slub object */
+#define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */
+#define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */
+#define KASAN_SLAB_FREE 0xFA /* free slab page */
+#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
+
+In the report above, the arrow points to the shadow byte 03, which means that the
+accessed address lies in a partially addressable 8-byte region.
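+
+To make this concrete, here is the arithmetic for the report above: the bad
+access hits address c6006f1b, whose 8-byte granule starts at c6006f18. The
+shadow byte for that granule (the 03 above) says that only the first 3 bytes
+of the granule are addressable. The offset of the access within the granule
+is c6006f1b & 7 = 3, and 3 >= 3, so the accessed byte is poisoned and the
+access is reported.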
+
+
+2. Implementation details
+========================
+
+2.1. Shadow memory
+==================
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use instrumentation to check the shadow memory on each memory
+access.
+
+AddressSanitizer dedicates one-eighth of the low memory to its shadow
+memory and uses direct mapping with a scale and offset to translate a memory
+address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow address:
+
+unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+ return ((addr - PAGE_OFFSET) >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_START;
+}
+
+where KASAN_SHADOW_SCALE_SHIFT = 3.
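+
+For example, on 32-bit x86 with PAGE_OFFSET = 0xc0000000 (an illustrative
+value; the real constants come from the arch code), the address 0xc6006f1b
+from the report above maps to the shadow byte at:
+
+	((0xc6006f1b - 0xc0000000) >> 3) + KASAN_SHADOW_START
+	= 0x00c00de3 + KASAN_SHADOW_START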
+
+The figure below shows the address space layout. The memory is split
+into two parts (low and high) which map to the corresponding shadow regions.
+Applying the shadow mapping to addresses in the shadow region gives us
+addresses in the Bad region. In this implementation, the bytes that the
+shadow region maps to are poisoned with KASAN_SHADOW_GAP by
+kasan_init_shadow(), so a stray access into the shadow itself is reported
+as a wild memory access.
+
+|--------| |--------|
+| Memory |---- | Memory |
+|--------| \ |--------|
+| Shadow |-- -->| Shadow |
+|--------| \ |--------|
+| Bad | ---->| Bad |
+|--------| / |--------|
+| Shadow |-- -->| Shadow |
+|--------| / |--------|
+| Memory |---- | Memory |
+|--------| |--------|
+
+Each shadow byte corresponds to 8 bytes of the main memory. We use the
+following encoding for each shadow byte: 0 means that all 8 bytes of the
+corresponding memory region are addressable; k (1 <= k <= 7) means that
+the first k bytes are addressable, and the other (8 - k) bytes are not;
+any negative value indicates that the entire 8-byte word is unaddressable.
+We use different negative values to distinguish between different kinds of
+unaddressable memory (redzones, freed memory) (see mm/kasan/kasan.h).
+
+Poisoning or unpoisoning a byte in the main memory means writing some special
+value into the corresponding shadow memory. This value indicates whether the
+byte is addressable or not.
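+
+For example, an allocator hook might do the following (a sketch built on the
+helpers this patch adds in mm/kasan/kasan.c; KASAN_KMALLOC_FREE is one of the
+markers listed above):
+
+	/* on allocation: mark the object's bytes as addressable */
+	unpoison_shadow(object, size);
+
+	/* on free: mark the whole object as freed memory */
+	poison_shadow(object, size, KASAN_KMALLOC_FREE);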
+
+
+2.2. Instrumentation
+====================
+
+Since some functions which do memory accesses (such as memset, memmove and
+memcpy) are written in assembly, the compiler can't instrument them.
+Therefore we replace these functions with our own instrumented functions
+(kasan_memset, kasan_memcpy, kasan_memmove).
+In some circumstances you may need to use the original functions;
+in such a case, insert #undef KASAN_HOOKS before the includes.
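+
+For example, a file that needs the raw functions could start with (a
+hypothetical usage):
+
+	#undef KASAN_HOOKS
+	#include <linux/string.h>
+
+after which a plain memcpy(dst, src, len) in that file calls the original,
+uninstrumented memcpy.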
+
diff --git a/Makefile b/Makefile
index 64ab7b3..08a07f2 100644
--- a/Makefile
+++ b/Makefile
@@ -384,6 +384,12 @@ LDFLAGS_MODULE =
CFLAGS_KERNEL =
AFLAGS_KERNEL =
CFLAGS_GCOV = -fprofile-arcs -ftest-coverage
+CFLAGS_KASAN = -fsanitize=address --param asan-stack=0 \
+ --param asan-use-after-return=0 \
+ --param asan-globals=0 \
+ --param asan-memintrin=0 \
+ --param asan-instrumentation-with-call-threshold=0 \
+ -DKASAN_HOOKS
# Use USERINCLUDE when you must reference the UAPI directories only.
@@ -428,7 +434,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL UTS_MACHINE
export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
diff --git a/commit b/commit
new file mode 100644
index 0000000..134f4dd
--- /dev/null
+++ b/commit
@@ -0,0 +1,3 @@
+
+I'm working on address sanitizer for kernel.
+fuck this bloody.
\ No newline at end of file
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
new file mode 100644
index 0000000..7efc3eb
--- /dev/null
+++ b/include/linux/kasan.h
@@ -0,0 +1,33 @@
+#ifndef _LINUX_KASAN_H
+#define _LINUX_KASAN_H
+
+#include <linux/types.h>
+
+struct kmem_cache;
+struct page;
+
+#ifdef CONFIG_KASAN
+
+void unpoison_shadow(const void *address, size_t size);
+
+void kasan_enable_local(void);
+void kasan_disable_local(void);
+
+/* Reserves shadow memory. */
+void kasan_alloc_shadow(void);
+void kasan_init_shadow(void);
+
+#else /* CONFIG_KASAN */
+
+static inline void unpoison_shadow(const void *address, size_t size) {}
+
+static inline void kasan_enable_local(void) {}
+static inline void kasan_disable_local(void) {}
+
+/* Reserves shadow memory. */
+static inline void kasan_init_shadow(void) {}
+static inline void kasan_alloc_shadow(void) {}
+
+#endif /* CONFIG_KASAN */
+
+#endif /* LINUX_KASAN_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 322d4fc..286650a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1471,6 +1471,10 @@ struct task_struct {
gfp_t lockdep_reclaim_gfp;
#endif
+#ifdef CONFIG_KASAN
+ int kasan_depth;
+#endif
+
/* journalling filesystem info */
void *journal_info;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index cf9cf82..67a4dfc 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -611,6 +611,8 @@ config DEBUG_STACKOVERFLOW
source "lib/Kconfig.kmemcheck"
+source "lib/Kconfig.kasan"
+
endmenu # "Memory Debugging"
config DEBUG_SHIRQ
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
new file mode 100644
index 0000000..2bfff78
--- /dev/null
+++ b/lib/Kconfig.kasan
@@ -0,0 +1,20 @@
+config HAVE_ARCH_KASAN
+ bool
+
+if HAVE_ARCH_KASAN
+
+config KASAN
+ bool "AddressSanitizer: dynamic memory error detector"
+ default n
+ help
+	  Enables AddressSanitizer - a dynamic memory error detector
+	  that finds out-of-bounds and use-after-free bugs.
+
+config KASAN_SANITIZE_ALL
+ bool "Instrument entire kernel"
+ depends on KASAN
+ default y
+ help
+	  This enables compiler instrumentation for the entire kernel.
+
+endif
diff --git a/mm/Makefile b/mm/Makefile
index e4a97bd..dbe9a22 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -64,3 +64,4 @@ obj-$(CONFIG_ZPOOL) += zpool.o
obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
obj-$(CONFIG_CMA) += cma.o
+obj-$(CONFIG_KASAN) += kasan/
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
new file mode 100644
index 0000000..46d44bb
--- /dev/null
+++ b/mm/kasan/Makefile
@@ -0,0 +1,3 @@
+KASAN_SANITIZE := n
+
+obj-y := kasan.o report.o
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
new file mode 100644
index 0000000..e2cd345
--- /dev/null
+++ b/mm/kasan/kasan.c
@@ -0,0 +1,292 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/export.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h>
+
+#include "kasan.h"
+#include "../slab.h"
+
+static bool __read_mostly kasan_initialized;
+
+unsigned long kasan_shadow_start;
+unsigned long kasan_shadow_end;
+
+/* equals kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT) */
+unsigned long __read_mostly kasan_shadow_offset; /* it's not a very good name for this variable */
+
+
+static inline bool addr_is_in_mem(unsigned long addr)
+{
+ return likely(addr >= PAGE_OFFSET && addr < (unsigned long)high_memory);
+}
+
+void kasan_enable_local(void)
+{
+ if (likely(kasan_initialized))
+ current->kasan_depth--;
+}
+
+void kasan_disable_local(void)
+{
+ if (likely(kasan_initialized))
+ current->kasan_depth++;
+}
+
+static inline bool kasan_enabled(void)
+{
+ return likely(kasan_initialized
+ && !current->kasan_depth);
+}
+
+/*
+ * Poisons the shadow memory for 'size' bytes starting from 'addr'.
+ * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE.
+ */
+static void poison_shadow(const void *address, size_t size, u8 value)
+{
+ unsigned long shadow_start, shadow_end;
+ unsigned long addr = (unsigned long)address;
+
+ shadow_start = kasan_mem_to_shadow(addr);
+ shadow_end = kasan_mem_to_shadow(addr + size);
+
+ memset((void *)shadow_start, value, shadow_end - shadow_start);
+}
+
+void unpoison_shadow(const void *address, size_t size)
+{
+ poison_shadow(address, size, 0);
+
+ if (size & KASAN_SHADOW_MASK) {
+ u8 *shadow = (u8 *)kasan_mem_to_shadow((unsigned long)address
+ + size);
+ *shadow = size & KASAN_SHADOW_MASK;
+ }
+}
+
+static __always_inline bool address_is_poisoned(unsigned long addr)
+{
+	s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);
+
+	if (shadow_value != 0) {
+		/* Offset of the accessed byte within its 8-byte granule. */
+		s8 last_byte = addr & KASAN_SHADOW_MASK;
+
+		/*
+		 * A positive shadow value N means that only the first N
+		 * bytes of the granule are addressable; a negative value
+		 * poisons the whole granule, so any offset compares as
+		 * poisoned.
+		 */
+		return last_byte >= shadow_value;
+	}
+	return false;
+}
+
+static __always_inline unsigned long memory_is_poisoned(unsigned long addr,
+ size_t size)
+{
+ unsigned long end = addr + size;
+ for (; addr < end; addr++)
+ if (unlikely(address_is_poisoned(addr)))
+ return addr;
+ return 0;
+}
+
+static __always_inline void check_memory_region(unsigned long addr,
+ size_t size, bool write)
+{
+ unsigned long access_addr;
+ struct access_info info;
+
+ if (!kasan_enabled())
+ return;
+
+ if (unlikely(addr < TASK_SIZE)) {
+ info.access_addr = addr;
+ info.access_size = size;
+ info.is_write = write;
+ info.ip = _RET_IP_;
+ kasan_report_user_access(&info);
+ return;
+ }
+
+ if (!addr_is_in_mem(addr))
+ return;
+
+ access_addr = memory_is_poisoned(addr, size);
+ if (likely(access_addr == 0))
+ return;
+
+ info.access_addr = access_addr;
+ info.access_size = size;
+ info.is_write = write;
+ info.ip = _RET_IP_;
+ kasan_report_error(&info);
+}
+
+void __init kasan_alloc_shadow(void)
+{
+ unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
+ unsigned long shadow_size;
+ phys_addr_t shadow_phys_start;
+
+ shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
+
+ shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);
+ if (!shadow_phys_start) {
+ pr_err("Unable to reserve shadow memory\n");
+ return;
+ }
+
+ kasan_shadow_start = (unsigned long)phys_to_virt(shadow_phys_start);
+ kasan_shadow_end = kasan_shadow_start + shadow_size;
+
+ pr_info("reserved shadow memory: [0x%lx - 0x%lx]\n",
+ kasan_shadow_start, kasan_shadow_end);
+ kasan_shadow_offset = kasan_shadow_start -
+ (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
+}
+
+void __init kasan_init_shadow(void)
+{
+ if (kasan_shadow_start) {
+ unpoison_shadow((void *)PAGE_OFFSET,
+ (size_t)(kasan_shadow_start - PAGE_OFFSET));
+ poison_shadow((void *)kasan_shadow_start,
+ kasan_shadow_end - kasan_shadow_start,
+ KASAN_SHADOW_GAP);
+ unpoison_shadow((void *)kasan_shadow_end,
+ (size_t)(high_memory - kasan_shadow_end));
+ kasan_initialized = true;
+ pr_info("shadow memory initialized\n");
+ }
+}
+
+void *kasan_memcpy(void *dst, const void *src, size_t len)
+{
+ if (unlikely(len == 0))
+ return dst;
+
+ check_memory_region((unsigned long)src, len, false);
+ check_memory_region((unsigned long)dst, len, true);
+
+ return memcpy(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memcpy);
+
+void *kasan_memset(void *ptr, int val, size_t len)
+{
+ if (unlikely(len == 0))
+ return ptr;
+
+ check_memory_region((unsigned long)ptr, len, true);
+
+ return memset(ptr, val, len);
+}
+EXPORT_SYMBOL(kasan_memset);
+
+void *kasan_memmove(void *dst, const void *src, size_t len)
+{
+ if (unlikely(len == 0))
+ return dst;
+
+ check_memory_region((unsigned long)src, len, false);
+ check_memory_region((unsigned long)dst, len, true);
+
+ return memmove(dst, src, len);
+}
+EXPORT_SYMBOL(kasan_memmove);
+
+void __asan_load1(unsigned long addr)
+{
+ check_memory_region(addr, 1, false);
+}
+EXPORT_SYMBOL(__asan_load1);
+
+void __asan_load2(unsigned long addr)
+{
+ check_memory_region(addr, 2, false);
+}
+EXPORT_SYMBOL(__asan_load2);
+
+void __asan_load4(unsigned long addr)
+{
+ check_memory_region(addr, 4, false);
+}
+EXPORT_SYMBOL(__asan_load4);
+
+void __asan_load8(unsigned long addr)
+{
+ check_memory_region(addr, 8, false);
+}
+EXPORT_SYMBOL(__asan_load8);
+
+void __asan_load16(unsigned long addr)
+{
+ check_memory_region(addr, 16, false);
+}
+EXPORT_SYMBOL(__asan_load16);
+
+void __asan_loadN(unsigned long addr, size_t size)
+{
+ check_memory_region(addr, size, false);
+}
+EXPORT_SYMBOL(__asan_loadN);
+
+void __asan_store1(unsigned long addr)
+{
+ check_memory_region(addr, 1, true);
+}
+EXPORT_SYMBOL(__asan_store1);
+
+void __asan_store2(unsigned long addr)
+{
+ check_memory_region(addr, 2, true);
+}
+EXPORT_SYMBOL(__asan_store2);
+
+void __asan_store4(unsigned long addr)
+{
+ check_memory_region(addr, 4, true);
+}
+EXPORT_SYMBOL(__asan_store4);
+
+void __asan_store8(unsigned long addr)
+{
+ check_memory_region(addr, 8, true);
+}
+EXPORT_SYMBOL(__asan_store8);
+
+void __asan_store16(unsigned long addr)
+{
+ check_memory_region(addr, 16, true);
+}
+EXPORT_SYMBOL(__asan_store16);
+
+void __asan_storeN(unsigned long addr, size_t size)
+{
+ check_memory_region(addr, size, true);
+}
+EXPORT_SYMBOL(__asan_storeN);
+
+/* to silence compiler complaints */
+void __asan_init_v3(void) {}
+EXPORT_SYMBOL(__asan_init_v3);
+
+void __asan_handle_no_return(void) {}
+EXPORT_SYMBOL(__asan_handle_no_return);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
new file mode 100644
index 0000000..711ae4f
--- /dev/null
+++ b/mm/kasan/kasan.h
@@ -0,0 +1,36 @@
+#ifndef __MM_KASAN_KASAN_H
+#define __MM_KASAN_KASAN_H
+
+#define KASAN_SHADOW_SCALE_SHIFT 3
+#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
+#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
+
+#define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
+
+struct access_info {
+ unsigned long access_addr;
+ size_t access_size;
+ bool is_write;
+ unsigned long ip;
+};
+
+extern unsigned long kasan_shadow_start;
+extern unsigned long kasan_shadow_end;
+extern unsigned long kasan_shadow_offset;
+
+void kasan_report_error(struct access_info *info);
+void kasan_report_user_access(struct access_info *info);
+
+static inline unsigned long kasan_mem_to_shadow(unsigned long addr)
+{
+ return (addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + kasan_shadow_offset;
+}
+
+static inline unsigned long kasan_shadow_to_mem(unsigned long shadow_addr)
+{
+ return ((shadow_addr - kasan_shadow_start)
+ << KASAN_SHADOW_SCALE_SHIFT) + PAGE_OFFSET;
+}
+
+#endif
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
new file mode 100644
index 0000000..2430e05
--- /dev/null
+++ b/mm/kasan/report.c
@@ -0,0 +1,157 @@
+/*
+ *
+ * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <[email protected]>
+ *
+ * Some of code borrowed from https://github.com/xairy/linux by
+ * Andrey Konovalov <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/stacktrace.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kasan.h>
+#include <linux/memcontrol.h> /* for ../slab.h */
+
+#include "kasan.h"
+#include "../slab.h"
+
+/* Shadow layout customization. */
+#define SHADOW_BYTES_PER_BLOCK 1
+#define SHADOW_BLOCKS_PER_ROW 16
+#define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK)
+#define SHADOW_ROWS_AROUND_ADDR 5
+
+static inline void *virt_to_obj(struct kmem_cache *s, void *slab_start, void *x)
+{
+ return x - ((x - slab_start) % s->size);
+}
+
+static void print_error_description(struct access_info *info)
+{
+ const char *bug_type = "unknown crash";
+ u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+ switch (shadow_val) {
+ case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
+ bug_type = "buffer overflow";
+ break;
+ case KASAN_SHADOW_GAP:
+ bug_type = "wild memory access";
+ break;
+ }
+
+ pr_err("AddressSanitizer: %s in %pS at addr %p\n",
+ bug_type, (void *)info->ip,
+ (void *)info->access_addr);
+}
+
+static void print_address_description(struct access_info *info)
+{
+ void *object;
+ struct kmem_cache *cache;
+ void *slab_start;
+ struct page *page;
+ u8 shadow_val = *(u8 *)kasan_mem_to_shadow(info->access_addr);
+
+ page = virt_to_page(info->access_addr);
+
+ switch (shadow_val) {
+ case KASAN_SHADOW_GAP:
+ pr_err("No metainfo is available for this access.\n");
+ dump_stack();
+ break;
+ default:
+ WARN_ON(1);
+ }
+
+ pr_err("%s of size %zu by thread T%d:\n",
+ info->is_write ? "Write" : "Read",
+ info->access_size, current->pid);
+}
+
+static bool row_is_guilty(unsigned long row, unsigned long guilty)
+{
+ return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW);
+}
+
+static void print_shadow_pointer(unsigned long row, unsigned long shadow,
+ char *output)
+{
+ /* The length of ">ff00ff00ff00ff00: " is 3 + (BITS_PER_LONG/8)*2 chars. */
+ unsigned long space_count = 3 + (BITS_PER_LONG >> 2) + (shadow - row)*2 +
+ (shadow - row) / SHADOW_BYTES_PER_BLOCK;
+ unsigned long i;
+
+ for (i = 0; i < space_count; i++)
+ output[i] = ' ';
+ output[space_count] = '^';
+ output[space_count + 1] = '\0';
+}
+
+static void print_shadow_for_address(unsigned long addr)
+{
+ int i;
+ unsigned long shadow = kasan_mem_to_shadow(addr);
+ unsigned long aligned_shadow = round_down(shadow, SHADOW_BYTES_PER_ROW)
+ - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW;
+
+ pr_err("Memory state around the buggy address:\n");
+
+ for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) {
+ unsigned long kaddr = kasan_shadow_to_mem(aligned_shadow);
+ char buffer[100];
+
+ snprintf(buffer, sizeof(buffer),
+ (i == 0) ? ">%lx: " : " %lx: ", kaddr);
+
+ print_hex_dump(KERN_ERR, buffer,
+ DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1,
+ (void *)aligned_shadow, SHADOW_BYTES_PER_ROW, 0);
+
+ if (row_is_guilty(aligned_shadow, shadow)) {
+ print_shadow_pointer(aligned_shadow, shadow, buffer);
+ pr_err("%s\n", buffer);
+ }
+ aligned_shadow += SHADOW_BYTES_PER_ROW;
+ }
+}
+
+void kasan_report_error(struct access_info *info)
+{
+ kasan_disable_local();
+ pr_err("================================="
+ "=================================\n");
+ print_error_description(info);
+ print_address_description(info);
+ print_shadow_for_address(info->access_addr);
+ pr_err("================================="
+ "=================================\n");
+ kasan_enable_local();
+}
+
+void kasan_report_user_access(struct access_info *info)
+{
+ kasan_disable_local();
+ pr_err("================================="
+ "=================================\n");
+ pr_err("AddressSanitizer: user-memory-access on address %lx\n",
+ info->access_addr);
+ pr_err("%s of size %zu by thread T%d:\n",
+ info->is_write ? "Write" : "Read",
+ info->access_size, current->pid);
+ dump_stack();
+ pr_err("================================="
+ "=================================\n");
+ kasan_enable_local();
+}
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 260bf8a..2bec69e 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -119,6 +119,16 @@ _c_flags += $(if $(patsubst n%,, \
$(CFLAGS_GCOV))
endif
+#
+# Enable address sanitizer flags for the kernel, except for files or directories
+# we don't want to check (controlled by the variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE)
+#
+ifeq ($(CONFIG_KASAN),y)
+_c_flags += $(if $(patsubst n%,, \
+ $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)$(CONFIG_KASAN_SANITIZE_ALL)), \
+ $(CFLAGS_KASAN))
+endif
+
# If building the kernel in a separate objtree expand all occurrences
# of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
--
1.8.5.5
Now everything in x86 code is ready for kasan. Enable it.
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 8657c06..f9863b3 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -132,6 +132,7 @@ config X86
select HAVE_CC_STACKPROTECTOR
select GENERIC_CPU_AUTOPROBE
select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_ARCH_KASAN
config INSTRUCTION_DECODER
def_bool y
--
1.8.5.5
Since the functions memset, memmove and memcpy are written in assembly,
the compiler can't instrument memory accesses inside them.
This patch replaces these functions with our own instrumented
functions (kasan_mem*) when CONFIG_KASAN=y.
In rare circumstances you may need to use the original functions;
in such a case, put #undef KASAN_HOOKS before the includes.
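Concretely, with KASAN_HOOKS defined, a call such as:

	memcpy(dst, src, len);

is turned by the preprocessor (via the macros below) into:

	kasan_memcpy((dst), (src), (len));

which checks both buffers against the shadow memory before delegating to
the real memcpy.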
Signed-off-by: Andrey Ryabinin <[email protected]>
---
arch/x86/include/asm/string_32.h | 28 ++++++++++++++++++++++++++++
arch/x86/include/asm/string_64.h | 24 ++++++++++++++++++++++++
arch/x86/lib/Makefile | 2 ++
3 files changed, 54 insertions(+)
diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
index 3d3e835..a86615a 100644
--- a/arch/x86/include/asm/string_32.h
+++ b/arch/x86/include/asm/string_32.h
@@ -321,6 +321,32 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
: __memset_generic((s), (c), (count)))
#define __HAVE_ARCH_MEMSET
+
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, the compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such a case, put #undef KASAN_HOOKS before the includes.
+ */
+
+#undef memcpy
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#else /* CONFIG_KASAN && KASAN_HOOKS */
+
#if (__GNUC__ >= 4)
#define memset(s, c, count) __builtin_memset(s, c, count)
#else
@@ -331,6 +357,8 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
: __memset((s), (c), (count)))
#endif
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
/*
* find the first occurrence of byte 'c', or 1 past the area if none
*/
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 19e2c46..2af2dbe 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -63,6 +63,30 @@ char *strcpy(char *dest, const char *src);
char *strcat(char *dest, const char *src);
int strcmp(const char *cs, const char *ct);
+#if defined(CONFIG_KASAN) && defined(KASAN_HOOKS)
+
+/*
+ * Since some of the following functions (memset, memmove, memcpy)
+ * are written in assembly, the compiler can't instrument memory accesses
+ * inside them.
+ *
+ * To solve this issue we replace these functions with our own instrumented
+ * functions (kasan_mem*)
+ *
+ * In rare circumstances you may need to use the original functions,
+ * in such a case, put #undef KASAN_HOOKS before the includes.
+ */
+
+void *kasan_memset(void *ptr, int val, size_t len);
+void *kasan_memcpy(void *dst, const void *src, size_t len);
+void *kasan_memmove(void *dst, const void *src, size_t len);
+
+#define memcpy(dst, src, len) kasan_memcpy((dst), (src), (len))
+#define memset(ptr, val, len) kasan_memset((ptr), (val), (len))
+#define memmove(dst, src, len) kasan_memmove((dst), (src), (len))
+
+#endif /* CONFIG_KASAN && KASAN_HOOKS */
+
#endif /* __KERNEL__ */
#endif /* _ASM_X86_STRING_64_H */
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 4d4f96a..d82bc35 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -2,6 +2,8 @@
# Makefile for x86 specific library files.
#
+KASAN_SANITIZE_memcpy_32.o := n
+
inat_tables_script = $(srctree)/arch/x86/tools/gen-insn-attr-x86.awk
inat_tables_maps = $(srctree)/arch/x86/lib/x86-opcode-map.txt
quiet_cmd_inat_tables = GEN $@
--
1.8.5.5
On 07/09/2014 04:00 AM, Andrey Ryabinin wrote:
>
> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
> mapping with a scale and offset to translate a memory address to its corresponding
> shadow address.
>
> Here is function to translate address to corresponding shadow address:
>
> unsigned long kasan_mem_to_shadow(unsigned long addr)
> {
> return ((addr) >> KASAN_SHADOW_SCALE_SHIFT)
> + kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
> }
>
> where KASAN_SHADOW_SCALE_SHIFT = 3.
>
How does that work when memory is sparsely populated?
-hpa
2014-07-12 4:59 GMT+04:00 H. Peter Anvin <[email protected]>:
> On 07/09/2014 04:00 AM, Andrey Ryabinin wrote:
>>
>> Address sanitizer dedicates 1/8 of the low memory to the shadow memory and uses direct
>> mapping with a scale and offset to translate a memory address to its corresponding
>> shadow address.
>>
>> Here is function to translate address to corresponding shadow address:
>>
>> unsigned long kasan_mem_to_shadow(unsigned long addr)
>> {
>> return ((addr) >> KASAN_SHADOW_SCALE_SHIFT)
>> + kasan_shadow_start - (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT);
>> }
>>
>> where KASAN_SHADOW_SCALE_SHIFT = 3.
>>
>
> How does that work when memory is sparsely populated?
>
Sparsemem configurations currently may not work with kasan.
I suppose I will have to move shadow area to vmalloc address space and
make it (shadow) sparse too if needed.
> -hpa
>
--
Best regards,
Andrey Ryabinin
On Sun, 13 Jul 2014, Andrey Ryabinin wrote:
> > How does that work when memory is sparsely populated?
> >
>
> Sparsemem configurations currently may not work with kasan.
> I suppose I will have to move shadow area to vmalloc address space and
> make it (shadow) sparse too if needed.
Well it seems to work with sparsemem / vmemmap? So it's only the non-vmemmapped
configs of sparsemem. vmemmap can also handle holes in memory.
On 07/14/14 19:13, Christoph Lameter wrote:
> On Sun, 13 Jul 2014, Andrey Ryabinin wrote:
>
>>> How does that work when memory is sparsely populated?
>>>
>>
>> Sparsemem configurations currently may not work with kasan.
>> I suppose I will have to move shadow area to vmalloc address space and
>> make it (shadow) sparse too if needed.
>
> Well it seems to work with sparsemem / vmemmap? So it's only the non-vmemmapped
> configs of sparsemem. vmemmap can also handle holes in memory.
>
>
Not sure. This sparsemem/vmemmap thing is kinda new to me, so I need to dig some more
to understand how it interacts with kasan.
As far as I understand, the main problem with sparsemem & kasan is the shadow allocation:

	unsigned long lowmem_size = (unsigned long)high_memory - PAGE_OFFSET;
	shadow_size = lowmem_size >> KASAN_SHADOW_SCALE_SHIFT;
	shadow_phys_start = memblock_alloc(shadow_size, PAGE_SIZE);

If we don't have a single physically contiguous block big enough for the whole
shadow (e.g. ~112MB for 896MB of lowmem), it will fail.