Hello everyone! I'm requesting your comments.
Use-after-free vulnerabilities in the Linux kernel are very popular for
exploitation. A few examples:
https://googleprojectzero.blogspot.com/2018/09/a-cache-invalidation-bug-in-linux.html
https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html?m=1
https://a13xp0p0v.github.io/2020/02/15/CVE-2019-18683.html
Use-after-free exploits usually employ the heap spraying technique.
Generally it aims to put controlled bytes at a predetermined memory
location on the heap. Heap spraying for exploiting use-after-free in
the Linux kernel relies on the fact that on kmalloc(), the slab allocator
returns the address of the memory that was recently freed. So allocating
a kernel object with the same size and controlled contents allows
overwriting the vulnerable freed object.
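For illustration, the typical exploitation pattern looks roughly like this
(a simplified sketch; cache, payload and payload_size are placeholders, and
the lkdtm test in patch 2 implements essentially the same steps):

	void *victim, *spray[128];
	int i;

	victim = kmem_cache_alloc(cache, GFP_KERNEL);
	/* ... the bug frees the object but a dangling pointer remains ... */
	kmem_cache_free(cache, victim);

	/* heap spraying: reclaim the freed slot with controlled data */
	for (i = 0; i < 128; i++) {
		spray[i] = kmem_cache_alloc(cache, GFP_KERNEL);
		memcpy(spray[i], payload, payload_size);
	}
	/* without a quarantine, some spray[i] == victim almost immediately,
	 * so the dangling pointer now points to attacker-controlled bytes */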
I've found an easy way to break heap spraying for use-after-free
exploitation. I simply extracted slab freelist quarantine from KASAN
functionality and called it CONFIG_SLAB_QUARANTINE. Please see patch 1.
If this feature is enabled, freed allocations are stored in the quarantine
and can't be instantly reallocated and overwritten by the exploit
performing heap spraying.
In patch 2 you can see the lkdtm test showing how CONFIG_SLAB_QUARANTINE
prevents immediate reallocation of a freed heap object.
I tested this patch series both for CONFIG_SLUB and CONFIG_SLAB.
CONFIG_SLAB_QUARANTINE disabled:
# echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
lkdtm: Performing direct entry HEAP_SPRAY
lkdtm: Performing heap spraying...
lkdtm: attempt 0: spray alloc addr 00000000f8699c7d vs freed addr 00000000f8699c7d
lkdtm: freed addr is reallocated!
lkdtm: FAIL! Heap spraying succeed :(
CONFIG_SLAB_QUARANTINE enabled:
# echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
lkdtm: Performing direct entry HEAP_SPRAY
lkdtm: Performing heap spraying...
lkdtm: attempt 0: spray alloc addr 000000009cafb63f vs freed addr 00000000173cce94
lkdtm: attempt 1: spray alloc addr 000000003096911f vs freed addr 00000000173cce94
lkdtm: attempt 2: spray alloc addr 00000000da60d755 vs freed addr 00000000173cce94
lkdtm: attempt 3: spray alloc addr 000000000b415070 vs freed addr 00000000173cce94
...
lkdtm: attempt 126: spray alloc addr 00000000e80ef807 vs freed addr 00000000173cce94
lkdtm: attempt 127: spray alloc addr 00000000398fe535 vs freed addr 00000000173cce94
lkdtm: OK! Heap spraying hasn't succeed :)
I did a brief performance evaluation of this feature.
1. Memory consumption. KASAN quarantine uses 1/32 of the memory.
CONFIG_SLAB_QUARANTINE disabled:
# free -m
total used free shared buff/cache available
Mem: 1987 39 1862 10 86 1907
Swap: 0 0 0
CONFIG_SLAB_QUARANTINE enabled:
# free -m
total used free shared buff/cache available
Mem: 1987 140 1760 10 87 1805
Swap: 0 0 0
2. Performance penalty. I used `hackbench -s 256 -l 200 -g 15 -f 25 -P`.
CONFIG_SLAB_QUARANTINE disabled (x86_64, CONFIG_SLUB):
Times: 3.088, 3.103, 3.068, 3.103, 3.107
Mean: 3.0938
Standard deviation: 0.0144
CONFIG_SLAB_QUARANTINE enabled (x86_64, CONFIG_SLUB):
Times: 3.303, 3.329, 3.356, 3.314, 3.292
Mean: 3.3188 (+7.3%)
Standard deviation: 0.0223
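(For reference, the relative slowdown is (3.3188 - 3.0938) / 3.0938 ≈ 0.073,
which is the quoted +7.3%.)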
I would appreciate your feedback!
Best regards,
Alexander
Alexander Popov (2):
mm: Extract SLAB_QUARANTINE from KASAN
lkdtm: Add heap spraying test
drivers/misc/lkdtm/core.c | 1 +
drivers/misc/lkdtm/heap.c | 40 ++++++++++++++
drivers/misc/lkdtm/lkdtm.h | 1 +
include/linux/kasan.h | 107 ++++++++++++++++++++-----------------
include/linux/slab_def.h | 2 +-
include/linux/slub_def.h | 2 +-
init/Kconfig | 11 ++++
mm/Makefile | 3 +-
mm/kasan/Makefile | 2 +
mm/kasan/kasan.h | 75 +++++++++++++-------------
mm/kasan/quarantine.c | 2 +
mm/kasan/slab_quarantine.c | 99 ++++++++++++++++++++++++++++++++++
mm/slub.c | 2 +-
13 files changed, 258 insertions(+), 89 deletions(-)
create mode 100644 mm/kasan/slab_quarantine.c
--
2.26.2
Heap spraying is an exploitation technique that aims to put controlled
bytes at a predetermined memory location on the heap. Heap spraying for
exploiting use-after-free in the Linux kernel relies on the fact that on
kmalloc(), the slab allocator returns the address of the memory that was
recently freed. Allocating a kernel object with the same size and
controlled contents allows overwriting the vulnerable freed object.
Let's extract slab freelist quarantine from KASAN functionality and
call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap
spraying technique used for exploiting use-after-free vulnerabilities
in the kernel code.
If this feature is enabled, freed allocations are stored in the quarantine
and can't be instantly reallocated and overwritten by the exploit
performing heap spraying.
Signed-off-by: Alexander Popov <[email protected]>
---
include/linux/kasan.h | 107 ++++++++++++++++++++-----------------
include/linux/slab_def.h | 2 +-
include/linux/slub_def.h | 2 +-
init/Kconfig | 11 ++++
mm/Makefile | 3 +-
mm/kasan/Makefile | 2 +
mm/kasan/kasan.h | 75 +++++++++++++-------------
mm/kasan/quarantine.c | 2 +
mm/kasan/slab_quarantine.c | 99 ++++++++++++++++++++++++++++++++++
mm/slub.c | 2 +-
10 files changed, 216 insertions(+), 89 deletions(-)
create mode 100644 mm/kasan/slab_quarantine.c
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 087fba34b209..b837216f760c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -42,32 +42,14 @@ void kasan_unpoison_task_stack(struct task_struct *task);
void kasan_alloc_pages(struct page *page, unsigned int order);
void kasan_free_pages(struct page *page, unsigned int order);
-void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags);
-
void kasan_poison_slab(struct page *page);
void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
void kasan_poison_object_data(struct kmem_cache *cache, void *object);
void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
const void *object);
-void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
- gfp_t flags);
void kasan_kfree_large(void *ptr, unsigned long ip);
void kasan_poison_kfree(void *ptr, unsigned long ip);
-void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
- size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
- gfp_t flags);
-
-void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
- gfp_t flags);
-bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
-
-struct kasan_cache {
- int alloc_meta_offset;
- int free_meta_offset;
-};
/*
* These functions provide a special case to support backing module
@@ -107,10 +89,6 @@ static inline void kasan_disable_current(void) {}
static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
static inline void kasan_free_pages(struct page *page, unsigned int order) {}
-static inline void kasan_cache_create(struct kmem_cache *cache,
- unsigned int *size,
- slab_flags_t *flags) {}
-
static inline void kasan_poison_slab(struct page *page) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
@@ -122,17 +100,65 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}
+static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
+static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+static inline void kasan_free_shadow(const struct vm_struct *vm) {}
+static inline void kasan_remove_zero_shadow(void *start, unsigned long size) {}
+static inline void kasan_unpoison_slab(const void *ptr) {}
+
+static inline int kasan_module_alloc(void *addr, size_t size)
+{
+ return 0;
+}
+
+static inline int kasan_add_zero_shadow(void *start, unsigned long size)
+{
+ return 0;
+}
+
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+ return 0;
+}
+
+#endif /* CONFIG_KASAN */
+
+struct kasan_cache {
+ int alloc_meta_offset;
+ int free_meta_offset;
+};
+
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
+
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags);
+void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+ gfp_t flags);
+void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
+ size_t size, gfp_t flags);
+void * __must_check kasan_krealloc(const void *object, size_t new_size,
+ gfp_t flags);
+void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
+ gfp_t flags);
+bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
+
+#else /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
+
+static inline void kasan_cache_create(struct kmem_cache *cache,
+ unsigned int *size,
+ slab_flags_t *flags) {}
+
static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
{
return ptr;
}
-static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
-static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
+
static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
size_t size, gfp_t flags)
{
return (void *)object;
}
+
static inline void *kasan_krealloc(const void *object, size_t new_size,
gfp_t flags)
{
@@ -144,43 +170,28 @@ static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
{
return object;
}
+
static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
unsigned long ip)
{
return false;
}
-
-static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
-static inline void kasan_free_shadow(const struct vm_struct *vm) {}
-
-static inline int kasan_add_zero_shadow(void *start, unsigned long size)
-{
- return 0;
-}
-static inline void kasan_remove_zero_shadow(void *start,
- unsigned long size)
-{}
-
-static inline void kasan_unpoison_slab(const void *ptr) { }
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
-
-#endif /* CONFIG_KASAN */
+#endif /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
#ifdef CONFIG_KASAN_GENERIC
-
#define KASAN_SHADOW_INIT 0
-
-void kasan_cache_shrink(struct kmem_cache *cache);
-void kasan_cache_shutdown(struct kmem_cache *cache);
void kasan_record_aux_stack(void *ptr);
-
#else /* CONFIG_KASAN_GENERIC */
+static inline void kasan_record_aux_stack(void *ptr) {}
+#endif /* CONFIG_KASAN_GENERIC */
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
+void kasan_cache_shrink(struct kmem_cache *cache);
+void kasan_cache_shutdown(struct kmem_cache *cache);
+#else /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
-static inline void kasan_record_aux_stack(void *ptr) {}
-
-#endif /* CONFIG_KASAN_GENERIC */
+#endif /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
#ifdef CONFIG_KASAN_SW_TAGS
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 9eb430c163c2..fc7548f27512 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -72,7 +72,7 @@ struct kmem_cache {
int obj_offset;
#endif /* CONFIG_DEBUG_SLAB */
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
struct kasan_cache kasan_info;
#endif
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 1be0ed5befa1..71020cee9fd2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -124,7 +124,7 @@ struct kmem_cache {
unsigned int *random_seq;
#endif
-#ifdef CONFIG_KASAN
+#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
struct kasan_cache kasan_info;
#endif
diff --git a/init/Kconfig b/init/Kconfig
index d6a0b31b13dc..de5aa061762f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1931,6 +1931,17 @@ config SLAB_FREELIST_HARDENED
sanity-checking than others. This option is most effective with
CONFIG_SLUB.
+config SLAB_QUARANTINE
+ bool "Enable slab freelist quarantine"
+ depends on !KASAN && (SLAB || SLUB)
+ help
+ Enable slab freelist quarantine to break heap spraying technique
+ used for exploiting use-after-free vulnerabilities in the kernel
+ code. If this feature is enabled, freed allocations are stored
+ in the quarantine and can't be instantly reallocated and
+ overwritten by the exploit performing heap spraying.
+ This feature is a part of KASAN functionality.
+
config SHUFFLE_PAGE_ALLOCATOR
bool "Page allocator randomization"
default SLAB_FREELIST_RANDOM && ACPI_NUMA
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..c052bc616a88 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -52,7 +52,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
mm_init.o percpu.o slab_common.o \
compaction.o vmacache.o \
interval_tree.o list_lru.o workingset.o \
- debug.o gup.o $(mmu-y)
+ debug.o gup.o kasan/ $(mmu-y)
# Give 'page_alloc' its own module-parameter namespace
page-alloc-y := page_alloc.o
@@ -80,7 +80,6 @@ obj-$(CONFIG_KSM) += ksm.o
obj-$(CONFIG_PAGE_POISONING) += page_poison.o
obj-$(CONFIG_SLAB) += slab.o
obj-$(CONFIG_SLUB) += slub.o
-obj-$(CONFIG_KASAN) += kasan/
obj-$(CONFIG_FAILSLAB) += failslab.o
obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-$(CONFIG_MEMTEST) += memtest.o
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index 370d970e5ab5..f6367d56a4d0 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -32,3 +32,5 @@ CFLAGS_tags_report.o := $(CC_FLAGS_KASAN_RUNTIME)
obj-$(CONFIG_KASAN) := common.o init.o report.o
obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o
obj-$(CONFIG_KASAN_SW_TAGS) += tags.o tags_report.o
+
+obj-$(CONFIG_SLAB_QUARANTINE) += slab_quarantine.o quarantine.o
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index ac499456740f..979c5600db8c 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -5,6 +5,43 @@
#include <linux/kasan.h>
#include <linux/stackdepot.h>
+struct qlist_node {
+ struct qlist_node *next;
+};
+
+struct kasan_track {
+ u32 pid;
+ depot_stack_handle_t stack;
+};
+
+struct kasan_free_meta {
+ /* This field is used while the object is in the quarantine.
+ * Otherwise it might be used for the allocator freelist.
+ */
+ struct qlist_node quarantine_link;
+#ifdef CONFIG_KASAN_GENERIC
+ struct kasan_track free_track;
+#endif
+};
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+ const void *object);
+
+#if defined(CONFIG_KASAN_GENERIC) && \
+ (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) || \
+ defined(CONFIG_SLAB_QUARANTINE)
+void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
+void quarantine_reduce(void);
+void quarantine_remove_cache(struct kmem_cache *cache);
+#else
+static inline void quarantine_put(struct kasan_free_meta *info,
+ struct kmem_cache *cache) { }
+static inline void quarantine_reduce(void) { }
+static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
+#endif
+
+#ifdef CONFIG_KASAN
+
#define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
#define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
@@ -87,17 +124,8 @@ struct kasan_global {
#endif
};
-/**
- * Structures to keep alloc and free tracks *
- */
-
#define KASAN_STACK_DEPTH 64
-struct kasan_track {
- u32 pid;
- depot_stack_handle_t stack;
-};
-
#ifdef CONFIG_KASAN_SW_TAGS_IDENTIFY
#define KASAN_NR_FREE_STACKS 5
#else
@@ -121,23 +149,8 @@ struct kasan_alloc_meta {
#endif
};
-struct qlist_node {
- struct qlist_node *next;
-};
-struct kasan_free_meta {
- /* This field is used while the object is in the quarantine.
- * Otherwise it might be used for the allocator freelist.
- */
- struct qlist_node quarantine_link;
-#ifdef CONFIG_KASAN_GENERIC
- struct kasan_track free_track;
-#endif
-};
-
struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache,
const void *object);
-struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
- const void *object);
static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
{
@@ -178,18 +191,6 @@ void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag);
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
void *object, u8 tag);
-#if defined(CONFIG_KASAN_GENERIC) && \
- (defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
-void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache);
-void quarantine_reduce(void);
-void quarantine_remove_cache(struct kmem_cache *cache);
-#else
-static inline void quarantine_put(struct kasan_free_meta *info,
- struct kmem_cache *cache) { }
-static inline void quarantine_reduce(void) { }
-static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
-#endif
-
#ifdef CONFIG_KASAN_SW_TAGS
void print_tags(u8 addr_tag, const void *addr);
@@ -296,4 +297,6 @@ void __hwasan_storeN_noabort(unsigned long addr, size_t size);
void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size);
+#endif /* CONFIG_KASAN */
+
#endif
diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
index 4c5375810449..61666263c53e 100644
--- a/mm/kasan/quarantine.c
+++ b/mm/kasan/quarantine.c
@@ -145,7 +145,9 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
if (IS_ENABLED(CONFIG_SLAB))
local_irq_save(flags);
+#ifdef CONFIG_KASAN
*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
+#endif
___cache_free(cache, object, _THIS_IP_);
if (IS_ENABLED(CONFIG_SLAB))
diff --git a/mm/kasan/slab_quarantine.c b/mm/kasan/slab_quarantine.c
new file mode 100644
index 000000000000..5764aa7ad253
--- /dev/null
+++ b/mm/kasan/slab_quarantine.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * The layer providing KASAN slab quarantine separately without the
+ * main KASAN functionality.
+ *
+ * Author: Alexander Popov <[email protected]>
+ *
+ * This feature breaks widespread heap spraying technique used for
+ * exploiting use-after-free vulnerabilities in the kernel code.
+ *
+ * Heap spraying is an exploitation technique that aims to put controlled
+ * bytes at a predetermined memory location on the heap. Heap spraying for
+ * exploiting use-after-free in the Linux kernel relies on the fact that on
+ * kmalloc(), the slab allocator returns the address of the memory that was
+ * recently freed. Allocating a kernel object with the same size and
+ * controlled contents allows overwriting the vulnerable freed object.
+ *
+ * If freed allocations are stored in the quarantine, they can't be
+ * instantly reallocated and overwritten by the exploit performing
+ * heap spraying.
+ */
+
+#include <linux/kasan.h>
+#include <linux/bug.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include "../slab.h"
+#include "kasan.h"
+
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags)
+{
+ cache->kasan_info.alloc_meta_offset = 0;
+
+ if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+ cache->object_size < sizeof(struct kasan_free_meta)) {
+ cache->kasan_info.free_meta_offset = *size;
+ *size += sizeof(struct kasan_free_meta);
+ BUG_ON(*size > KMALLOC_MAX_SIZE);
+ }
+
+ *flags |= SLAB_KASAN;
+}
+
+struct kasan_free_meta *get_free_info(struct kmem_cache *cache,
+ const void *object)
+{
+ BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+ return (void *)object + cache->kasan_info.free_meta_offset;
+}
+
+bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+{
+ quarantine_put(get_free_info(cache, object), cache);
+ return true;
+}
+
+static void *reduce_helper(const void *ptr, gfp_t flags)
+{
+ if (gfpflags_allow_blocking(flags))
+ quarantine_reduce();
+
+ return (void *)ptr;
+}
+
+void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
+ gfp_t flags)
+{
+ return reduce_helper(ptr, flags);
+}
+
+void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+{
+ return reduce_helper(object, flags);
+}
+
+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+ gfp_t flags)
+{
+ return reduce_helper(object, flags);
+}
+
+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
+ size_t size, gfp_t flags)
+{
+ return reduce_helper(object, flags);
+}
+EXPORT_SYMBOL(kasan_kmalloc);
+
+void kasan_cache_shrink(struct kmem_cache *cache)
+{
+ quarantine_remove_cache(cache);
+}
+
+void kasan_cache_shutdown(struct kmem_cache *cache)
+{
+ if (!__kmem_cache_empty(cache))
+ quarantine_remove_cache(cache);
+}
diff --git a/mm/slub.c b/mm/slub.c
index 68c02b2eecd9..8d6620effa3c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3143,7 +3143,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
do_slab_free(s, page, head, tail, cnt, addr);
}
-#ifdef CONFIG_KASAN_GENERIC
+#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
{
do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr);
--
2.26.2
Add a simple test for CONFIG_SLAB_QUARANTINE.
It performs heap spraying that aims to reallocate the recently freed heap
object. This technique is used for exploiting use-after-free
vulnerabilities in the kernel code.
This test shows that CONFIG_SLAB_QUARANTINE breaks heap spraying
exploitation technique.
Signed-off-by: Alexander Popov <[email protected]>
---
drivers/misc/lkdtm/core.c | 1 +
drivers/misc/lkdtm/heap.c | 40 ++++++++++++++++++++++++++++++++++++++
drivers/misc/lkdtm/lkdtm.h | 1 +
3 files changed, 42 insertions(+)
diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index a5e344df9166..78b7669c35eb 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -126,6 +126,7 @@ static const struct crashtype crashtypes[] = {
CRASHTYPE(SLAB_FREE_DOUBLE),
CRASHTYPE(SLAB_FREE_CROSS),
CRASHTYPE(SLAB_FREE_PAGE),
+ CRASHTYPE(HEAP_SPRAY),
CRASHTYPE(SOFTLOCKUP),
CRASHTYPE(HARDLOCKUP),
CRASHTYPE(SPINLOCKUP),
diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
index 1323bc16f113..a72a241e314a 100644
--- a/drivers/misc/lkdtm/heap.c
+++ b/drivers/misc/lkdtm/heap.c
@@ -205,6 +205,46 @@ static void ctor_a(void *region)
static void ctor_b(void *region)
{ }
+#define HEAP_SPRAY_SIZE 128
+
+void lkdtm_HEAP_SPRAY(void)
+{
+ int *addr;
+ int *spray_addrs[HEAP_SPRAY_SIZE] = { 0 };
+ unsigned long i = 0;
+
+ addr = kmem_cache_alloc(a_cache, GFP_KERNEL);
+ if (!addr) {
+ pr_info("Unable to allocate memory in lkdtm-heap-a cache\n");
+ return;
+ }
+
+ *addr = 0x31337;
+ kmem_cache_free(a_cache, addr);
+
+ pr_info("Performing heap spraying...\n");
+ for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
+ spray_addrs[i] = kmem_cache_alloc(a_cache, GFP_KERNEL);
+ *spray_addrs[i] = 0x31337;
+ pr_info("attempt %lu: spray alloc addr %p vs freed addr %p\n",
+ i, spray_addrs[i], addr);
+ if (spray_addrs[i] == addr) {
+ pr_info("freed addr is reallocated!\n");
+ break;
+ }
+ }
+
+ if (i < HEAP_SPRAY_SIZE)
+ pr_info("FAIL! Heap spraying succeed :(\n");
+ else
+ pr_info("OK! Heap spraying hasn't succeed :)\n");
+
+ for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
+ if (spray_addrs[i])
+ kmem_cache_free(a_cache, spray_addrs[i]);
+ }
+}
+
void __init lkdtm_heap_init(void)
{
double_free_cache = kmem_cache_create("lkdtm-heap-double_free",
diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 8878538b2c13..dfafb4ae6f3a 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -45,6 +45,7 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void);
void lkdtm_SLAB_FREE_DOUBLE(void);
void lkdtm_SLAB_FREE_CROSS(void);
void lkdtm_SLAB_FREE_PAGE(void);
+void lkdtm_HEAP_SPRAY(void);
/* lkdtm_perms.c */
void __init lkdtm_perms_init(void);
--
2.26.2
On 13.08.2020 18:19, Alexander Popov wrote:
> Hello everyone! I'm requesting your comments.
>
> Use-after-free vulnerabilities in the Linux kernel are very popular for
> exploitation. A few examples:
> https://googleprojectzero.blogspot.com/2018/09/a-cache-invalidation-bug-in-linux.html
> https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html?m=1
> https://a13xp0p0v.github.io/2020/02/15/CVE-2019-18683.html
>
> Use-after-free exploits usually employ the heap spraying technique.
> Generally it aims to put controlled bytes at a predetermined memory
> location on the heap. Heap spraying for exploiting use-after-free in
> the Linux kernel relies on the fact that on kmalloc(), the slab allocator
> returns the address of the memory that was recently freed. So allocating
> a kernel object with the same size and controlled contents allows
> overwriting the vulnerable freed object.
>
> I've found an easy way to break heap spraying for use-after-free
> exploitation. I simply extracted slab freelist quarantine from KASAN
> functionality and called it CONFIG_SLAB_QUARANTINE. Please see patch 1.
>
> If this feature is enabled, freed allocations are stored in the quarantine
> and can't be instantly reallocated and overwritten by the exploit
> performing heap spraying.
>
> In patch 2 you can see the lkdtm test showing how CONFIG_SLAB_QUARANTINE
> prevents immediate reallocation of a freed heap object.
>
> I tested this patch series both for CONFIG_SLUB and CONFIG_SLAB.
>
> CONFIG_SLAB_QUARANTINE disabled:
> # echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
> lkdtm: Performing direct entry HEAP_SPRAY
> lkdtm: Performing heap spraying...
> lkdtm: attempt 0: spray alloc addr 00000000f8699c7d vs freed addr 00000000f8699c7d
> lkdtm: freed addr is reallocated!
> lkdtm: FAIL! Heap spraying succeed :(
>
> CONFIG_SLAB_QUARANTINE enabled:
> # echo HEAP_SPRAY > /sys/kernel/debug/provoke-crash/DIRECT
> lkdtm: Performing direct entry HEAP_SPRAY
> lkdtm: Performing heap spraying...
> lkdtm: attempt 0: spray alloc addr 000000009cafb63f vs freed addr 00000000173cce94
> lkdtm: attempt 1: spray alloc addr 000000003096911f vs freed addr 00000000173cce94
> lkdtm: attempt 2: spray alloc addr 00000000da60d755 vs freed addr 00000000173cce94
> lkdtm: attempt 3: spray alloc addr 000000000b415070 vs freed addr 00000000173cce94
> ...
> lkdtm: attempt 126: spray alloc addr 00000000e80ef807 vs freed addr 00000000173cce94
> lkdtm: attempt 127: spray alloc addr 00000000398fe535 vs freed addr 00000000173cce94
> lkdtm: OK! Heap spraying hasn't succeed :)
>
> I did a brief performance evaluation of this feature.
>
> 1. Memory consumption. KASAN quarantine uses 1/32 of the memory.
> CONFIG_SLAB_QUARANTINE disabled:
> # free -m
> total used free shared buff/cache available
> Mem: 1987 39 1862 10 86 1907
> Swap: 0 0 0
> CONFIG_SLAB_QUARANTINE enabled:
> # free -m
> total used free shared buff/cache available
> Mem: 1987 140 1760 10 87 1805
> Swap: 0 0 0
>
> 2. Performance penalty. I used `hackbench -s 256 -l 200 -g 15 -f 25 -P`.
> CONFIG_SLAB_QUARANTINE disabled (x86_64, CONFIG_SLUB):
> Times: 3.088, 3.103, 3.068, 3.103, 3.107
> Mean: 3.0938
> Standard deviation: 0.0144
> CONFIG_SLAB_QUARANTINE enabled (x86_64, CONFIG_SLUB):
> Times: 3.303, 3.329, 3.356, 3.314, 3.292
> Mean: 3.3188 (+7.3%)
> Standard deviation: 0.0223
>
> I would appreciate your feedback!
While waiting for the feedback on these RFC patches, I compiled a list of topics
for further research:
- Possible ways to overwrite a quarantined heap object by making a large amount
of allocations (with/without freeing them)
- How init_on_free=1 affects heap spraying on a system with the heap quarantine
- How releasing batches of quarantine objects right away affects heap spraying
reliability
- Heap spraying on multi-core systems with the heap quarantine
- More precise performance evaluation
- Possible ways to improve the security properties and performance results
(KASAN quarantine has some interesting settings)
Best regards,
Alexander
> Alexander Popov (2):
> mm: Extract SLAB_QUARANTINE from KASAN
> lkdtm: Add heap spraying test
>
> drivers/misc/lkdtm/core.c | 1 +
> drivers/misc/lkdtm/heap.c | 40 ++++++++++++++
> drivers/misc/lkdtm/lkdtm.h | 1 +
> include/linux/kasan.h | 107 ++++++++++++++++++++-----------------
> include/linux/slab_def.h | 2 +-
> include/linux/slub_def.h | 2 +-
> init/Kconfig | 11 ++++
> mm/Makefile | 3 +-
> mm/kasan/Makefile | 2 +
> mm/kasan/kasan.h | 75 +++++++++++++-------------
> mm/kasan/quarantine.c | 2 +
> mm/kasan/slab_quarantine.c | 99 ++++++++++++++++++++++++++++++++++
> mm/slub.c | 2 +-
> 13 files changed, 258 insertions(+), 89 deletions(-)
> create mode 100644 mm/kasan/slab_quarantine.c
>
On Thu, Aug 13, 2020 at 06:19:22PM +0300, Alexander Popov wrote:
> Add a simple test for CONFIG_SLAB_QUARANTINE.
>
> It performs heap spraying that aims to reallocate the recently freed heap
> object. This technique is used for exploiting use-after-free
> vulnerabilities in the kernel code.
>
> This test shows that CONFIG_SLAB_QUARANTINE breaks heap spraying
> exploitation technique.
Yay tests!
>
> Signed-off-by: Alexander Popov <[email protected]>
> ---
> drivers/misc/lkdtm/core.c | 1 +
> drivers/misc/lkdtm/heap.c | 40 ++++++++++++++++++++++++++++++++++++++
> drivers/misc/lkdtm/lkdtm.h | 1 +
> 3 files changed, 42 insertions(+)
>
> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
> index a5e344df9166..78b7669c35eb 100644
> --- a/drivers/misc/lkdtm/core.c
> +++ b/drivers/misc/lkdtm/core.c
> @@ -126,6 +126,7 @@ static const struct crashtype crashtypes[] = {
> CRASHTYPE(SLAB_FREE_DOUBLE),
> CRASHTYPE(SLAB_FREE_CROSS),
> CRASHTYPE(SLAB_FREE_PAGE),
> + CRASHTYPE(HEAP_SPRAY),
> CRASHTYPE(SOFTLOCKUP),
> CRASHTYPE(HARDLOCKUP),
> CRASHTYPE(SPINLOCKUP),
> diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
> index 1323bc16f113..a72a241e314a 100644
> --- a/drivers/misc/lkdtm/heap.c
> +++ b/drivers/misc/lkdtm/heap.c
> @@ -205,6 +205,46 @@ static void ctor_a(void *region)
> static void ctor_b(void *region)
> { }
>
> +#define HEAP_SPRAY_SIZE 128
> +
> +void lkdtm_HEAP_SPRAY(void)
> +{
> + int *addr;
> + int *spray_addrs[HEAP_SPRAY_SIZE] = { 0 };
(the 0 isn't needed -- and if it were left there, it should be NULL)
> + unsigned long i = 0;
> +
> + addr = kmem_cache_alloc(a_cache, GFP_KERNEL);
I would prefer this test add its own cache (e.g. spray_cache), to avoid
misbehaviors between tests. (e.g. the a and b caches already run the
risk of getting corrupted weirdly.)
> + if (!addr) {
> + pr_info("Unable to allocate memory in lkdtm-heap-a cache\n");
> + return;
> + }
> +
> + *addr = 0x31337;
> + kmem_cache_free(a_cache, addr);
> +
> + pr_info("Performing heap spraying...\n");
> + for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
> + spray_addrs[i] = kmem_cache_alloc(a_cache, GFP_KERNEL);
> + *spray_addrs[i] = 0x31337;
> + pr_info("attempt %lu: spray alloc addr %p vs freed addr %p\n",
> + i, spray_addrs[i], addr);
That's 128 lines spewed into dmesg... I would leave this out.
> + if (spray_addrs[i] == addr) {
> + pr_info("freed addr is reallocated!\n");
> + break;
> + }
> + }
> +
> + if (i < HEAP_SPRAY_SIZE)
> + pr_info("FAIL! Heap spraying succeed :(\n");
I'd move this into the "if (spray_addrs[i] == addr)" test instead of the
pr_info() that is there.
> + else
> + pr_info("OK! Heap spraying hasn't succeed :)\n");
And then make this an "if (i == HEAP_SPRAY_SIZE)" test
> +
> + for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
> + if (spray_addrs[i])
> + kmem_cache_free(a_cache, spray_addrs[i]);
> + }
> +}
> +
> void __init lkdtm_heap_init(void)
> {
> double_free_cache = kmem_cache_create("lkdtm-heap-double_free",
> diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
> index 8878538b2c13..dfafb4ae6f3a 100644
> --- a/drivers/misc/lkdtm/lkdtm.h
> +++ b/drivers/misc/lkdtm/lkdtm.h
> @@ -45,6 +45,7 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void);
> void lkdtm_SLAB_FREE_DOUBLE(void);
> void lkdtm_SLAB_FREE_CROSS(void);
> void lkdtm_SLAB_FREE_PAGE(void);
> +void lkdtm_HEAP_SPRAY(void);
>
> /* lkdtm_perms.c */
> void __init lkdtm_perms_init(void);
> --
> 2.26.2
>
I assume enabling the quarantine defense also ends up being seen in the
SLAB_FREE_DOUBLE LKDTM test too, yes?
--
Kees Cook
On Thu, Aug 13, 2020 at 06:19:20PM +0300, Alexander Popov wrote:
> I've found an easy way to break heap spraying for use-after-free
> exploitation. I simply extracted slab freelist quarantine from KASAN
> functionality and called it CONFIG_SLAB_QUARANTINE. Please see patch 1.
Ah yeah, good idea. :)
> [...]
> I did a brief performance evaluation of this feature.
>
> 1. Memory consumption. KASAN quarantine uses 1/32 of the memory.
> CONFIG_SLAB_QUARANTINE disabled:
> # free -m
> total used free shared buff/cache available
> Mem: 1987 39 1862 10 86 1907
> Swap: 0 0 0
> CONFIG_SLAB_QUARANTINE enabled:
> # free -m
> total used free shared buff/cache available
> Mem: 1987 140 1760 10 87 1805
> Swap: 0 0 0
1/32 of memory doesn't seem too bad for someone interested in this defense.
> 2. Performance penalty. I used `hackbench -s 256 -l 200 -g 15 -f 25 -P`.
> CONFIG_SLAB_QUARANTINE disabled (x86_64, CONFIG_SLUB):
> Times: 3.088, 3.103, 3.068, 3.103, 3.107
> Mean: 3.0938
> Standard deviation: 0.0144
> CONFIG_SLAB_QUARANTINE enabled (x86_64, CONFIG_SLUB):
> Times: 3.303, 3.329, 3.356, 3.314, 3.292
> Mean: 3.3188 (+7.3%)
> Standard deviation: 0.0223
That's rather painful, but hackbench can produce some big deltas given
it can be an unrealistic workload for most systems. I'd be curious to
see the "building a kernel" timings, which tend to be much more
realistic for a "busy system" without hammering one particular subsystem
(though it's a bit VFS heavy, obviously).
More notes in the patches...
--
Kees Cook
On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
> +config SLAB_QUARANTINE
> + bool "Enable slab freelist quarantine"
> + depends on !KASAN && (SLAB || SLUB)
> + help
> + Enable slab freelist quarantine to break heap spraying technique
> + used for exploiting use-after-free vulnerabilities in the kernel
> + code. If this feature is enabled, freed allocations are stored
> + in the quarantine and can't be instantly reallocated and
> + overwritten by the exploit performing heap spraying.
> + This feature is a part of KASAN functionality.
After this patch, it isn't part of KASAN any more ;-)
The way this is written is a bit too low level. Let's write it in terms
that people who don't know the guts of the slab allocator or security
terminology can understand:
Delay reuse of freed slab objects. This makes some security
exploits harder to execute. It reduces performance slightly
as objects will be cache cold by the time they are reallocated,
and it costs a small amount of memory.
(feel free to edit this)
> +struct qlist_node {
> + struct qlist_node *next;
> +};
I appreciate this isn't new, but why do we have a new singly-linked-list
abstraction being defined in this code?
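(<linux/llist.h> already provides struct llist_node, which is exactly this
shape:

	struct llist_node {
		struct llist_node *next;
	};

so it seems like that, or a plain list_head, could be reused instead.)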
On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
> Heap spraying is an exploitation technique that aims to put controlled
> bytes at a predetermined memory location on the heap. Heap spraying for
> exploiting use-after-free in the Linux kernel relies on the fact that on
> kmalloc(), the slab allocator returns the address of the memory that was
> recently freed. Allocating a kernel object with the same size and
> controlled contents allows overwriting the vulnerable freed object.
>
> Let's extract slab freelist quarantine from KASAN functionality and
> call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap
> spraying technique used for exploiting use-after-free vulnerabilities
> in the kernel code.
>
> If this feature is enabled, freed allocations are stored in the quarantine
> and can't be instantly reallocated and overwritten by the exploit
> performing heap spraying.
It may be worth clarifying that this is specifically only direct UAF and
doesn't help with spray-and-overflow-into-a-neighboring-object attacks
(i.e. both tend to use sprays, but the former doesn't depend on a write
overflow).
> Signed-off-by: Alexander Popov <[email protected]>
> ---
> include/linux/kasan.h | 107 ++++++++++++++++++++-----------------
> include/linux/slab_def.h | 2 +-
> include/linux/slub_def.h | 2 +-
> init/Kconfig | 11 ++++
> mm/Makefile | 3 +-
> mm/kasan/Makefile | 2 +
> mm/kasan/kasan.h | 75 +++++++++++++-------------
> mm/kasan/quarantine.c | 2 +
> mm/kasan/slab_quarantine.c | 99 ++++++++++++++++++++++++++++++++++
> mm/slub.c | 2 +-
> 10 files changed, 216 insertions(+), 89 deletions(-)
> create mode 100644 mm/kasan/slab_quarantine.c
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 087fba34b209..b837216f760c 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -42,32 +42,14 @@ void kasan_unpoison_task_stack(struct task_struct *task);
> void kasan_alloc_pages(struct page *page, unsigned int order);
> void kasan_free_pages(struct page *page, unsigned int order);
>
> -void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> - slab_flags_t *flags);
> -
> void kasan_poison_slab(struct page *page);
> void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
> void kasan_poison_object_data(struct kmem_cache *cache, void *object);
> void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
> const void *object);
>
> -void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> - gfp_t flags);
> void kasan_kfree_large(void *ptr, unsigned long ip);
> void kasan_poison_kfree(void *ptr, unsigned long ip);
> -void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> - size_t size, gfp_t flags);
> -void * __must_check kasan_krealloc(const void *object, size_t new_size,
> - gfp_t flags);
> -
> -void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> - gfp_t flags);
> -bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> -
> -struct kasan_cache {
> - int alloc_meta_offset;
> - int free_meta_offset;
> -};
>
> /*
> * These functions provide a special case to support backing module
> @@ -107,10 +89,6 @@ static inline void kasan_disable_current(void) {}
> static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
> static inline void kasan_free_pages(struct page *page, unsigned int order) {}
>
> -static inline void kasan_cache_create(struct kmem_cache *cache,
> - unsigned int *size,
> - slab_flags_t *flags) {}
> -
> static inline void kasan_poison_slab(struct page *page) {}
> static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
> void *object) {}
> @@ -122,17 +100,65 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
> return (void *)object;
> }
>
> +static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> +static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> +static inline void kasan_free_shadow(const struct vm_struct *vm) {}
> +static inline void kasan_remove_zero_shadow(void *start, unsigned long size) {}
> +static inline void kasan_unpoison_slab(const void *ptr) {}
> +
> +static inline int kasan_module_alloc(void *addr, size_t size)
> +{
> + return 0;
> +}
> +
> +static inline int kasan_add_zero_shadow(void *start, unsigned long size)
> +{
> + return 0;
> +}
> +
> +static inline size_t kasan_metadata_size(struct kmem_cache *cache)
> +{
> + return 0;
> +}
> +
> +#endif /* CONFIG_KASAN */
> +
> +struct kasan_cache {
> + int alloc_meta_offset;
> + int free_meta_offset;
> +};
> +
> +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
> +
> +void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags);
> +void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
> + gfp_t flags);
> +void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
> + size_t size, gfp_t flags);
> +void * __must_check kasan_krealloc(const void *object, size_t new_size,
> + gfp_t flags);
> +void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
> + gfp_t flags);
> +bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
> +
> +#else /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
> +
> +static inline void kasan_cache_create(struct kmem_cache *cache,
> + unsigned int *size,
> + slab_flags_t *flags) {}
> +
> static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
> {
> return ptr;
> }
> -static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
> -static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
> +
> static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
> size_t size, gfp_t flags)
> {
> return (void *)object;
> }
> +
> static inline void *kasan_krealloc(const void *object, size_t new_size,
> gfp_t flags)
> {
> @@ -144,43 +170,28 @@ static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
> {
> return object;
> }
> +
> static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
> unsigned long ip)
> {
> return false;
> }
> -
> -static inline int kasan_module_alloc(void *addr, size_t size) { return 0; }
> -static inline void kasan_free_shadow(const struct vm_struct *vm) {}
> -
> -static inline int kasan_add_zero_shadow(void *start, unsigned long size)
> -{
> - return 0;
> -}
> -static inline void kasan_remove_zero_shadow(void *start,
> - unsigned long size)
> -{}
> -
> -static inline void kasan_unpoison_slab(const void *ptr) { }
> -static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
> -
> -#endif /* CONFIG_KASAN */
> +#endif /* CONFIG_KASAN || CONFIG_SLAB_QUARANTINE */
>
> #ifdef CONFIG_KASAN_GENERIC
> -
> #define KASAN_SHADOW_INIT 0
> -
> -void kasan_cache_shrink(struct kmem_cache *cache);
> -void kasan_cache_shutdown(struct kmem_cache *cache);
> void kasan_record_aux_stack(void *ptr);
> -
> #else /* CONFIG_KASAN_GENERIC */
> +static inline void kasan_record_aux_stack(void *ptr) {}
> +#endif /* CONFIG_KASAN_GENERIC */
>
> +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
> +void kasan_cache_shrink(struct kmem_cache *cache);
> +void kasan_cache_shutdown(struct kmem_cache *cache);
> +#else /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
> static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
> static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
> -static inline void kasan_record_aux_stack(void *ptr) {}
> -
> -#endif /* CONFIG_KASAN_GENERIC */
> +#endif /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
In doing this extraction, I wonder if function naming should be changed?
If it's going to live a new life outside of KASAN proper, maybe call
these functions quarantine_cache_*()? But perhaps that's too much
churn...
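i.e. something like (purely illustrative prototypes):

	void quarantine_cache_create(struct kmem_cache *cache, unsigned int *size,
				     slab_flags_t *flags);
	void quarantine_cache_shrink(struct kmem_cache *cache);
	void quarantine_cache_shutdown(struct kmem_cache *cache);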
> #ifdef CONFIG_KASAN_SW_TAGS
>
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index 9eb430c163c2..fc7548f27512 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -72,7 +72,7 @@ struct kmem_cache {
> int obj_offset;
> #endif /* CONFIG_DEBUG_SLAB */
>
> -#ifdef CONFIG_KASAN
> +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
> struct kasan_cache kasan_info;
> #endif
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 1be0ed5befa1..71020cee9fd2 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -124,7 +124,7 @@ struct kmem_cache {
> unsigned int *random_seq;
> #endif
>
> -#ifdef CONFIG_KASAN
> +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
> struct kasan_cache kasan_info;
> #endif
>
> diff --git a/init/Kconfig b/init/Kconfig
> index d6a0b31b13dc..de5aa061762f 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1931,6 +1931,17 @@ config SLAB_FREELIST_HARDENED
> sanity-checking than others. This option is most effective with
> CONFIG_SLUB.
>
> +config SLAB_QUARANTINE
> + bool "Enable slab freelist quarantine"
> + depends on !KASAN && (SLAB || SLUB)
> + help
> + Enable slab freelist quarantine to break heap spraying technique
> + used for exploiting use-after-free vulnerabilities in the kernel
> + code. If this feature is enabled, freed allocations are stored
> + in the quarantine and can't be instantly reallocated and
> + overwritten by the exploit performing heap spraying.
> + This feature is a part of KASAN functionality.
> +
To make this available to distros, I think this needs to be more than
just a CONFIG. I'd love to see this CONFIG control the availability, but
have a boot param control a ro-after-init static branch for these
functions (like is done for init_on_alloc, hardened usercopy, etc). Then
the branch can be off by default for regular distro users, and more
cautious folks could enable it with a boot param without having to roll
their own kernels.
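Something along these lines (the parameter name is just a strawman, following
the same static-key-plus-early_param pattern as init_on_alloc etc):

	static DEFINE_STATIC_KEY_FALSE(slab_quarantine_enabled);

	static int __init parse_slab_quarantine(char *str)
	{
		static_branch_enable(&slab_quarantine_enabled);
		return 0;
	}
	early_param("slab_quarantine", parse_slab_quarantine);

	/* and at the top of kasan_slab_free() in slab_quarantine.c: */
	if (!static_branch_unlikely(&slab_quarantine_enabled))
		return false;	/* quarantine disabled: normal free path */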
> [...]
> +struct kasan_track {
> + u32 pid;
pid_t?
> + depot_stack_handle_t stack;
> +};
> [...]
> +#if defined(CONFIG_KASAN_GENERIC) && \
> + (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) || \
> + defined(CONFIG_SLAB_QUARANTINE)
This seems a bit messy. Perhaps an invisible CONFIG to do this logic and
then the files can test for that? CONFIG_USE_SLAB_QUARANTINE or
something?
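For instance (just a sketch of what I mean):

	config USE_SLAB_QUARANTINE
		def_bool y
		depends on (KASAN_GENERIC && (SLAB || SLUB)) || SLAB_QUARANTINE

and then this #if collapses to a plain #ifdef CONFIG_USE_SLAB_QUARANTINE.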
> [...]
> + * Heap spraying is an exploitation technique that aims to put controlled
> + * bytes at a predetermined memory location on the heap. Heap spraying for
> + * exploiting use-after-free in the Linux kernel relies on the fact that on
> + * kmalloc(), the slab allocator returns the address of the memory that was
> + * recently freed. Allocating a kernel object with the same size and
> + * controlled contents allows overwriting the vulnerable freed object.
> + *
> + * If freed allocations are stored in the quarantine, they can't be
> + * instantly reallocated and overwritten by the exploit performing
> + * heap spraying.
I would clarify this with the details of what is actually happening: the
allocation isn't _moved_ to a quarantine, yes? It's only marked as not
available for allocation?
> + */
> +
> +#include <linux/kasan.h>
> +#include <linux/bug.h>
> +#include <linux/slab.h>
> +#include <linux/mm.h>
> +#include "../slab.h"
> +#include "kasan.h"
> +
> +void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> + slab_flags_t *flags)
> +{
> + cache->kasan_info.alloc_meta_offset = 0;
> +
> + if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
> + cache->object_size < sizeof(struct kasan_free_meta)) {
> + cache->kasan_info.free_meta_offset = *size;
> + *size += sizeof(struct kasan_free_meta);
> + BUG_ON(*size > KMALLOC_MAX_SIZE);
Please don't use BUG_ON()[1].
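Something like this would be friendlier (exact fallback behaviour is up to
you):

	if (WARN_ON(*size + sizeof(struct kasan_free_meta) > KMALLOC_MAX_SIZE))
		return;
	cache->kasan_info.free_meta_offset = *size;
	*size += sizeof(struct kasan_free_meta);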
Interesting!
-Kees
[1] https://www.kernel.org/doc/html/latest/process/deprecated.html#bug-and-bug-on
--
Kees Cook
On Sat 2020-08-15 19:54:55, Matthew Wilcox wrote:
> On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
> > +config SLAB_QUARANTINE
> > + bool "Enable slab freelist quarantine"
> > + depends on !KASAN && (SLAB || SLUB)
> > + help
> > + Enable slab freelist quarantine to break heap spraying technique
> > + used for exploiting use-after-free vulnerabilities in the kernel
> > + code. If this feature is enabled, freed allocations are stored
> > + in the quarantine and can't be instantly reallocated and
> > + overwritten by the exploit performing heap spraying.
> > + This feature is a part of KASAN functionality.
>
> After this patch, it isn't part of KASAN any more ;-)
>
> The way this is written is a bit too low level. Let's write it in terms
> that people who don't know the guts of the slab allocator or security
> terminology can understand:
>
> Delay reuse of freed slab objects. This makes some security
> exploits harder to execute. It reduces performance slightly
> as objects will be cache cold by the time they are reallocated,
> and it costs a small amount of memory.
Written this way, it invites questions:
Does it introduce any new deadlocks in near-out-of-memory situations?
Best regards,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Sat, Aug 15, 2020 at 6:52 PM Kees Cook <[email protected]> wrote:
>
> On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
> > Heap spraying is an exploitation technique that aims to put controlled
> > bytes at a predetermined memory location on the heap. Heap spraying for
> > exploiting use-after-free in the Linux kernel relies on the fact that on
> > kmalloc(), the slab allocator returns the address of the memory that was
> > recently freed. Allocating a kernel object with the same size and
> > controlled contents allows overwriting the vulnerable freed object.
> >
> > Let's extract slab freelist quarantine from KASAN functionality and
> > call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap
> > spraying technique used for exploiting use-after-free vulnerabilities
> > in the kernel code.
> >
> > If this feature is enabled, freed allocations are stored in the quarantine
> > and can't be instantly reallocated and overwritten by the exploit
> > performing heap spraying.
[...]
> In doing this extraction, I wonder if function naming should be changed?
> If it's going to live a new life outside of KASAN proper, maybe call
> these functions quarantine_cache_*()? But perhaps that's too much
> churn...
If quarantine is to be used without the rest of KASAN, I'd prefer for
it to be separated from KASAN completely: move to e.g. mm/quarantine.c
and don't mention KASAN in function/config names.
On 15.08.2020 19:52, Kees Cook wrote:
> On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
>> Heap spraying is an exploitation technique that aims to put controlled
>> bytes at a predetermined memory location on the heap. Heap spraying for
>> exploiting use-after-free in the Linux kernel relies on the fact that on
>> kmalloc(), the slab allocator returns the address of the memory that was
>> recently freed. Allocating a kernel object with the same size and
>> controlled contents allows overwriting the vulnerable freed object.
>>
>> Let's extract slab freelist quarantine from KASAN functionality and
>> call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap
>> spraying technique used for exploiting use-after-free vulnerabilities
>> in the kernel code.
>>
>> If this feature is enabled, freed allocations are stored in the quarantine
>> and can't be instantly reallocated and overwritten by the exploit
>> performing heap spraying.
>
> It may be worth clarifying that this is specifically only direct UAF and
> doesn't help with spray-and-overflow-into-a-neighboring-object attacks
> (i.e. both tend to use sprays, but the former doesn't depend on a write
> overflow).
Right, thank you.
>> Signed-off-by: Alexander Popov <[email protected]>
>> ---
>> include/linux/kasan.h | 107 ++++++++++++++++++++-----------------
>> include/linux/slab_def.h | 2 +-
>> include/linux/slub_def.h | 2 +-
>> init/Kconfig | 11 ++++
>> mm/Makefile | 3 +-
>> mm/kasan/Makefile | 2 +
>> mm/kasan/kasan.h | 75 +++++++++++++-------------
>> mm/kasan/quarantine.c | 2 +
>> mm/kasan/slab_quarantine.c | 99 ++++++++++++++++++++++++++++++++++
>> mm/slub.c | 2 +-
>> 10 files changed, 216 insertions(+), 89 deletions(-)
>> create mode 100644 mm/kasan/slab_quarantine.c
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 087fba34b209..b837216f760c 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
[...]
>> #else /* CONFIG_KASAN_GENERIC */
>> +static inline void kasan_record_aux_stack(void *ptr) {}
>> +#endif /* CONFIG_KASAN_GENERIC */
>>
>> +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_SLAB_QUARANTINE)
>> +void kasan_cache_shrink(struct kmem_cache *cache);
>> +void kasan_cache_shutdown(struct kmem_cache *cache);
>> +#else /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
>> static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
>> static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
>> -static inline void kasan_record_aux_stack(void *ptr) {}
>> -
>> -#endif /* CONFIG_KASAN_GENERIC */
>> +#endif /* CONFIG_KASAN_GENERIC || CONFIG_SLAB_QUARANTINE */
>
> In doing this extraction, I wonder if function naming should be changed?
> If it's going to live a new life outside of KASAN proper, maybe call
> these functions quarantine_cache_*()? But perhaps that's too much
> churn...
These functions are KASAN handlers that are called by the allocator.
I.e. the allocator calls the KASAN handlers, and then the KASAN handlers call
quarantine_put(), quarantine_reduce() and quarantine_remove_cache(), among
other things.
Andrey Konovalov wrote:
> If quarantine is to be used without the rest of KASAN, I'd prefer for
> it to be separated from KASAN completely: move to e.g. mm/quarantine.c
> and don't mention KASAN in function/config names.
Hmm, making the quarantine completely separate from KASAN would bring trouble.
Currently, the allocator calls KASAN handlers in many special places:
kasan_cache_create()
kasan_slab_free()
kasan_kmalloc_large()
kasan_krealloc()
kasan_slab_alloc()
kasan_kmalloc()
kasan_cache_shrink()
kasan_cache_shutdown()
and some others.
These functions do a lot of interesting things and also work with the quarantine
using these helpers:
quarantine_put()
quarantine_reduce()
quarantine_remove_cache()
Making the quarantine completely separate from KASAN would require moving some
internal logic of these KASAN handlers into the allocator code.
In this patch I used another approach that doesn't require changing the API
between the allocators and KASAN: I added linux/mm/kasan/slab_quarantine.c
with slim KASAN handlers that implement the minimal functionality needed for
the quarantine.
Do you think that it's a bad solution?
>> #ifdef CONFIG_KASAN_SW_TAGS
>>
>> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
>> index 9eb430c163c2..fc7548f27512 100644
>> --- a/include/linux/slab_def.h
>> +++ b/include/linux/slab_def.h
>> @@ -72,7 +72,7 @@ struct kmem_cache {
>> int obj_offset;
>> #endif /* CONFIG_DEBUG_SLAB */
>>
>> -#ifdef CONFIG_KASAN
>> +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
>> struct kasan_cache kasan_info;
>> #endif
>>
>> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
>> index 1be0ed5befa1..71020cee9fd2 100644
>> --- a/include/linux/slub_def.h
>> +++ b/include/linux/slub_def.h
>> @@ -124,7 +124,7 @@ struct kmem_cache {
>> unsigned int *random_seq;
>> #endif
>>
>> -#ifdef CONFIG_KASAN
>> +#if defined(CONFIG_KASAN) || defined(CONFIG_SLAB_QUARANTINE)
>> struct kasan_cache kasan_info;
>> #endif
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index d6a0b31b13dc..de5aa061762f 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1931,6 +1931,17 @@ config SLAB_FREELIST_HARDENED
>> sanity-checking than others. This option is most effective with
>> CONFIG_SLUB.
>>
>> +config SLAB_QUARANTINE
>> + bool "Enable slab freelist quarantine"
>> + depends on !KASAN && (SLAB || SLUB)
>> + help
>> + Enable slab freelist quarantine to break heap spraying technique
>> + used for exploiting use-after-free vulnerabilities in the kernel
>> + code. If this feature is enabled, freed allocations are stored
>> + in the quarantine and can't be instantly reallocated and
>> + overwritten by the exploit performing heap spraying.
>> + This feature is a part of KASAN functionality.
>> +
>
> To make this available to distros, I think this needs to be more than
> just a CONFIG. I'd love to see this CONFIG control the availability, but
> have a boot param control a ro-after-init static branch for these
> functions (like is done for init_on_alloc, hardened usercopy, etc). Then
> the branch can be off by default for regular distro users, and more
> cautious folks could enable it with a boot param without having to roll
> their own kernels.
Good point, thanks, added to TODO list.
>> [...]
>> +struct kasan_track {
>> + u32 pid;
>
> pid_t?
Ok, I can change it (here I only moved the current definition of kasan_track).
>> + depot_stack_handle_t stack;
>> +};
>> [...]
>> +#if defined(CONFIG_KASAN_GENERIC) && \
>> + (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) || \
>> + defined(CONFIG_SLAB_QUARANTINE)
>
> This seems a bit messy. Perhaps an invisible CONFIG to do this logic and
> then the files can test for that? CONFIG_USE_SLAB_QUARANTINE or
> something?
Ok, thanks, I'll try that.
>> [...]
>> + * Heap spraying is an exploitation technique that aims to put controlled
>> + * bytes at a predetermined memory location on the heap. Heap spraying for
>> + * exploiting use-after-free in the Linux kernel relies on the fact that on
>> + * kmalloc(), the slab allocator returns the address of the memory that was
>> + * recently freed. Allocating a kernel object with the same size and
>> + * controlled contents allows overwriting the vulnerable freed object.
>> + *
>> + * If freed allocations are stored in the quarantine, they can't be
>> + * instantly reallocated and overwritten by the exploit performing
>> + * heap spraying.
>
> I would clarify this with the details of what is actually happening:
Ok.
> the allocation isn't _moved_ to a quarantine, yes? It's only marked as not
> available for allocation?
The freed allocation is linked into the quarantine queues, where it waits for
the actual freeing (the object itself stays in place; its reuse is deferred).
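Roughly, the flow reused from mm/kasan/quarantine.c looks like this (heavily
simplified, locking and batching details elided):

	/* on free, instead of returning the object to the freelist: */
	bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
	{
		/* link the object into the quarantine and defer the real free */
		quarantine_put(get_free_info(cache, object), cache);
		return true;	/* tell the allocator not to free it now */
	}

	/* on allocation paths quarantine_reduce() is called: once the
	 * quarantine exceeds its limit (a fraction of total RAM), a batch
	 * of the oldest objects is dequeued and actually freed via
	 * qlink_free() -> ___cache_free() */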
>> + */
>> +
>> +#include <linux/kasan.h>
>> +#include <linux/bug.h>
>> +#include <linux/slab.h>
>> +#include <linux/mm.h>
>> +#include "../slab.h"
>> +#include "kasan.h"
>> +
>> +void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>> + slab_flags_t *flags)
>> +{
>> + cache->kasan_info.alloc_meta_offset = 0;
>> +
>> + if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
>> + cache->object_size < sizeof(struct kasan_free_meta)) {
>> + cache->kasan_info.free_meta_offset = *size;
>> + *size += sizeof(struct kasan_free_meta);
>> + BUG_ON(*size > KMALLOC_MAX_SIZE);
>
> Please don't use BUG_ON()[1].
Ok!
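Something like this, perhaps (untested sketch; it falls back to no out-of-object free metadata for that cache instead of crashing, which probably needs more thought for the quarantine case):

	cache->kasan_info.free_meta_offset = *size;
	*size += sizeof(struct kasan_free_meta);
	if (WARN_ON(*size > KMALLOC_MAX_SIZE)) {
		/* Give up on out-of-object free metadata instead of BUG(). */
		cache->kasan_info.free_meta_offset = 0;
		*size -= sizeof(struct kasan_free_meta);
	}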
> Interesting!
>
> -Kees
>
> [1] https://www.kernel.org/doc/html/latest/process/deprecated.html#bug-and-bug-on
>
Alexander Popov <[email protected]> writes:
> Add a simple test for CONFIG_SLAB_QUARANTINE.
>
> It performs heap spraying that aims to reallocate the recently freed heap
> object. This technique is used for exploiting use-after-free
> vulnerabilities in the kernel code.
>
> This test shows that CONFIG_SLAB_QUARANTINE breaks heap spraying
> exploitation technique.
>
> Signed-off-by: Alexander Popov <[email protected]>
Why put this test in the Linux kernel dump test module?
I have no problem with tests, and I may be wrong, but it does not look
like you are testing whether heap corruption triggers a crash dump,
which is what the rest of the tests in lkdtm are about: seeing whether
the test successfully triggers a crash dump.
Eric
> ---
> drivers/misc/lkdtm/core.c | 1 +
> drivers/misc/lkdtm/heap.c | 40 ++++++++++++++++++++++++++++++++++++++
> drivers/misc/lkdtm/lkdtm.h | 1 +
> 3 files changed, 42 insertions(+)
>
> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
> index a5e344df9166..78b7669c35eb 100644
> --- a/drivers/misc/lkdtm/core.c
> +++ b/drivers/misc/lkdtm/core.c
> @@ -126,6 +126,7 @@ static const struct crashtype crashtypes[] = {
> CRASHTYPE(SLAB_FREE_DOUBLE),
> CRASHTYPE(SLAB_FREE_CROSS),
> CRASHTYPE(SLAB_FREE_PAGE),
> + CRASHTYPE(HEAP_SPRAY),
> CRASHTYPE(SOFTLOCKUP),
> CRASHTYPE(HARDLOCKUP),
> CRASHTYPE(SPINLOCKUP),
> diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
> index 1323bc16f113..a72a241e314a 100644
> --- a/drivers/misc/lkdtm/heap.c
> +++ b/drivers/misc/lkdtm/heap.c
> @@ -205,6 +205,46 @@ static void ctor_a(void *region)
> static void ctor_b(void *region)
> { }
>
> +#define HEAP_SPRAY_SIZE 128
> +
> +void lkdtm_HEAP_SPRAY(void)
> +{
> + int *addr;
> + int *spray_addrs[HEAP_SPRAY_SIZE] = { 0 };
> + unsigned long i = 0;
> +
> + addr = kmem_cache_alloc(a_cache, GFP_KERNEL);
> + if (!addr) {
> + pr_info("Unable to allocate memory in lkdtm-heap-a cache\n");
> + return;
> + }
> +
> + *addr = 0x31337;
> + kmem_cache_free(a_cache, addr);
> +
> + pr_info("Performing heap spraying...\n");
> + for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
> + spray_addrs[i] = kmem_cache_alloc(a_cache, GFP_KERNEL);
> + *spray_addrs[i] = 0x31337;
> + pr_info("attempt %lu: spray alloc addr %p vs freed addr %p\n",
> + i, spray_addrs[i], addr);
> + if (spray_addrs[i] == addr) {
> + pr_info("freed addr is reallocated!\n");
> + break;
> + }
> + }
> +
> + if (i < HEAP_SPRAY_SIZE)
> + pr_info("FAIL! Heap spraying succeed :(\n");
> + else
> + pr_info("OK! Heap spraying hasn't succeed :)\n");
> +
> + for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
> + if (spray_addrs[i])
> + kmem_cache_free(a_cache, spray_addrs[i]);
> + }
> +}
> +
> void __init lkdtm_heap_init(void)
> {
> double_free_cache = kmem_cache_create("lkdtm-heap-double_free",
> diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
> index 8878538b2c13..dfafb4ae6f3a 100644
> --- a/drivers/misc/lkdtm/lkdtm.h
> +++ b/drivers/misc/lkdtm/lkdtm.h
> @@ -45,6 +45,7 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void);
> void lkdtm_SLAB_FREE_DOUBLE(void);
> void lkdtm_SLAB_FREE_CROSS(void);
> void lkdtm_SLAB_FREE_PAGE(void);
> +void lkdtm_HEAP_SPRAY(void);
>
> /* lkdtm_perms.c */
> void __init lkdtm_perms_init(void);
On 15.08.2020 19:59, Kees Cook wrote:
> On Thu, Aug 13, 2020 at 06:19:22PM +0300, Alexander Popov wrote:
>> Add a simple test for CONFIG_SLAB_QUARANTINE.
>>
>> It performs heap spraying that aims to reallocate the recently freed heap
>> object. This technique is used for exploiting use-after-free
>> vulnerabilities in the kernel code.
>>
>> This test shows that CONFIG_SLAB_QUARANTINE breaks heap spraying
>> exploitation technique.
>
> Yay tests!
Yes :)
I'm going to improve it to demonstrate the quarantine security properties.
>> Signed-off-by: Alexander Popov <[email protected]>
>> ---
>> drivers/misc/lkdtm/core.c | 1 +
>> drivers/misc/lkdtm/heap.c | 40 ++++++++++++++++++++++++++++++++++++++
>> drivers/misc/lkdtm/lkdtm.h | 1 +
>> 3 files changed, 42 insertions(+)
>>
>> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
>> index a5e344df9166..78b7669c35eb 100644
>> --- a/drivers/misc/lkdtm/core.c
>> +++ b/drivers/misc/lkdtm/core.c
>> @@ -126,6 +126,7 @@ static const struct crashtype crashtypes[] = {
>> CRASHTYPE(SLAB_FREE_DOUBLE),
>> CRASHTYPE(SLAB_FREE_CROSS),
>> CRASHTYPE(SLAB_FREE_PAGE),
>> + CRASHTYPE(HEAP_SPRAY),
>> CRASHTYPE(SOFTLOCKUP),
>> CRASHTYPE(HARDLOCKUP),
>> CRASHTYPE(SPINLOCKUP),
>> diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c
>> index 1323bc16f113..a72a241e314a 100644
>> --- a/drivers/misc/lkdtm/heap.c
>> +++ b/drivers/misc/lkdtm/heap.c
>> @@ -205,6 +205,46 @@ static void ctor_a(void *region)
>> static void ctor_b(void *region)
>> { }
>>
>> +#define HEAP_SPRAY_SIZE 128
>> +
>> +void lkdtm_HEAP_SPRAY(void)
>> +{
>> + int *addr;
>> + int *spray_addrs[HEAP_SPRAY_SIZE] = { 0 };
>
> (the 0 isn't needed -- and if it were left there, it should be NULL)
It is used in tear-down below.
I'll change it to { NULL }.
>> + unsigned long i = 0;
>> +
>> + addr = kmem_cache_alloc(a_cache, GFP_KERNEL);
>
> I would prefer this test add its own cache (e.g. spray_cache), to avoid
> misbehaviors between tests. (e.g. the a and b caches already run the
> risk of getting corrupted weirdly.)
Ok, I'll do that.
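Something like this in lkdtm_heap_init(), next to the existing caches (cache name and object size are arbitrary here):

static struct kmem_cache *spray_cache;

/* in lkdtm_heap_init(): */
	spray_cache = kmem_cache_create("lkdtm-heap-spray", 64, 0, 0, NULL);
/* ...plus a matching kmem_cache_destroy(spray_cache) on teardown. */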
>> + if (!addr) {
>> + pr_info("Unable to allocate memory in lkdtm-heap-a cache\n");
>> + return;
>> + }
>> +
>> + *addr = 0x31337;
>> + kmem_cache_free(a_cache, addr);
>> +
>> + pr_info("Performing heap spraying...\n");
>> + for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
>> + spray_addrs[i] = kmem_cache_alloc(a_cache, GFP_KERNEL);
>> + *spray_addrs[i] = 0x31337;
>> + pr_info("attempt %lu: spray alloc addr %p vs freed addr %p\n",
>> + i, spray_addrs[i], addr);
>
> That's 128 lines spewed into dmesg... I would leave this out.
Ok.
>> + if (spray_addrs[i] == addr) {
>> + pr_info("freed addr is reallocated!\n");
>> + break;
>> + }
>> + }
>> +
>> + if (i < HEAP_SPRAY_SIZE)
>> + pr_info("FAIL! Heap spraying succeed :(\n");
>
> I'd move this into the "if (spray_addrs[i] == addr)" test instead of the
> pr_info() that is there.
>
>> + else
>> + pr_info("OK! Heap spraying hasn't succeed :)\n");
>
> And then make this an "if (i == HEAP_SPRAY_SIZE)" test
Do you mean that I should avoid the additional line in the test output
and print only the final result?
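If I read your suggestion right, the loop would become something like this (sketch; keeping a_cache here only to match the quoted code):

	for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
		spray_addrs[i] = kmem_cache_alloc(a_cache, GFP_KERNEL);
		if (!spray_addrs[i]) {
			pr_info("Unable to allocate memory in lkdtm-heap-a cache\n");
			break;
		}
		*spray_addrs[i] = 0x31337;
		if (spray_addrs[i] == addr) {
			pr_info("FAIL! Heap spraying succeeded: attempt %lu reallocated the freed addr\n", i);
			break;
		}
	}
	if (i == HEAP_SPRAY_SIZE)
		pr_info("OK! Heap spraying hasn't succeeded\n");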
>> +
>> + for (i = 0; i < HEAP_SPRAY_SIZE; i++) {
>> + if (spray_addrs[i])
>> + kmem_cache_free(a_cache, spray_addrs[i]);
>> + }
>> +}
>> +
>> void __init lkdtm_heap_init(void)
>> {
>> double_free_cache = kmem_cache_create("lkdtm-heap-double_free",
>> diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
>> index 8878538b2c13..dfafb4ae6f3a 100644
>> --- a/drivers/misc/lkdtm/lkdtm.h
>> +++ b/drivers/misc/lkdtm/lkdtm.h
>> @@ -45,6 +45,7 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void);
>> void lkdtm_SLAB_FREE_DOUBLE(void);
>> void lkdtm_SLAB_FREE_CROSS(void);
>> void lkdtm_SLAB_FREE_PAGE(void);
>> +void lkdtm_HEAP_SPRAY(void);
>>
>> /* lkdtm_perms.c */
>> void __init lkdtm_perms_init(void);
>> --
>> 2.26.2
>>
>
> I assume enabling the quarantine defense also ends up being seen in the
> SLAB_FREE_DOUBLE LKDTM test too, yes?
I'll experiment with that.
Thank you!
Best regards,
Alexander
On Mon, Aug 17, 2020 at 01:24:37PM -0500, Eric W. Biederman wrote:
> Alexander Popov <[email protected]> writes:
>
> > Add a simple test for CONFIG_SLAB_QUARANTINE.
> >
> > It performs heap spraying that aims to reallocate the recently freed heap
> > object. This technique is used for exploiting use-after-free
> > vulnerabilities in the kernel code.
> >
> > This test shows that CONFIG_SLAB_QUARANTINE breaks heap spraying
> > exploitation technique.
> >
> > Signed-off-by: Alexander Popov <[email protected]>
>
> Why put this test in the Linux kernel dump test module?
>
> I have no problem with tests, and I may be wrong, but it does not look
> like you are testing whether heap corruption triggers a crash dump,
> which is what the rest of the tests in lkdtm are about: seeing whether
> the test successfully triggers a crash dump.
The scope of LKDTM has shifted a bit, and I'm fine with tests that
don't cause crashes as long as they're part of testing system-wide
defenses, etc. It's easier to collect similar tests together (even if
they don't break the system).
--
Kees Cook
On 15.08.2020 21:54, Matthew Wilcox wrote:
> On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
>> +config SLAB_QUARANTINE
>> + bool "Enable slab freelist quarantine"
>> + depends on !KASAN && (SLAB || SLUB)
>> + help
>> + Enable slab freelist quarantine to break heap spraying technique
>> + used for exploiting use-after-free vulnerabilities in the kernel
>> + code. If this feature is enabled, freed allocations are stored
>> + in the quarantine and can't be instantly reallocated and
>> + overwritten by the exploit performing heap spraying.
>> + This feature is a part of KASAN functionality.
>
> After this patch, it isn't part of KASAN any more ;-)
Ok, I'll change that to "this feature is used by KASAN" :)
> The way this is written is a bit too low level. Let's write it in terms
> that people who don't know the guts of the slab allocator or security
> terminology can understand:
>
> Delay reuse of freed slab objects. This makes some security
> exploits harder to execute. It reduces performance slightly
> as objects will be cache cold by the time they are reallocated,
> and it costs a small amount of memory.
>
> (feel free to edit this)
Ok, I see.
I'll start with a high-level description and add the low-level details at the end.
>> +struct qlist_node {
>> + struct qlist_node *next;
>> +};
>
> I appreciate this isn't new, but why do we have a new singly-linked-list
> abstraction being defined in this code?
I don't know for sure.
I suppose it is caused by SLAB/SLUB freelist implementation details (qlist_node
in kasan_free_meta is also used for the allocator freelist).
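For reference, the current definition in mm/kasan/kasan.h is essentially:

struct kasan_free_meta {
	/* This field is used while the object is in the quarantine.
	 * Otherwise it might be used for the allocator freelist.
	 */
	struct qlist_node quarantine_link;
};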
Best regards,
Alexander
On 16.08.2020 22:59, Pavel Machek wrote:
> On Sat 2020-08-15 19:54:55, Matthew Wilcox wrote:
>> On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
>>> +config SLAB_QUARANTINE
>>> + bool "Enable slab freelist quarantine"
>>> + depends on !KASAN && (SLAB || SLUB)
>>> + help
>>> + Enable slab freelist quarantine to break heap spraying technique
>>> + used for exploiting use-after-free vulnerabilities in the kernel
>>> + code. If this feature is enabled, freed allocations are stored
>>> + in the quarantine and can't be instantly reallocated and
>>> + overwritten by the exploit performing heap spraying.
>>> + This feature is a part of KASAN functionality.
>>
>> After this patch, it isn't part of KASAN any more ;-)
>>
>> The way this is written is a bit too low level. Let's write it in terms
>> that people who don't know the guts of the slab allocator or security
>> terminology can understand:
>>
>> Delay reuse of freed slab objects. This makes some security
>> exploits harder to execute. It reduces performance slightly
>> as objects will be cache cold by the time they are reallocated,
>> and it costs a small amount of memory.
>
> Written this way, it invites questions:
>
> Does it introduce any new deadlocks in near out-of-memory situations?
The Linux kernel with KASAN enabled is heavily tested by syzbot.
I think Dmitry and Andrey can give good answers to your question.
Some time ago I did Linux kernel fuzzing with syzkaller on low-memory
virtual machines (with KASAN and LOCKUP_DETECTOR enabled). I gave less than 1 GB
to each Debian Stretch VM and didn't hit any particular deadlocks caused by OOM.
Best regards,
Alexander
On 15.08.2020 19:39, Kees Cook wrote:
> On Thu, Aug 13, 2020 at 06:19:20PM +0300, Alexander Popov wrote:
>> I've found an easy way to break heap spraying for use-after-free
>> exploitation. I simply extracted slab freelist quarantine from KASAN
>> functionality and called it CONFIG_SLAB_QUARANTINE. Please see patch 1.
>
> Ah yeah, good idea. :)
>
>> [...]
>> I did a brief performance evaluation of this feature.
>>
>> 1. Memory consumption. KASAN quarantine uses 1/32 of the memory.
>> CONFIG_SLAB_QUARANTINE disabled:
>> # free -m
>> total used free shared buff/cache available
>> Mem: 1987 39 1862 10 86 1907
>> Swap: 0 0 0
>> CONFIG_SLAB_QUARANTINE enabled:
>> # free -m
>> total used free shared buff/cache available
>> Mem: 1987 140 1760 10 87 1805
>> Swap: 0 0 0
>
> 1/32 of memory doesn't seem too bad for someone interested in this defense.
This can be configured. Quote from linux/mm/kasan/quarantine.c:
/*
* The fraction of physical memory the quarantine is allowed to occupy.
* Quarantine doesn't support memory shrinker with SLAB allocator, so we keep
* the ratio low to avoid OOM.
*/
#define QUARANTINE_FRACTION 32
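For scale, the fraction turns into a byte budget roughly like this (paraphrased, helper name made up; the real code lives in quarantine_reduce() and also keeps small per-CPU batches on top):

/* Roughly 1/32 of RAM may sit in the quarantine at any time. */
static unsigned long quarantine_byte_budget(void)
{
	return (totalram_pages() << PAGE_SHIFT) / QUARANTINE_FRACTION;
}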
>> 2. Performance penalty. I used `hackbench -s 256 -l 200 -g 15 -f 25 -P`.
>> CONFIG_SLAB_QUARANTINE disabled (x86_64, CONFIG_SLUB):
>> Times: 3.088, 3.103, 3.068, 3.103, 3.107
>> Mean: 3.0938
>> Standard deviation: 0.0144
>> CONFIG_SLAB_QUARANTINE enabled (x86_64, CONFIG_SLUB):
>> Times: 3.303, 3.329, 3.356, 3.314, 3.292
>> Mean: 3.3188 (+7.3%)
>> Standard deviation: 0.0223
>
> That's rather painful, but hackbench can produce some big deltas given
> it can be an unrealistic workload for most systems. I'd be curious to
> see the "building a kernel" timings, which tends to be much more
> realistic for "busy system" without hammering one particular subsystem
> (though it's a bit VFS heavy, obviously).
I have new results.
CPU: Intel Core i7-6500U CPU @ 2.50GHz
Test: time make O=../build_out/defconfig/ -j2
CONFIG_SLAB_QUARANTINE disabled:
Times: 10m52.978s 10m50.161s 10m45.601s
Mean: 649.58s
Standard deviation: 3.04
CONFIG_SLAB_QUARANTINE enabled:
Times: 10m56.256s 10m51.919s 10m47.903s
Mean: 652.026s (+0.38%)
Standard deviation: 3.41
This test shows a much lower performance penalty.
Any more ideas for tests?
Best regards,
Alexander
On Mon, Aug 17, 2020 at 7:32 PM Alexander Popov <[email protected]> wrote:
>
> On 15.08.2020 19:52, Kees Cook wrote:
> > On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
> >> Heap spraying is an exploitation technique that aims to put controlled
> >> bytes at a predetermined memory location on the heap. Heap spraying for
> >> exploiting use-after-free in the Linux kernel relies on the fact that on
> >> kmalloc(), the slab allocator returns the address of the memory that was
> >> recently freed. Allocating a kernel object with the same size and
> >> controlled contents allows overwriting the vulnerable freed object.
> >>
> >> Let's extract slab freelist quarantine from KASAN functionality and
> >> call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap
> >> spraying technique used for exploiting use-after-free vulnerabilities
> >> in the kernel code.
> >>
> >> If this feature is enabled, freed allocations are stored in the quarantine
> >> and can't be instantly reallocated and overwritten by the exploit
> >> performing heap spraying.
> >
> > It may be worth clarifying that this is specifically only direct UAF and
> > doesn't help with spray-and-overflow-into-a-neighboring-object attacks
> > (i.e. both tend to use sprays, but the former doesn't depend on a write
> > overflow).
>
> Andrey Konovalov wrote:
> > If quarantine is to be used without the rest of KASAN, I'd prefer for
> > it to be separated from KASAN completely: move to e.g. mm/quarantine.c
> > and don't mention KASAN in function/config names.
>
> Hmm, making quarantine completely separate from KASAN would cause some trouble.
>
> Currently, in many special places the allocator calls KASAN handlers:
> kasan_cache_create()
> kasan_slab_free()
> kasan_kmalloc_large()
> kasan_krealloc()
> kasan_slab_alloc()
> kasan_kmalloc()
> kasan_cache_shrink()
> kasan_cache_shutdown()
> and some others.
> These functions do a lot of interesting things and also work with the quarantine
> using these helpers:
> quarantine_put()
> quarantine_reduce()
> quarantine_remove_cache()
>
> Making quarantine completely separate from KASAN would require moving some
> internal logic of these KASAN handlers into the allocator code.
It doesn't look like there's much KASAN-specific logic there.
All those quarantine_*() calls are either at the beginning or at the
end of some kasan annotations, so it should be quite easy to move
those out. E.g. quarantine_reduce() can be moved together with the
gfpflags_allow_blocking(flags) check and put before kasan_kmalloc()
calls (or maybe also into some other places?), quarantine_put() can be
put after kasan_slab_free(), etc.
> In this patch I used another approach that doesn't require changing the API
> between the allocators and KASAN. I added linux/mm/kasan/slab_quarantine.c with slim
> KASAN handlers that implement the minimal functionality needed for quarantine.
>
> Do you think that it's a bad solution?
This solution doesn't look clean. Here you provide a second KASAN
runtime implementation, parallel to the original one, which only does
quarantine. It seems much cleaner to put the quarantine logic into a
separate module, which can be used either independently or together
with KASAN built on top of it.
Maybe other KASAN contributors have an opinion on this?
On 18.08.2020 18:45, Andrey Konovalov wrote:
> On Mon, Aug 17, 2020 at 7:32 PM Alexander Popov <[email protected]> wrote:
>>
>> On 15.08.2020 19:52, Kees Cook wrote:
>>> On Thu, Aug 13, 2020 at 06:19:21PM +0300, Alexander Popov wrote:
>>>> Heap spraying is an exploitation technique that aims to put controlled
>>>> bytes at a predetermined memory location on the heap. Heap spraying for
>>>> exploiting use-after-free in the Linux kernel relies on the fact that on
>>>> kmalloc(), the slab allocator returns the address of the memory that was
>>>> recently freed. Allocating a kernel object with the same size and
>>>> controlled contents allows overwriting the vulnerable freed object.
>>>>
>>>> Let's extract slab freelist quarantine from KASAN functionality and
>>>> call it CONFIG_SLAB_QUARANTINE. This feature breaks widespread heap
>>>> spraying technique used for exploiting use-after-free vulnerabilities
>>>> in the kernel code.
>>>>
>>>> If this feature is enabled, freed allocations are stored in the quarantine
>>>> and can't be instantly reallocated and overwritten by the exploit
>>>> performing heap spraying.
>>>
>>> It may be worth clarifying that this is specifically only direct UAF and
>>> doesn't help with spray-and-overflow-into-a-neighboring-object attacks
>>> (i.e. both tend to use sprays, but the former doesn't depend on a write
>>> overflow).
>>
>> Andrey Konovalov wrote:
>>> If quarantine is to be used without the rest of KASAN, I'd prefer for
>>> it to be separated from KASAN completely: move to e.g. mm/quarantine.c
>>> and don't mention KASAN in function/config names.
>>
>> Hmm, making quarantine completely separate from KASAN would cause some trouble.
>>
>> Currently, in many special places the allocator calls KASAN handlers:
>> kasan_cache_create()
>> kasan_slab_free()
>> kasan_kmalloc_large()
>> kasan_krealloc()
>> kasan_slab_alloc()
>> kasan_kmalloc()
>> kasan_cache_shrink()
>> kasan_cache_shutdown()
>> and some others.
>> These functions do a lot of interesting things and also work with the quarantine
>> using these helpers:
>> quarantine_put()
>> quarantine_reduce()
>> quarantine_remove_cache()
>>
>> Making quarantine completely separate from KASAN would require moving some
>> internal logic of these KASAN handlers into the allocator code.
>
> It doesn't look like there's much KASAN-specific logic there.
>
> All those quarantine_*() calls are either at the beginning or at the
> end of some kasan annotations, so it should be quite easy to move
> those out. E.g. quarantine_reduce() can be moved together with the
> gfpflags_allow_blocking(flags) check and put before kasan_kmalloc()
> calls (or maybe also into some other places?), quarantine_put() can be
> put after kasan_slab_free(), etc.
>
>> In this patch I used another approach that doesn't require changing the API
>> between the allocators and KASAN. I added linux/mm/kasan/slab_quarantine.c with slim
>> KASAN handlers that implement the minimal functionality needed for quarantine.
>>
>> Do you think that it's a bad solution?
>
> This solution doesn't look clean. Here you provide a second KASAN
> runtime implementation, parallel to the original one, which only does
> quarantine. It seems much cleaner to put the quarantine logic into a
> separate module, which can be used either independently or together
> with KASAN built on top of it.
That sounds reasonable, I agree. Thanks, Andrey.
Added it to my TODO list.
First, I'm going to focus on exploring the security properties of the quarantine,
and then I'll do the refactoring that you and Kees propose.
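For the record, the rough interface I imagine for such a standalone module (all names and signatures below are hypothetical):

/* Hypothetical mm/quarantine.h for a KASAN-independent slab quarantine. */
struct kmem_cache;

/* Defer freeing of @object from @cache; returns true if it was queued. */
bool quarantine_put(struct kmem_cache *cache, void *object);

/* Drain part of the quarantine once it grows past its memory budget. */
void quarantine_reduce(void);

/* Flush all quarantined objects of @cache (for cache shrink/destroy). */
void quarantine_remove_cache(struct kmem_cache *cache);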
Best regards,
Alexander