2023-11-06 20:10:47

Subject: [PATCH RFC 00/20] kasan: save mempool stack traces

From: Andrey Konovalov <[email protected]>

This series updates KASAN to save alloc and free stack traces for
secondary-level allocators that cache and reuse allocations internally
instead of giving them back to the underlying allocator (e.g. mempool).

As a part of this change, introduce and document a set of KASAN hooks:

bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
bool kasan_mempool_poison_object(void *ptr);
void kasan_mempool_unpoison_object(void *ptr, size_t size);

and use them in the mempool code.
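
For illustration, here is roughly how a caching subsystem is expected to
pair these hooks (a minimal sketch for this cover letter only; the
single-slot cache below is made up, the real users are mempool, skbuff,
and io_uring):

struct obj_cache {
	void *slot;		/* one cached slab/kmalloc object */
	size_t obj_size;
};

static bool obj_cache_put(struct obj_cache *cache, void *obj)
{
	if (cache->slot)
		return false;
	/* Checks for double-free/invalid-free bugs and poisons the object. */
	if (!kasan_mempool_poison_object(obj))
		return false;	/* buggy free: do not cache the object */
	cache->slot = obj;
	return true;
}

static void *obj_cache_get(struct obj_cache *cache)
{
	void *obj = cache->slot;

	if (!obj)
		return NULL;
	cache->slot = NULL;
	/* Unpoisons the object; an alloc stack trace is saved for slab objects. */
	kasan_mempool_unpoison_object(obj, cache->obj_size);
	return obj;
}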

Besides mempool, skbuff and io_uring also cache allocations and already
use KASAN hooks to poison those. Their code is updated to use the new
mempool hooks.

The new hooks save alloc and free stack traces (for normal kmalloc and
slab objects; stack traces for large kmalloc objects and page_alloc are
not supported by KASAN yet), improve the readability of the users' code,
and also allow the users to prevent double-free and invalid-free bugs;
see the patches for the details.

I'm posting this series as an RFC, as it has a few non-trivial-to-resolve
conflicts with the stack depot eviction patches. I'll rebase the series and
resolve the conflicts once the stack depot patches are in the mm tree.

Andrey Konovalov (20):
kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
kasan: move kasan_mempool_poison_object
kasan: document kasan_mempool_poison_object
kasan: add return value for kasan_mempool_poison_object
kasan: introduce kasan_mempool_unpoison_object
kasan: introduce kasan_mempool_poison_pages
kasan: introduce kasan_mempool_unpoison_pages
kasan: clean up __kasan_mempool_poison_object
kasan: save free stack traces for slab mempools
kasan: clean up and rename ____kasan_kmalloc
kasan: introduce poison_kmalloc_large_redzone
kasan: save alloc stack traces for mempool
mempool: use new mempool KASAN hooks
mempool: introduce mempool_use_prealloc_only
kasan: add mempool tests
kasan: rename pagealloc tests
kasan: reorder tests
kasan: rename and document kasan_(un)poison_object_data
skbuff: use mempool KASAN hooks
io_uring: use mempool KASAN hook

include/linux/kasan.h | 161 +++++++-
include/linux/mempool.h | 2 +
io_uring/alloc_cache.h | 5 +-
mm/kasan/common.c | 221 ++++++----
mm/kasan/kasan_test.c | 876 +++++++++++++++++++++++++++-------------
mm/mempool.c | 49 ++-
mm/slab.c | 10 +-
mm/slub.c | 4 +-
net/core/skbuff.c | 10 +-
9 files changed, 940 insertions(+), 398 deletions(-)

--
2.25.1


2023-11-06 20:11:14

Subject: [PATCH RFC 01/20] kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object

From: Andrey Konovalov <[email protected]>

Rename kasan_slab_free_mempool to kasan_mempool_poison_object.

kasan_slab_free_mempool is a slightly confusing name: it is unclear
whether this function poisons the object when it is freed into a mempool
or does something when the object is freed from a mempool back to the
underlying allocator.

The new name also aligns with other mempool-related KASAN hooks added in
the following patches in this series.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 8 ++++----
io_uring/alloc_cache.h | 3 +--
mm/kasan/common.c | 4 ++--
mm/mempool.c | 2 +-
4 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 72cb693b075b..6310435f528b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -172,11 +172,11 @@ static __always_inline void kasan_kfree_large(void *ptr)
__kasan_kfree_large(ptr, _RET_IP_);
}

-void __kasan_slab_free_mempool(void *ptr, unsigned long ip);
-static __always_inline void kasan_slab_free_mempool(void *ptr)
+void __kasan_mempool_poison_object(void *ptr, unsigned long ip);
+static __always_inline void kasan_mempool_poison_object(void *ptr)
{
if (kasan_enabled())
- __kasan_slab_free_mempool(ptr, _RET_IP_);
+ __kasan_mempool_poison_object(ptr, _RET_IP_);
}

void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
@@ -256,7 +256,7 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init
return false;
}
static inline void kasan_kfree_large(void *ptr) {}
-static inline void kasan_slab_free_mempool(void *ptr) {}
+static inline void kasan_mempool_poison_object(void *ptr) {}
static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
gfp_t flags, bool init)
{
diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 241245cb54a6..8de0414e8efe 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -16,8 +16,7 @@ static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
if (cache->nr_cached < cache->max_cached) {
cache->nr_cached++;
wq_stack_add_head(&entry->node, &cache->list);
- /* KASAN poisons object */
- kasan_slab_free_mempool(entry);
+ kasan_mempool_poison_object(entry);
return true;
}
return false;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 256930da578a..e42d6f349ae2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -261,7 +261,7 @@ static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)

/*
* The object will be poisoned by kasan_poison_pages() or
- * kasan_slab_free_mempool().
+ * kasan_mempool_poison_object().
*/

return false;
@@ -272,7 +272,7 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
____kasan_kfree_large(ptr, ip);
}

-void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
+void __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
struct folio *folio;

diff --git a/mm/mempool.c b/mm/mempool.c
index 734bcf5afbb7..768cb39dc5e2 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -107,7 +107,7 @@ static inline void poison_element(mempool_t *pool, void *element)
static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
{
if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
- kasan_slab_free_mempool(element);
+ kasan_mempool_poison_object(element);
else if (pool->alloc == mempool_alloc_pages)
kasan_poison_pages(element, (unsigned long)pool->pool_data,
false);
--
2.25.1

2023-11-06 20:11:23

Subject: [PATCH RFC 05/20] kasan: introduce kasan_mempool_unpoison_object

From: Andrey Konovalov <[email protected]>

Introduce and document a kasan_mempool_unpoison_object hook.

This hook serves as a replacement for the generic kasan_unpoison_range
that the mempool code relies on right now. The mempool code will be
updated to use the new hook in one of the following patches.

For now, define the new hook to be identical to kasan_unpoison_range.
One of the following patches will update it to add stack trace
collection.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 31 +++++++++++++++++++++++++++++++
mm/kasan/common.c | 5 +++++
2 files changed, 36 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 33387e254caa..c5fe303bc1c2 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -228,6 +228,9 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
* bugs and reports them. The caller can use the return value of this function
* to find out if the allocation is buggy.
*
+ * Before the poisoned allocation can be reused, it must be unpoisoned via
+ * kasan_mempool_unpoison_object().
+ *
* This function operates on all slab allocations including large kmalloc
* allocations (the ones returned by kmalloc_large() or by kmalloc() with the
* size > KMALLOC_MAX_SIZE).
@@ -241,6 +244,32 @@ static __always_inline bool kasan_mempool_poison_object(void *ptr)
return true;
}

+void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip);
+/**
+ * kasan_mempool_unpoison_object - Unpoison a mempool slab allocation.
+ * @ptr: Pointer to the slab allocation.
+ * @size: Size to be unpoisoned.
+ *
+ * This function is intended for kernel subsystems that cache slab allocations
+ * to reuse them instead of freeing them back to the slab allocator (e.g.
+ * mempool).
+ *
+ * This function unpoisons a slab allocation that was previously poisoned via
+ * kasan_mempool_poison_object() without initializing its memory. For the
+ * tag-based modes, this function does not assign a new tag to the allocation
+ * and instead restores the original tags based on the pointer value.
+ *
+ * This function operates on all slab allocations including large kmalloc
+ * allocations (the ones returned by kmalloc_large() or by kmalloc() with the
+ * size > KMALLOC_MAX_SIZE).
+ */
+static __always_inline void kasan_mempool_unpoison_object(void *ptr,
+ size_t size)
+{
+ if (kasan_enabled())
+ __kasan_mempool_unpoison_object(ptr, size, _RET_IP_);
+}
+
/*
* Unlike kasan_check_read/write(), kasan_check_byte() is performed even for
* the hardware tag-based mode that doesn't rely on compiler instrumentation.
@@ -301,6 +330,8 @@ static inline bool kasan_mempool_poison_object(void *ptr)
{
return true;
}
+static inline void kasan_mempool_unpoison_object(void *ptr, size_t size) {}
+
static inline bool kasan_check_byte(const void *address)
{
return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 087f93629132..033c860afe51 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -441,6 +441,11 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
}
}

+void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
+{
+ kasan_unpoison(ptr, size, false);
+}
+
bool __kasan_check_byte(const void *address, unsigned long ip)
{
if (!kasan_byte_accessible(address)) {
--
2.25.1

2023-11-06 20:11:23

Subject: [PATCH RFC 04/20] kasan: add return value for kasan_mempool_poison_object

From: Andrey Konovalov <[email protected]>

Add a return value for kasan_mempool_poison_object that lets the caller
know whether the allocation is affected by a double-free or an
invalid-free bug. The caller can use this return value to stop operating
on the object.

Also introduce a check_page_allocation helper function to improve the
code readability.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 17 ++++++++++++-----
mm/kasan/common.c | 21 ++++++++++-----------
2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index bbf6e2fa4ffd..33387e254caa 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -212,7 +212,7 @@ static __always_inline void * __must_check kasan_krealloc(const void *object,
return (void *)object;
}

-void __kasan_mempool_poison_object(void *ptr, unsigned long ip);
+bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
/**
* kasan_mempool_poison_object - Check and poison a mempool slab allocation.
* @ptr: Pointer to the slab allocation.
@@ -225,16 +225,20 @@ void __kasan_mempool_poison_object(void *ptr, unsigned long ip);
* without putting it into the quarantine (for the Generic mode).
*
* This function also performs checks to detect double-free and invalid-free
- * bugs and reports them.
+ * bugs and reports them. The caller can use the return value of this function
+ * to find out if the allocation is buggy.
*
* This function operates on all slab allocations including large kmalloc
* allocations (the ones returned by kmalloc_large() or by kmalloc() with the
* size > KMALLOC_MAX_SIZE).
+ *
+ * Return: true if the allocation can be safely reused; false otherwise.
*/
-static __always_inline void kasan_mempool_poison_object(void *ptr)
+static __always_inline bool kasan_mempool_poison_object(void *ptr)
{
if (kasan_enabled())
- __kasan_mempool_poison_object(ptr, _RET_IP_);
+ return __kasan_mempool_poison_object(ptr, _RET_IP_);
+ return true;
}

/*
@@ -293,7 +297,10 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
{
return (void *)object;
}
-static inline void kasan_mempool_poison_object(void *ptr) {}
+static inline bool kasan_mempool_poison_object(void *ptr)
+{
+ return true;
+}
static inline bool kasan_check_byte(const void *address)
{
return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 69f4c66f0da3..087f93629132 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -244,7 +244,7 @@ bool __kasan_slab_free(struct kmem_cache *cache, void *object,
return ____kasan_slab_free(cache, object, ip, true, init);
}

-static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
+static inline bool check_page_allocation(void *ptr, unsigned long ip)
{
if (!kasan_arch_is_ready())
return false;
@@ -259,17 +259,14 @@ static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
return true;
}

- /*
- * The object will be poisoned by kasan_poison_pages() or
- * kasan_mempool_poison_object().
- */
-
return false;
}

void __kasan_kfree_large(void *ptr, unsigned long ip)
{
- ____kasan_kfree_large(ptr, ip);
+ check_page_allocation(ptr, ip);
+
+ /* The object will be poisoned by kasan_poison_pages(). */
}

void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
@@ -419,7 +416,7 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
return ____kasan_kmalloc(slab->slab_cache, object, size, flags);
}

-void __kasan_mempool_poison_object(void *ptr, unsigned long ip)
+bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
struct folio *folio;

@@ -432,13 +429,15 @@ void __kasan_mempool_poison_object(void *ptr, unsigned long ip)
* KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
*/
if (unlikely(!folio_test_slab(folio))) {
- if (____kasan_kfree_large(ptr, ip))
- return;
+ if (check_page_allocation(ptr, ip))
+ return false;
kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
+ return true;
} else {
struct slab *slab = folio_slab(folio);

- ____kasan_slab_free(slab->slab_cache, ptr, ip, false, false);
+ return !____kasan_slab_free(slab->slab_cache, ptr, ip,
+ false, false);
}
}

--
2.25.1

2023-11-06 20:11:34

Subject: [PATCH RFC 02/20] kasan: move kasan_mempool_poison_object

From: Andrey Konovalov <[email protected]>

Move kasan_mempool_poison_object after all slab-related KASAN hooks.

This is a preparatory change for the following patches in this series.

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 16 +++++++--------
mm/kasan/common.c | 46 +++++++++++++++++++++----------------------
2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6310435f528b..0d1f925c136d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -172,13 +172,6 @@ static __always_inline void kasan_kfree_large(void *ptr)
__kasan_kfree_large(ptr, _RET_IP_);
}

-void __kasan_mempool_poison_object(void *ptr, unsigned long ip);
-static __always_inline void kasan_mempool_poison_object(void *ptr)
-{
- if (kasan_enabled())
- __kasan_mempool_poison_object(ptr, _RET_IP_);
-}
-
void * __must_check __kasan_slab_alloc(struct kmem_cache *s,
void *object, gfp_t flags, bool init);
static __always_inline void * __must_check kasan_slab_alloc(
@@ -219,6 +212,13 @@ static __always_inline void * __must_check kasan_krealloc(const void *object,
return (void *)object;
}

+void __kasan_mempool_poison_object(void *ptr, unsigned long ip);
+static __always_inline void kasan_mempool_poison_object(void *ptr)
+{
+ if (kasan_enabled())
+ __kasan_mempool_poison_object(ptr, _RET_IP_);
+}
+
/*
* Unlike kasan_check_read/write(), kasan_check_byte() is performed even for
* the hardware tag-based mode that doesn't rely on compiler instrumentation.
@@ -256,7 +256,6 @@ static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init
return false;
}
static inline void kasan_kfree_large(void *ptr) {}
-static inline void kasan_mempool_poison_object(void *ptr) {}
static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
gfp_t flags, bool init)
{
@@ -276,6 +275,7 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
{
return (void *)object;
}
+static inline void kasan_mempool_poison_object(void *ptr) {}
static inline bool kasan_check_byte(const void *address)
{
return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index e42d6f349ae2..69f4c66f0da3 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -272,29 +272,6 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
____kasan_kfree_large(ptr, ip);
}

-void __kasan_mempool_poison_object(void *ptr, unsigned long ip)
-{
- struct folio *folio;
-
- folio = virt_to_folio(ptr);
-
- /*
- * Even though this function is only called for kmem_cache_alloc and
- * kmalloc backed mempool allocations, those allocations can still be
- * !PageSlab() when the size provided to kmalloc is larger than
- * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
- */
- if (unlikely(!folio_test_slab(folio))) {
- if (____kasan_kfree_large(ptr, ip))
- return;
- kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
- } else {
- struct slab *slab = folio_slab(folio);
-
- ____kasan_slab_free(slab->slab_cache, ptr, ip, false, false);
- }
-}
-
void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
void *object, gfp_t flags, bool init)
{
@@ -442,6 +419,29 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
return ____kasan_kmalloc(slab->slab_cache, object, size, flags);
}

+void __kasan_mempool_poison_object(void *ptr, unsigned long ip)
+{
+ struct folio *folio;
+
+ folio = virt_to_folio(ptr);
+
+ /*
+ * Even though this function is only called for kmem_cache_alloc and
+ * kmalloc backed mempool allocations, those allocations can still be
+ * !PageSlab() when the size provided to kmalloc is larger than
+ * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
+ */
+ if (unlikely(!folio_test_slab(folio))) {
+ if (____kasan_kfree_large(ptr, ip))
+ return;
+ kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
+ } else {
+ struct slab *slab = folio_slab(folio);
+
+ ____kasan_slab_free(slab->slab_cache, ptr, ip, false, false);
+ }
+}
+
bool __kasan_check_byte(const void *address, unsigned long ip)
{
if (!kasan_byte_accessible(address)) {
--
2.25.1

2023-11-06 20:12:13

Subject: [PATCH RFC 03/20] kasan: document kasan_mempool_poison_object

From: Andrey Konovalov <[email protected]>

Add documentation comment for kasan_mempool_poison_object.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 0d1f925c136d..bbf6e2fa4ffd 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -213,6 +213,24 @@ static __always_inline void * __must_check kasan_krealloc(const void *object,
}

void __kasan_mempool_poison_object(void *ptr, unsigned long ip);
+/**
+ * kasan_mempool_poison_object - Check and poison a mempool slab allocation.
+ * @ptr: Pointer to the slab allocation.
+ *
+ * This function is intended for kernel subsystems that cache slab allocations
+ * to reuse them instead of freeing them back to the slab allocator (e.g.
+ * mempool).
+ *
+ * This function poisons a slab allocation without initializing its memory and
+ * without putting it into the quarantine (for the Generic mode).
+ *
+ * This function also performs checks to detect double-free and invalid-free
+ * bugs and reports them.
+ *
+ * This function operates on all slab allocations including large kmalloc
+ * allocations (the ones returned by kmalloc_large() or by kmalloc() with the
+ * size > KMALLOC_MAX_SIZE).
+ */
static __always_inline void kasan_mempool_poison_object(void *ptr)
{
if (kasan_enabled())
--
2.25.1

2023-11-06 20:12:25

Subject: [PATCH RFC 06/20] kasan: introduce kasan_mempool_poison_pages

From: Andrey Konovalov <[email protected]>

Introduce and document a kasan_mempool_poison_pages hook to be used
by the mempool code instead of kasan_poison_pages.

Compared to kasan_poison_pages, the new hook:

1. For the tag-based modes, skips checking and poisoning allocations that
were not tagged due to sampling.

2. Checks for double-free and invalid-free bugs.

In the future, kasan_poison_pages can also be updated to handle #2, but
that is out of scope for this series.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 27 +++++++++++++++++++++++++++
mm/kasan/common.c | 23 +++++++++++++++++++++++
2 files changed, 50 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c5fe303bc1c2..de2a695ad34d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -212,6 +212,29 @@ static __always_inline void * __must_check kasan_krealloc(const void *object,
return (void *)object;
}

+bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
+ unsigned long ip);
+/**
+ * kasan_mempool_poison_pages - Check and poison a mempool page allocation.
+ * @page: Pointer to the page allocation.
+ * @order: Order of the allocation.
+ *
+ * This function is intended for kernel subsystems that cache page allocations
+ * to reuse them instead of freeing them back to page_alloc (e.g. mempool).
+ *
+ * This function is similar to kasan_mempool_poison_object() but operates on
+ * page allocations.
+ *
+ * Return: true if the allocation can be safely reused; false otherwise.
+ */
+static __always_inline bool kasan_mempool_poison_pages(struct page *page,
+ unsigned int order)
+{
+ if (kasan_enabled())
+ return __kasan_mempool_poison_pages(page, order, _RET_IP_);
+ return true;
+}
+
bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
/**
* kasan_mempool_poison_object - Check and poison a mempool slab allocation.
@@ -326,6 +349,10 @@ static inline void *kasan_krealloc(const void *object, size_t new_size,
{
return (void *)object;
}
+static inline bool kasan_mempool_poison_pages(struct page *page, unsigned int order)
+{
+ return true;
+}
static inline bool kasan_mempool_poison_object(void *ptr)
{
return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 033c860afe51..9ccc78b20cf2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -416,6 +416,29 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
return ____kasan_kmalloc(slab->slab_cache, object, size, flags);
}

+bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
+ unsigned long ip)
+{
+ unsigned long *ptr;
+
+ if (unlikely(PageHighMem(page)))
+ return true;
+
+ /* Bail out if allocation was excluded due to sampling. */
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ page_kasan_tag(page) == KASAN_TAG_KERNEL)
+ return true;
+
+ ptr = page_address(page);
+
+ if (check_page_allocation(ptr, ip))
+ return false;
+
+ kasan_poison(ptr, PAGE_SIZE << order, KASAN_PAGE_FREE, false);
+
+ return true;
+}
+
bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
struct folio *folio;
--
2.25.1

2023-11-06 20:12:31

Subject: [PATCH RFC 11/20] kasan: introduce poison_kmalloc_large_redzone

From: Andrey Konovalov <[email protected]>

Split out a poison_kmalloc_large_redzone helper from
__kasan_kmalloc_large and use it in the caller's code.

This is a preparatory change for the following patches in this series.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 41 +++++++++++++++++++++++------------------
1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index ceb06d5f169f..b50e4fbaf238 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -353,23 +353,12 @@ void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object
}
EXPORT_SYMBOL(__kasan_kmalloc);

-void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
+static inline void poison_kmalloc_large_redzone(const void *ptr, size_t size,
gfp_t flags)
{
unsigned long redzone_start;
unsigned long redzone_end;

- if (gfpflags_allow_blocking(flags))
- kasan_quarantine_reduce();
-
- if (unlikely(ptr == NULL))
- return NULL;
-
- /*
- * The object has already been unpoisoned by kasan_unpoison_pages() for
- * alloc_pages() or by kasan_krealloc() for krealloc().
- */
-
/*
* The redzone has byte-level precision for the generic mode.
* Partially poison the last object granule to cover the unaligned
@@ -379,12 +368,25 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
kasan_poison_last_granule(ptr, size);

/* Poison the aligned part of the redzone. */
- redzone_start = round_up((unsigned long)(ptr + size),
- KASAN_GRANULE_SIZE);
+ redzone_start = round_up((unsigned long)(ptr + size), KASAN_GRANULE_SIZE);
redzone_end = (unsigned long)ptr + page_size(virt_to_page(ptr));
kasan_poison((void *)redzone_start, redzone_end - redzone_start,
KASAN_PAGE_REDZONE, false);
+}

+void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
+ gfp_t flags)
+{
+ if (gfpflags_allow_blocking(flags))
+ kasan_quarantine_reduce();
+
+ if (unlikely(ptr == NULL))
+ return NULL;
+
+ /* The object has already been unpoisoned by kasan_unpoison_pages(). */
+ poison_kmalloc_large_redzone(ptr, size, flags);
+
+ /* Keep the tag that was set by alloc_pages(). */
return (void *)ptr;
}

@@ -392,6 +394,9 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
{
struct slab *slab;

+ if (gfpflags_allow_blocking(flags))
+ kasan_quarantine_reduce();
+
if (unlikely(object == ZERO_SIZE_PTR))
return (void *)object;

@@ -409,11 +414,11 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag

/* Piggy-back on kmalloc() instrumentation to poison the redzone. */
if (unlikely(!slab))
- return __kasan_kmalloc_large(object, size, flags);
- else {
+ poison_kmalloc_large_redzone(object, size, flags);
+ else
poison_kmalloc_redzone(slab->slab_cache, object, size, flags);
- return (void *)object;
- }
+
+ return (void *)object;
}

bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
--
2.25.1

2023-11-06 20:12:33

Subject: [PATCH RFC 08/20] kasan: clean up __kasan_mempool_poison_object

From: Andrey Konovalov <[email protected]>

Reorganize the code and reword the comment in
__kasan_mempool_poison_object to improve the code readability.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6283f0206ef6..7c28d0a5af2c 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -447,27 +447,22 @@ void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,

bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
- struct folio *folio;
-
- folio = virt_to_folio(ptr);
+ struct folio *folio = virt_to_folio(ptr);
+ struct slab *slab;

/*
- * Even though this function is only called for kmem_cache_alloc and
- * kmalloc backed mempool allocations, those allocations can still be
- * !PageSlab() when the size provided to kmalloc is larger than
- * KMALLOC_MAX_SIZE, and kmalloc falls back onto page_alloc.
+ * This function can be called for large kmalloc allocation that get
+ * their memory from page_alloc. Thus, the folio might not be a slab.
*/
if (unlikely(!folio_test_slab(folio))) {
if (check_page_allocation(ptr, ip))
return false;
kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
return true;
- } else {
- struct slab *slab = folio_slab(folio);
-
- return !____kasan_slab_free(slab->slab_cache, ptr, ip,
- false, false);
}
+
+ slab = folio_slab(folio);
+ return !____kasan_slab_free(slab->slab_cache, ptr, ip, false, false);
}

void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
--
2.25.1

2023-11-06 20:12:36

Subject: [PATCH RFC 10/20] kasan: clean up and rename ____kasan_kmalloc

From: Andrey Konovalov <[email protected]>

Introduce a new poison_kmalloc_redzone helper function that poisons
the redzone of a kmalloc object.

Drop the confusingly named ____kasan_kmalloc function and instead use
poison_kmalloc_redzone along with the other required parts of
____kasan_kmalloc in the callers' code.

This is a preparatory change for the following patches in this series.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 42 ++++++++++++++++++++++--------------------
1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 683d0dad32f2..ceb06d5f169f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -302,26 +302,12 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
return tagged_object;
}

-static inline void *____kasan_kmalloc(struct kmem_cache *cache,
+static inline void poison_kmalloc_redzone(struct kmem_cache *cache,
const void *object, size_t size, gfp_t flags)
{
unsigned long redzone_start;
unsigned long redzone_end;

- if (gfpflags_allow_blocking(flags))
- kasan_quarantine_reduce();
-
- if (unlikely(object == NULL))
- return NULL;
-
- if (is_kfence_address(kasan_reset_tag(object)))
- return (void *)object;
-
- /*
- * The object has already been unpoisoned by kasan_slab_alloc() for
- * kmalloc() or by kasan_krealloc() for krealloc().
- */
-
/*
* The redzone has byte-level precision for the generic mode.
* Partially poison the last object granule to cover the unaligned
@@ -345,14 +331,25 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
if (kasan_stack_collection_enabled() && is_kmalloc_cache(cache))
kasan_save_alloc_info(cache, (void *)object, flags);

- /* Keep the tag that was set by kasan_slab_alloc(). */
- return (void *)object;
}

void * __must_check __kasan_kmalloc(struct kmem_cache *cache, const void *object,
size_t size, gfp_t flags)
{
- return ____kasan_kmalloc(cache, object, size, flags);
+ if (gfpflags_allow_blocking(flags))
+ kasan_quarantine_reduce();
+
+ if (unlikely(object == NULL))
+ return NULL;
+
+ if (is_kfence_address(kasan_reset_tag(object)))
+ return (void *)object;
+
+ /* The object has already been unpoisoned by kasan_slab_alloc(). */
+ poison_kmalloc_redzone(cache, object, size, flags);
+
+ /* Keep the tag that was set by kasan_slab_alloc(). */
+ return (void *)object;
}
EXPORT_SYMBOL(__kasan_kmalloc);

@@ -398,6 +395,9 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
if (unlikely(object == ZERO_SIZE_PTR))
return (void *)object;

+ if (is_kfence_address(kasan_reset_tag(object)))
+ return (void *)object;
+
/*
* Unpoison the object's data.
* Part of it might already have been unpoisoned, but it's unknown
@@ -410,8 +410,10 @@ void * __must_check __kasan_krealloc(const void *object, size_t size, gfp_t flag
/* Piggy-back on kmalloc() instrumentation to poison the redzone. */
if (unlikely(!slab))
return __kasan_kmalloc_large(object, size, flags);
- else
- return ____kasan_kmalloc(slab->slab_cache, object, size, flags);
+ else {
+ poison_kmalloc_redzone(slab->slab_cache, object, size, flags);
+ return (void *)object;
+ }
}

bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
--
2.25.1

2023-11-06 20:12:36

Subject: [PATCH RFC 07/20] kasan: introduce kasan_mempool_unpoison_pages

From: Andrey Konovalov <[email protected]>

Introduce and document a new kasan_mempool_unpoison_pages hook to be used
by the mempool code instead of kasan_unpoison_pages.

This hook is not functionally different from kasan_unpoison_pages, but
using it improves the mempool code readability.
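
As an illustration only (the single-slot page cache below is made up;
mempool becomes the actual user later in the series), the two page hooks
pair up as follows:

static struct page *cached_page;	/* one cached order-N allocation */

static bool page_cache_put(struct page *page, unsigned int order)
{
	if (cached_page)
		return false;
	/* Checks for double-free/invalid-free bugs and poisons the memory. */
	if (!kasan_mempool_poison_pages(page, order))
		return false;
	cached_page = page;
	return true;
}

static struct page *page_cache_get(unsigned int order)
{
	struct page *page = cached_page;

	if (!page)
		return NULL;
	cached_page = NULL;
	/*
	 * Makes the memory accessible again without zeroing it; assigns
	 * a new tag for the tag-based modes.
	 */
	kasan_mempool_unpoison_pages(page, order);
	return page;
}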

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 25 +++++++++++++++++++++++++
mm/kasan/common.c | 6 ++++++
2 files changed, 31 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index de2a695ad34d..f8ebde384bd7 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -225,6 +225,9 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
* This function is similar to kasan_mempool_poison_object() but operates on
* page allocations.
*
+ * Before the poisoned allocation can be reused, it must be unpoisoned via
+ * kasan_mempool_unpoison_pages().
+ *
* Return: true if the allocation can be safely reused; false otherwise.
*/
static __always_inline bool kasan_mempool_poison_pages(struct page *page,
@@ -235,6 +238,27 @@ static __always_inline bool kasan_mempool_poison_pages(struct page *page,
return true;
}

+void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,
+ unsigned long ip);
+/**
+ * kasan_mempool_unpoison_pages - Unpoison a mempool page allocation.
+ * @page: Pointer to the page allocation.
+ * @order: Order of the allocation.
+ *
+ * This function is intended for kernel subsystems that cache page allocations
+ * to reuse them instead of freeing them back to page_alloc (e.g. mempool).
+ *
+ * This function unpoisons a page allocation that was previously poisoned by
+ * kasan_mempool_poison_pages() without zeroing the allocation's memory. For
+ * the tag-based modes, this function assigns a new tag to the allocation.
+ */
+static __always_inline void kasan_mempool_unpoison_pages(struct page *page,
+ unsigned int order)
+{
+ if (kasan_enabled())
+ __kasan_mempool_unpoison_pages(page, order, _RET_IP_);
+}
+
bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
/**
* kasan_mempool_poison_object - Check and poison a mempool slab allocation.
@@ -353,6 +377,7 @@ static inline bool kasan_mempool_poison_pages(struct page *page, unsigned int or
{
return true;
}
+static inline void kasan_mempool_unpoison_pages(struct page *page, unsigned int order) {}
static inline bool kasan_mempool_poison_object(void *ptr)
{
return true;
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 9ccc78b20cf2..6283f0206ef6 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -439,6 +439,12 @@ bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
return true;
}

+void __kasan_mempool_unpoison_pages(struct page *page, unsigned int order,
+ unsigned long ip)
+{
+ __kasan_unpoison_pages(page, order, false);
+}
+
bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
{
struct folio *folio;
--
2.25.1

2023-11-06 20:13:11

Subject: [PATCH RFC 13/20] mempool: use new mempool KASAN hooks

From: Andrey Konovalov <[email protected]>

Update the mempool code to use the new mempool KASAN hooks.

Rely on the return value of kasan_mempool_poison_object and
kasan_mempool_poison_pages to prevent double-free and invalid-free bugs.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/mempool.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/mempool.c b/mm/mempool.c
index 768cb39dc5e2..f67ca6753332 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -104,32 +104,34 @@ static inline void poison_element(mempool_t *pool, void *element)
}
#endif /* CONFIG_DEBUG_SLAB || CONFIG_SLUB_DEBUG_ON */

-static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
+static __always_inline bool kasan_poison_element(mempool_t *pool, void *element)
{
if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
- kasan_mempool_poison_object(element);
+ return kasan_mempool_poison_object(element);
else if (pool->alloc == mempool_alloc_pages)
- kasan_poison_pages(element, (unsigned long)pool->pool_data,
- false);
+ return kasan_mempool_poison_pages(element,
+ (unsigned long)pool->pool_data);
+ return true;
}

static void kasan_unpoison_element(mempool_t *pool, void *element)
{
if (pool->alloc == mempool_kmalloc)
- kasan_unpoison_range(element, (size_t)pool->pool_data);
+ kasan_mempool_unpoison_object(element, (size_t)pool->pool_data);
else if (pool->alloc == mempool_alloc_slab)
- kasan_unpoison_range(element, kmem_cache_size(pool->pool_data));
+ kasan_mempool_unpoison_object(element,
+ kmem_cache_size(pool->pool_data));
else if (pool->alloc == mempool_alloc_pages)
- kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
- false);
+ kasan_mempool_unpoison_pages(element,
+ (unsigned long)pool->pool_data);
}

static __always_inline void add_element(mempool_t *pool, void *element)
{
BUG_ON(pool->curr_nr >= pool->min_nr);
poison_element(pool, element);
- kasan_poison_element(pool, element);
- pool->elements[pool->curr_nr++] = element;
+ if (kasan_poison_element(pool, element))
+ pool->elements[pool->curr_nr++] = element;
}

static void *remove_element(mempool_t *pool)
--
2.25.1

2023-11-06 20:13:20

Subject: [PATCH RFC 14/20] mempool: introduce mempool_use_prealloc_only

From: Andrey Konovalov <[email protected]>

Introduce a new mempool_use_prealloc_only API that tells the mempool to
only use the elements preallocated during the mempool's creation and to
not attempt allocating new ones.

This API is required to test the KASAN poisoning/unpoisoning functionality
in the KASAN tests, but it might also be useful on its own.
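
For example (a sketch only, with arbitrary pool and element sizes), a test
can force all allocations to be served from the preallocated elements:

static void prealloc_only_example(void)
{
	mempool_t *pool;
	void *elem;

	pool = mempool_create_kmalloc_pool(4, 128);
	if (!pool)
		return;

	/* Must be called right after creation, before any mempool_alloc(). */
	mempool_use_prealloc_only(pool);

	/* Served from the 4 preallocated elements; NULL once they run out. */
	elem = mempool_alloc(pool, GFP_KERNEL);
	if (elem)
		mempool_free(elem, pool);

	mempool_destroy(pool);
}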

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/mempool.h | 2 ++
mm/mempool.c | 27 ++++++++++++++++++++++++---
2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/include/linux/mempool.h b/include/linux/mempool.h
index 4aae6c06c5f2..822adf1e7567 100644
--- a/include/linux/mempool.h
+++ b/include/linux/mempool.h
@@ -18,6 +18,7 @@ typedef struct mempool_s {
int min_nr; /* nr of elements at *elements */
int curr_nr; /* Current nr of elements at *elements */
void **elements;
+ bool use_prealloc_only; /* Use only preallocated elements */

void *pool_data;
mempool_alloc_t *alloc;
@@ -48,6 +49,7 @@ extern mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn,
mempool_free_t *free_fn, void *pool_data,
gfp_t gfp_mask, int nid);

+extern void mempool_use_prealloc_only(mempool_t *pool);
extern int mempool_resize(mempool_t *pool, int new_min_nr);
extern void mempool_destroy(mempool_t *pool);
extern void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) __malloc;
diff --git a/mm/mempool.c b/mm/mempool.c
index f67ca6753332..59f7fcd355b3 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -365,6 +365,20 @@ int mempool_resize(mempool_t *pool, int new_min_nr)
}
EXPORT_SYMBOL(mempool_resize);

+/**
+ * mempool_use_prealloc_only - mark a pool to only use preallocated elements
+ * @pool: pointer to the memory pool that should be marked
+ *
+ * This function should only be called right after the pool creation via
+ * mempool_init() or mempool_create() and must not be called concurrently with
+ * mempool_alloc().
+ */
+void mempool_use_prealloc_only(mempool_t *pool)
+{
+ pool->use_prealloc_only = true;
+}
+EXPORT_SYMBOL(mempool_use_prealloc_only);
+
/**
* mempool_alloc - allocate an element from a specific memory pool
* @pool: pointer to the memory pool which was allocated via
@@ -397,9 +411,11 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)

repeat_alloc:

- element = pool->alloc(gfp_temp, pool->pool_data);
- if (likely(element != NULL))
- return element;
+ if (!pool->use_prealloc_only) {
+ element = pool->alloc(gfp_temp, pool->pool_data);
+ if (likely(element != NULL))
+ return element;
+ }

spin_lock_irqsave(&pool->lock, flags);
if (likely(pool->curr_nr)) {
@@ -415,6 +431,11 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
return element;
}

+ if (pool->use_prealloc_only) {
+ spin_unlock_irqrestore(&pool->lock, flags);
+ return NULL;
+ }
+
/*
* We use gfp mask w/o direct reclaim or IO for the first round. If
* alloc failed with that and @pool was empty, retry immediately.
--
2.25.1

2023-11-06 20:13:21

Subject: [PATCH RFC 09/20] kasan: save free stack traces for slab mempools

From: Andrey Konovalov <[email protected]>

Make kasan_mempool_poison_object save free stack traces for slab and
kmalloc mempools when the object is freed into the mempool.

Also simplify and rename ____kasan_slab_free to poison_slab_object and
make a few other readability changes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 5 +++--
mm/kasan/common.c | 20 +++++++++-----------
2 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index f8ebde384bd7..e636a00e26ba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -268,8 +268,9 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
* to reuse them instead of freeing them back to the slab allocator (e.g.
* mempool).
*
- * This function poisons a slab allocation without initializing its memory and
- * without putting it into the quarantine (for the Generic mode).
+ * This function poisons a slab allocation and saves a free stack trace for it
+ * without initializing the allocation's memory and without putting it into the
+ * quarantine (for the Generic mode).
*
* This function also performs checks to detect double-free and invalid-free
* bugs and reports them. The caller can use the return value of this function
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 7c28d0a5af2c..683d0dad32f2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -197,8 +197,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
return (void *)object;
}

-static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
- unsigned long ip, bool quarantine, bool init)
+static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
+ unsigned long ip, bool init)
{
void *tagged_object;

@@ -211,13 +211,12 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
if (is_kfence_address(object))
return false;

- if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
- object)) {
+ if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) {
kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_INVALID_FREE);
return true;
}

- /* RCU slabs could be legally used after free within the RCU period */
+ /* RCU slabs could be legally used after free within the RCU period. */
if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
return false;

@@ -229,19 +228,18 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
KASAN_SLAB_FREE, init);

- if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
- return false;
-
if (kasan_stack_collection_enabled())
kasan_save_free_info(cache, tagged_object);

- return kasan_quarantine_put(cache, object);
+ return false;
}

bool __kasan_slab_free(struct kmem_cache *cache, void *object,
unsigned long ip, bool init)
{
- return ____kasan_slab_free(cache, object, ip, true, init);
+ bool buggy_object = poison_slab_object(cache, object, ip, init);
+
+ return buggy_object ? true : kasan_quarantine_put(cache, object);
}

static inline bool check_page_allocation(void *ptr, unsigned long ip)
@@ -462,7 +460,7 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
}

slab = folio_slab(folio);
- return !____kasan_slab_free(slab->slab_cache, ptr, ip, false, false);
+ return !poison_slab_object(slab->slab_cache, ptr, ip, false);
}

void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
--
2.25.1

2023-11-06 20:13:22

Subject: [PATCH RFC 16/20] kasan: rename pagealloc tests

From: Andrey Konovalov <[email protected]>

Rename "pagealloc" KASAN tests:

1. Use "kmalloc_large" for tests that use large kmalloc allocations.

2. Use "page_alloc" for tests that use page_alloc.

Also clean up the comments.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan_test.c | 51 ++++++++++++++++++++++---------------------
1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 9adbcd04259b..4ea403653a39 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -214,12 +214,13 @@ static void kmalloc_node_oob_right(struct kunit *test)
}

/*
- * These kmalloc_pagealloc_* tests try allocating a memory chunk that doesn't
- * fit into a slab cache and therefore is allocated via the page allocator
- * fallback. Since this kind of fallback is only implemented for SLUB, these
- * tests are limited to that allocator.
+ * The kmalloc_large_* tests below use kmalloc() to allocate a memory chunk
+ * that does not fit into the largest slab cache and therefore is allocated via
+ * the page_alloc fallback for SLUB. SLAB has no such fallback, and thus these
+ * tests are not supported for it.
*/
-static void kmalloc_pagealloc_oob_right(struct kunit *test)
+
+static void kmalloc_large_oob_right(struct kunit *test)
{
char *ptr;
size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
@@ -235,7 +236,7 @@ static void kmalloc_pagealloc_oob_right(struct kunit *test)
kfree(ptr);
}

-static void kmalloc_pagealloc_uaf(struct kunit *test)
+static void kmalloc_large_uaf(struct kunit *test)
{
char *ptr;
size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
@@ -249,7 +250,7 @@ static void kmalloc_pagealloc_uaf(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}

-static void kmalloc_pagealloc_invalid_free(struct kunit *test)
+static void kmalloc_large_invalid_free(struct kunit *test)
{
char *ptr;
size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
@@ -262,7 +263,7 @@ static void kmalloc_pagealloc_invalid_free(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
}

-static void pagealloc_oob_right(struct kunit *test)
+static void page_alloc_oob_right(struct kunit *test)
{
char *ptr;
struct page *pages;
@@ -284,7 +285,7 @@ static void pagealloc_oob_right(struct kunit *test)
free_pages((unsigned long)ptr, order);
}

-static void pagealloc_uaf(struct kunit *test)
+static void page_alloc_uaf(struct kunit *test)
{
char *ptr;
struct page *pages;
@@ -298,15 +299,15 @@ static void pagealloc_uaf(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}

-static void kmalloc_large_oob_right(struct kunit *test)
+/*
+ * Check that KASAN detects an out-of-bounds access for a big object allocated
+ * via kmalloc(). But not as big as to trigger the page_alloc fallback for SLUB.
+ */
+static void kmalloc_big_oob_right(struct kunit *test)
{
char *ptr;
size_t size = KMALLOC_MAX_CACHE_SIZE - 256;

- /*
- * Allocate a chunk that is large enough, but still fits into a slab
- * and does not trigger the page allocator fallback in SLUB.
- */
ptr = kmalloc(size, GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

@@ -404,18 +405,18 @@ static void krealloc_less_oob(struct kunit *test)
krealloc_less_oob_helper(test, 235, 201);
}

-static void krealloc_pagealloc_more_oob(struct kunit *test)
+static void krealloc_large_more_oob(struct kunit *test)
{
- /* page_alloc fallback in only implemented for SLUB. */
+ /* page_alloc fallback is only implemented for SLUB. */
KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201,
KMALLOC_MAX_CACHE_SIZE + 235);
}

-static void krealloc_pagealloc_less_oob(struct kunit *test)
+static void krealloc_large_less_oob(struct kunit *test)
{
- /* page_alloc fallback in only implemented for SLUB. */
+ /* page_alloc fallback is only implemented for SLUB. */
KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);

krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235,
@@ -1822,16 +1823,16 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kmalloc_oob_right),
KUNIT_CASE(kmalloc_oob_left),
KUNIT_CASE(kmalloc_node_oob_right),
- KUNIT_CASE(kmalloc_pagealloc_oob_right),
- KUNIT_CASE(kmalloc_pagealloc_uaf),
- KUNIT_CASE(kmalloc_pagealloc_invalid_free),
- KUNIT_CASE(pagealloc_oob_right),
- KUNIT_CASE(pagealloc_uaf),
KUNIT_CASE(kmalloc_large_oob_right),
+ KUNIT_CASE(kmalloc_large_uaf),
+ KUNIT_CASE(kmalloc_large_invalid_free),
+ KUNIT_CASE(page_alloc_oob_right),
+ KUNIT_CASE(page_alloc_uaf),
+ KUNIT_CASE(kmalloc_big_oob_right),
KUNIT_CASE(krealloc_more_oob),
KUNIT_CASE(krealloc_less_oob),
- KUNIT_CASE(krealloc_pagealloc_more_oob),
- KUNIT_CASE(krealloc_pagealloc_less_oob),
+ KUNIT_CASE(krealloc_large_more_oob),
+ KUNIT_CASE(krealloc_large_less_oob),
KUNIT_CASE(krealloc_uaf),
KUNIT_CASE(kmalloc_oob_16),
KUNIT_CASE(kmalloc_uaf_16),
--
2.25.1

2023-11-06 20:13:32

Subject: [PATCH RFC 17/20] kasan: reorder tests

From: Andrey Konovalov <[email protected]>

Put closely related tests next to each other.

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan_test.c | 418 +++++++++++++++++++++---------------------
1 file changed, 209 insertions(+), 209 deletions(-)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 4ea403653a39..9a1bdd6ca8c4 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -213,6 +213,23 @@ static void kmalloc_node_oob_right(struct kunit *test)
kfree(ptr);
}

+/*
+ * Check that KASAN detects an out-of-bounds access for a big object allocated
+ * via kmalloc(). But not as big as to trigger the page_alloc fallback for SLUB.
+ */
+static void kmalloc_big_oob_right(struct kunit *test)
+{
+ char *ptr;
+ size_t size = KMALLOC_MAX_CACHE_SIZE - 256;
+
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0);
+ kfree(ptr);
+}
+
/*
* The kmalloc_large_* tests below use kmalloc() to allocate a memory chunk
* that does not fit into the largest slab cache and therefore is allocated via
@@ -299,23 +316,6 @@ static void page_alloc_uaf(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
}

-/*
- * Check that KASAN detects an out-of-bounds access for a big object allocated
- * via kmalloc(). But not as big as to trigger the page_alloc fallback for SLUB.
- */
-static void kmalloc_big_oob_right(struct kunit *test)
-{
- char *ptr;
- size_t size = KMALLOC_MAX_CACHE_SIZE - 256;
-
- ptr = kmalloc(size, GFP_KERNEL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
-
- OPTIMIZER_HIDE_VAR(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0);
- kfree(ptr);
-}
-
static void krealloc_more_oob_helper(struct kunit *test,
size_t size1, size_t size2)
{
@@ -698,6 +698,126 @@ static void kmalloc_uaf3(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
}

+static void kmalloc_double_kzfree(struct kunit *test)
+{
+ char *ptr;
+ size_t size = 16;
+
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ kfree_sensitive(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr));
+}
+
+/* Check that ksize() does NOT unpoison whole object. */
+static void ksize_unpoisons_memory(struct kunit *test)
+{
+ char *ptr;
+ size_t size = 128 - KASAN_GRANULE_SIZE - 5;
+ size_t real_size;
+
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ real_size = ksize(ptr);
+ KUNIT_EXPECT_GT(test, real_size, size);
+
+ OPTIMIZER_HIDE_VAR(ptr);
+
+ /* These accesses shouldn't trigger a KASAN report. */
+ ptr[0] = 'x';
+ ptr[size - 1] = 'x';
+
+ /* These must trigger a KASAN report. */
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
+
+ kfree(ptr);
+}
+
+/*
+ * Check that a use-after-free is detected by ksize() and via normal accesses
+ * after it.
+ */
+static void ksize_uaf(struct kunit *test)
+{
+ char *ptr;
+ int size = 128 - KASAN_GRANULE_SIZE;
+
+ ptr = kmalloc(size, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+ kfree(ptr);
+
+ OPTIMIZER_HIDE_VAR(ptr);
+ KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+}
+
+/*
+ * The two tests below check that Generic KASAN prints auxiliary stack traces
+ * for RCU callbacks and workqueues. The reports need to be inspected manually.
+ *
+ * These tests are still enabled for other KASAN modes to make sure that all
+ * modes report bad accesses in tested scenarios.
+ */
+
+static struct kasan_rcu_info {
+ int i;
+ struct rcu_head rcu;
+} *global_rcu_ptr;
+
+static void rcu_uaf_reclaim(struct rcu_head *rp)
+{
+ struct kasan_rcu_info *fp =
+ container_of(rp, struct kasan_rcu_info, rcu);
+
+ kfree(fp);
+ ((volatile struct kasan_rcu_info *)fp)->i;
+}
+
+static void rcu_uaf(struct kunit *test)
+{
+ struct kasan_rcu_info *ptr;
+
+ ptr = kmalloc(sizeof(struct kasan_rcu_info), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+ global_rcu_ptr = rcu_dereference_protected(
+ (struct kasan_rcu_info __rcu *)ptr, NULL);
+
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
+ rcu_barrier());
+}
+
+static void workqueue_uaf_work(struct work_struct *work)
+{
+ kfree(work);
+}
+
+static void workqueue_uaf(struct kunit *test)
+{
+ struct workqueue_struct *workqueue;
+ struct work_struct *work;
+
+ workqueue = create_workqueue("kasan_workqueue_test");
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, workqueue);
+
+ work = kmalloc(sizeof(struct work_struct), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, work);
+
+ INIT_WORK(work, workqueue_uaf_work);
+ queue_work(workqueue, work);
+ destroy_workqueue(workqueue);
+
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ ((volatile struct work_struct *)work)->data);
+}
+
static void kfree_via_page(struct kunit *test)
{
char *ptr;
@@ -748,6 +868,69 @@ static void kmem_cache_oob(struct kunit *test)
kmem_cache_destroy(cache);
}

+static void kmem_cache_double_free(struct kunit *test)
+{
+ char *p;
+ size_t size = 200;
+ struct kmem_cache *cache;
+
+ cache = kmem_cache_create("test_cache", size, 0, 0, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
+
+ p = kmem_cache_alloc(cache, GFP_KERNEL);
+ if (!p) {
+ kunit_err(test, "Allocation failed: %s\n", __func__);
+ kmem_cache_destroy(cache);
+ return;
+ }
+
+ kmem_cache_free(cache, p);
+ KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p));
+ kmem_cache_destroy(cache);
+}
+
+static void kmem_cache_invalid_free(struct kunit *test)
+{
+ char *p;
+ size_t size = 200;
+ struct kmem_cache *cache;
+
+ cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU,
+ NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
+
+ p = kmem_cache_alloc(cache, GFP_KERNEL);
+ if (!p) {
+ kunit_err(test, "Allocation failed: %s\n", __func__);
+ kmem_cache_destroy(cache);
+ return;
+ }
+
+ /* Trigger invalid free, the object doesn't get freed. */
+ KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p + 1));
+
+ /*
+ * Properly free the object to prevent the "Objects remaining in
+ * test_cache on __kmem_cache_shutdown" BUG failure.
+ */
+ kmem_cache_free(cache, p);
+
+ kmem_cache_destroy(cache);
+}
+
+static void empty_cache_ctor(void *object) { }
+
+static void kmem_cache_double_destroy(struct kunit *test)
+{
+ struct kmem_cache *cache;
+
+ /* Provide a constructor to prevent cache merging. */
+ cache = kmem_cache_create("test_cache", 200, 0, 0, empty_cache_ctor);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
+ kmem_cache_destroy(cache);
+ KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_destroy(cache));
+}
+
static void kmem_cache_accounted(struct kunit *test)
{
int i;
@@ -1151,53 +1334,6 @@ static void kasan_global_oob_left(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

-/* Check that ksize() does NOT unpoison whole object. */
-static void ksize_unpoisons_memory(struct kunit *test)
-{
- char *ptr;
- size_t size = 128 - KASAN_GRANULE_SIZE - 5;
- size_t real_size;
-
- ptr = kmalloc(size, GFP_KERNEL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
-
- real_size = ksize(ptr);
- KUNIT_EXPECT_GT(test, real_size, size);
-
- OPTIMIZER_HIDE_VAR(ptr);
-
- /* These accesses shouldn't trigger a KASAN report. */
- ptr[0] = 'x';
- ptr[size - 1] = 'x';
-
- /* These must trigger a KASAN report. */
- if (IS_ENABLED(CONFIG_KASAN_GENERIC))
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
-
- kfree(ptr);
-}
-
-/*
- * Check that a use-after-free is detected by ksize() and via normal accesses
- * after it.
- */
-static void ksize_uaf(struct kunit *test)
-{
- char *ptr;
- int size = 128 - KASAN_GRANULE_SIZE;
-
- ptr = kmalloc(size, GFP_KERNEL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
- kfree(ptr);
-
- OPTIMIZER_HIDE_VAR(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, ksize(ptr));
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
- KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
-}
-
static void kasan_stack_oob(struct kunit *test)
{
char stack_array[10];
@@ -1240,69 +1376,6 @@ static void kasan_alloca_oob_right(struct kunit *test)
KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
}

-static void kmem_cache_double_free(struct kunit *test)
-{
- char *p;
- size_t size = 200;
- struct kmem_cache *cache;
-
- cache = kmem_cache_create("test_cache", size, 0, 0, NULL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
-
- p = kmem_cache_alloc(cache, GFP_KERNEL);
- if (!p) {
- kunit_err(test, "Allocation failed: %s\n", __func__);
- kmem_cache_destroy(cache);
- return;
- }
-
- kmem_cache_free(cache, p);
- KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p));
- kmem_cache_destroy(cache);
-}
-
-static void kmem_cache_invalid_free(struct kunit *test)
-{
- char *p;
- size_t size = 200;
- struct kmem_cache *cache;
-
- cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU,
- NULL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
-
- p = kmem_cache_alloc(cache, GFP_KERNEL);
- if (!p) {
- kunit_err(test, "Allocation failed: %s\n", __func__);
- kmem_cache_destroy(cache);
- return;
- }
-
- /* Trigger invalid free, the object doesn't get freed. */
- KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_free(cache, p + 1));
-
- /*
- * Properly free the object to prevent the "Objects remaining in
- * test_cache on __kmem_cache_shutdown" BUG failure.
- */
- kmem_cache_free(cache, p);
-
- kmem_cache_destroy(cache);
-}
-
-static void empty_cache_ctor(void *object) { }
-
-static void kmem_cache_double_destroy(struct kunit *test)
-{
- struct kmem_cache *cache;
-
- /* Provide a constructor to prevent cache merging. */
- cache = kmem_cache_create("test_cache", 200, 0, 0, empty_cache_ctor);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
- kmem_cache_destroy(cache);
- KUNIT_EXPECT_KASAN_FAIL(test, kmem_cache_destroy(cache));
-}
-
static void kasan_memchr(struct kunit *test)
{
char *ptr;
@@ -1464,79 +1537,6 @@ static void kasan_bitops_tags(struct kunit *test)
kfree(bits);
}

-static void kmalloc_double_kzfree(struct kunit *test)
-{
- char *ptr;
- size_t size = 16;
-
- ptr = kmalloc(size, GFP_KERNEL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
-
- kfree_sensitive(ptr);
- KUNIT_EXPECT_KASAN_FAIL(test, kfree_sensitive(ptr));
-}
-
-/*
- * The two tests below check that Generic KASAN prints auxiliary stack traces
- * for RCU callbacks and workqueues. The reports need to be inspected manually.
- *
- * These tests are still enabled for other KASAN modes to make sure that all
- * modes report bad accesses in tested scenarios.
- */
-
-static struct kasan_rcu_info {
- int i;
- struct rcu_head rcu;
-} *global_rcu_ptr;
-
-static void rcu_uaf_reclaim(struct rcu_head *rp)
-{
- struct kasan_rcu_info *fp =
- container_of(rp, struct kasan_rcu_info, rcu);
-
- kfree(fp);
- ((volatile struct kasan_rcu_info *)fp)->i;
-}
-
-static void rcu_uaf(struct kunit *test)
-{
- struct kasan_rcu_info *ptr;
-
- ptr = kmalloc(sizeof(struct kasan_rcu_info), GFP_KERNEL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
-
- global_rcu_ptr = rcu_dereference_protected(
- (struct kasan_rcu_info __rcu *)ptr, NULL);
-
- KUNIT_EXPECT_KASAN_FAIL(test,
- call_rcu(&global_rcu_ptr->rcu, rcu_uaf_reclaim);
- rcu_barrier());
-}
-
-static void workqueue_uaf_work(struct work_struct *work)
-{
- kfree(work);
-}
-
-static void workqueue_uaf(struct kunit *test)
-{
- struct workqueue_struct *workqueue;
- struct work_struct *work;
-
- workqueue = create_workqueue("kasan_workqueue_test");
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, workqueue);
-
- work = kmalloc(sizeof(struct work_struct), GFP_KERNEL);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, work);
-
- INIT_WORK(work, workqueue_uaf_work);
- queue_work(workqueue, work);
- destroy_workqueue(workqueue);
-
- KUNIT_EXPECT_KASAN_FAIL(test,
- ((volatile struct work_struct *)work)->data);
-}
-
static void vmalloc_helpers_tags(struct kunit *test)
{
void *ptr;
@@ -1823,12 +1823,12 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kmalloc_oob_right),
KUNIT_CASE(kmalloc_oob_left),
KUNIT_CASE(kmalloc_node_oob_right),
+ KUNIT_CASE(kmalloc_big_oob_right),
KUNIT_CASE(kmalloc_large_oob_right),
KUNIT_CASE(kmalloc_large_uaf),
KUNIT_CASE(kmalloc_large_invalid_free),
KUNIT_CASE(page_alloc_oob_right),
KUNIT_CASE(page_alloc_uaf),
- KUNIT_CASE(kmalloc_big_oob_right),
KUNIT_CASE(krealloc_more_oob),
KUNIT_CASE(krealloc_less_oob),
KUNIT_CASE(krealloc_large_more_oob),
@@ -1847,9 +1847,17 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kmalloc_uaf_memset),
KUNIT_CASE(kmalloc_uaf2),
KUNIT_CASE(kmalloc_uaf3),
+ KUNIT_CASE(kmalloc_double_kzfree),
+ KUNIT_CASE(ksize_unpoisons_memory),
+ KUNIT_CASE(ksize_uaf),
+ KUNIT_CASE(rcu_uaf),
+ KUNIT_CASE(workqueue_uaf),
KUNIT_CASE(kfree_via_page),
KUNIT_CASE(kfree_via_phys),
KUNIT_CASE(kmem_cache_oob),
+ KUNIT_CASE(kmem_cache_double_free),
+ KUNIT_CASE(kmem_cache_invalid_free),
+ KUNIT_CASE(kmem_cache_double_destroy),
KUNIT_CASE(kmem_cache_accounted),
KUNIT_CASE(kmem_cache_bulk),
KUNIT_CASE(mempool_kmalloc_oob_right),
@@ -1869,19 +1877,11 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kasan_stack_oob),
KUNIT_CASE(kasan_alloca_oob_left),
KUNIT_CASE(kasan_alloca_oob_right),
- KUNIT_CASE(ksize_unpoisons_memory),
- KUNIT_CASE(ksize_uaf),
- KUNIT_CASE(kmem_cache_double_free),
- KUNIT_CASE(kmem_cache_invalid_free),
- KUNIT_CASE(kmem_cache_double_destroy),
KUNIT_CASE(kasan_memchr),
KUNIT_CASE(kasan_memcmp),
KUNIT_CASE(kasan_strings),
KUNIT_CASE(kasan_bitops_generic),
KUNIT_CASE(kasan_bitops_tags),
- KUNIT_CASE(kmalloc_double_kzfree),
- KUNIT_CASE(rcu_uaf),
- KUNIT_CASE(workqueue_uaf),
KUNIT_CASE(vmalloc_helpers_tags),
KUNIT_CASE(vmalloc_oob),
KUNIT_CASE(vmap_tags),
--
2.25.1

2023-11-06 20:13:35

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH RFC 12/20] kasan: save alloc stack traces for mempool

From: Andrey Konovalov <[email protected]>

Update kasan_mempool_unpoison_object to properly poison the redzone and
save alloc stack traces for kmalloc and slab pools.

As a part of this change, split out and use an unpoison_slab_object helper
function from __kasan_slab_alloc.
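
For illustration, a minimal sketch of how a secondary-level cache is expected
to pair the mempool hooks (my_cache and its element accounting are made up for
this example; the actual users are converted by later patches in the series):

	/* Illustrative only: my_cache is a made-up caching structure. */
	struct my_cache {
		void *elements[16];
		int nr;
	};

	/* Return an element to the cache instead of freeing it. */
	static void cache_put(struct my_cache *c, void *elem)
	{
		/*
		 * Poison the element and record a free stack trace; do not
		 * cache it if KASAN reports a double-free or invalid-free.
		 */
		if (!kasan_mempool_poison_object(elem))
			return;
		c->elements[c->nr++] = elem;
	}

	/* Hand out a cached element of a known allocation size. */
	static void *cache_get(struct my_cache *c, size_t size)
	{
		void *elem = c->elements[--c->nr];

		/*
		 * Unpoison the element, restore the kmalloc redzone, and
		 * record an alloc stack trace.
		 */
		kasan_mempool_unpoison_object(elem, size);
		return elem;
	}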

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 7 +++---
mm/kasan/common.c | 50 ++++++++++++++++++++++++++++++++++---------
2 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index e636a00e26ba..7392c5d89b92 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -303,9 +303,10 @@ void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip);
* mempool).
*
* This function unpoisons a slab allocation that was previously poisoned via
- * kasan_mempool_poison_object() without initializing its memory. For the
- * tag-based modes, this function does not assign a new tag to the allocation
- * and instead restores the original tags based on the pointer value.
+ * kasan_mempool_poison_object() and saves an alloc stack trace for it without
+ * initializing the allocation's memory. For the tag-based modes, this function
+ * does not assign a new tag to the allocation and instead restores the
+ * original tags based on the pointer value.
*
* This function operates on all slab allocations including large kmalloc
* allocations (the ones returned by kmalloc_large() or by kmalloc() with the
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index b50e4fbaf238..65850d37fd27 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -267,6 +267,20 @@ void __kasan_kfree_large(void *ptr, unsigned long ip)
/* The object will be poisoned by kasan_poison_pages(). */
}

+void unpoison_slab_object(struct kmem_cache *cache, void *object, gfp_t flags,
+ bool init)
+{
+ /*
+ * Unpoison the whole object. For kmalloc() allocations,
+ * poison_kmalloc_redzone() will do precise poisoning.
+ */
+ kasan_unpoison(object, cache->object_size, init);
+
+ /* Save alloc info (if possible) for non-kmalloc() allocations. */
+ if (kasan_stack_collection_enabled() && !is_kmalloc_cache(cache))
+ kasan_save_alloc_info(cache, object, flags);
+}
+
void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
void *object, gfp_t flags, bool init)
{
@@ -289,15 +303,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
tag = assign_tag(cache, object, false);
tagged_object = set_tag(object, tag);

- /*
- * Unpoison the whole object.
- * For kmalloc() allocations, kasan_kmalloc() will do precise poisoning.
- */
- kasan_unpoison(tagged_object, cache->object_size, init);
-
- /* Save alloc info (if possible) for non-kmalloc() allocations. */
- if (kasan_stack_collection_enabled() && !is_kmalloc_cache(cache))
- kasan_save_alloc_info(cache, tagged_object, flags);
+ /* Unpoison the object and save alloc info for non-kmalloc() allocations. */
+ unpoison_slab_object(cache, tagged_object, flags, init);

return tagged_object;
}
@@ -472,7 +479,30 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)

void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
{
- kasan_unpoison(ptr, size, false);
+ struct slab *slab;
+ gfp_t flags = 0; /* Might be executing under a lock. */
+
+ if (is_kfence_address(kasan_reset_tag(ptr)))
+ return;
+
+ slab = virt_to_slab(ptr);
+
+ /*
+ * This function can be called for large kmalloc allocations that get
+ * their memory from page_alloc.
+ */
+ if (unlikely(!slab)) {
+ kasan_unpoison(ptr, size, false);
+ poison_kmalloc_large_redzone(ptr, size, flags);
+ return;
+ }
+
+ /* Unpoison the object and save alloc info for non-kmalloc() allocations. */
+ unpoison_slab_object(slab->slab_cache, ptr, flags, false);
+
+ /* Poison the redzone and save alloc info for kmalloc() allocations. */
+ if (is_kmalloc_cache(slab->slab_cache))
+ poison_kmalloc_redzone(slab->slab_cache, ptr, size, flags);
}

bool __kasan_check_byte(const void *address, unsigned long ip)
--
2.25.1

2023-11-06 20:13:47

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH RFC 15/20] kasan: add mempool tests

From: Andrey Konovalov <[email protected]>

Add KASAN tests for mempool.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan_test.c | 325 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 325 insertions(+)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 8281eb42464b..9adbcd04259b 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -13,6 +13,7 @@
#include <linux/io.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
+#include <linux/mempool.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/module.h>
@@ -798,6 +799,318 @@ static void kmem_cache_bulk(struct kunit *test)
kmem_cache_destroy(cache);
}

+static void *mempool_prepare_kmalloc(struct kunit *test, mempool_t *pool, size_t size)
+{
+ int pool_size = 4;
+ int ret;
+ void *elem;
+
+ memset(pool, 0, sizeof(*pool));
+ ret = mempool_init_kmalloc_pool(pool, pool_size, size);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ /* Tell mempool to only use preallocated elements. */
+ mempool_use_prealloc_only(pool);
+
+ /*
+ * Allocate one element to prevent mempool from freeing elements to the
+ * underlying allocator and instead make it add them to the element
+ * list when the tests trigger double-free and invalid-free bugs.
+ * This allows testing KASAN annotations in add_element().
+ */
+ elem = mempool_alloc(pool, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elem);
+
+ return elem;
+}
+
+static struct kmem_cache *mempool_prepare_slab(struct kunit *test, mempool_t *pool, size_t size)
+{
+ struct kmem_cache *cache;
+ int pool_size = 4;
+ int ret;
+
+ cache = kmem_cache_create("test_cache", size, 0, 0, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
+
+ memset(pool, 0, sizeof(*pool));
+ ret = mempool_init_slab_pool(pool, pool_size, cache);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ mempool_use_prealloc_only(pool);
+
+ /*
+ * Do not allocate one preallocated element, as we skip the double-free
+ * and invalid-free tests for slab mempool for simplicity.
+ */
+
+ return cache;
+}
+
+static void *mempool_prepare_page(struct kunit *test, mempool_t *pool, int order)
+{
+ int pool_size = 4;
+ int ret;
+ void *elem;
+
+ memset(pool, 0, sizeof(*pool));
+ ret = mempool_init_page_pool(pool, pool_size, order);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ mempool_use_prealloc_only(pool);
+
+ elem = mempool_alloc(pool, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elem);
+
+ return elem;
+}
+
+static void mempool_oob_right_helper(struct kunit *test, mempool_t *pool, size_t size)
+{
+ char *elem;
+
+ elem = mempool_alloc(pool, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elem);
+
+ OPTIMIZER_HIDE_VAR(elem);
+
+ if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+ KUNIT_EXPECT_KASAN_FAIL(test, elem[size] = 'x');
+ else
+ KUNIT_EXPECT_KASAN_FAIL(test,
+ elem[round_up(size, KASAN_GRANULE_SIZE)] = 'x');
+
+ mempool_free(elem, pool);
+}
+
+static void mempool_kmalloc_oob_right(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = 128 - KASAN_GRANULE_SIZE - 5;
+ void *extra_elem;
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_oob_right_helper(test, &pool, size);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_kmalloc_large_oob_right(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
+ void *extra_elem;
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_oob_right_helper(test, &pool, size);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_slab_oob_right(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = 123;
+ struct kmem_cache *cache;
+
+ cache = mempool_prepare_slab(test, &pool, size);
+
+ mempool_oob_right_helper(test, &pool, size);
+
+ mempool_exit(&pool);
+ kmem_cache_destroy(cache);
+}
+
+/*
+ * Skip the out-of-bounds test for page mempool. With Generic KASAN, page
+ * allocations have no redzones, and thus the out-of-bounds detection is not
+ * guaranteed; see https://bugzilla.kernel.org/show_bug.cgi?id=210503. With
+ * the tag-based KASAN modes, the neighboring allocation might have the same
+ * tag; see https://bugzilla.kernel.org/show_bug.cgi?id=203505.
+ */
+
+static void mempool_uaf_helper(struct kunit *test, mempool_t *pool, bool page)
+{
+ char *elem, *ptr;
+
+ elem = mempool_alloc(pool, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elem);
+
+ mempool_free(elem, pool);
+
+ ptr = page ? page_address((struct page *)elem) : elem;
+ KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[0]);
+}
+
+static void mempool_kmalloc_uaf(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = 128;
+ void *extra_elem;
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_uaf_helper(test, &pool, false);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_kmalloc_large_uaf(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
+ void *extra_elem;
+
+ /* page_alloc fallback is only implemented for SLUB. */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_uaf_helper(test, &pool, false);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_slab_uaf(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = 123;
+ struct kmem_cache *cache;
+
+ cache = mempool_prepare_slab(test, &pool, size);
+
+ mempool_uaf_helper(test, &pool, false);
+
+ mempool_exit(&pool);
+ kmem_cache_destroy(cache);
+}
+
+static void mempool_page_alloc_uaf(struct kunit *test)
+{
+ mempool_t pool;
+ int order = 2;
+ void *extra_elem;
+
+ extra_elem = mempool_prepare_page(test, &pool, order);
+
+ mempool_uaf_helper(test, &pool, true);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_double_free_helper(struct kunit *test, mempool_t *pool)
+{
+ char *elem;
+
+ elem = mempool_alloc(pool, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elem);
+
+ mempool_free(elem, pool);
+
+ KUNIT_EXPECT_KASAN_FAIL(test, mempool_free(elem, pool));
+}
+
+static void mempool_kmalloc_double_free(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = 128;
+ char *extra_elem;
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_double_free_helper(test, &pool);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_kmalloc_large_double_free(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
+ char *extra_elem;
+
+ /* page_alloc fallback is only implemented for SLUB. */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_double_free_helper(test, &pool);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_page_alloc_double_free(struct kunit *test)
+{
+ mempool_t pool;
+ int order = 2;
+ char *extra_elem;
+
+ extra_elem = mempool_prepare_page(test, &pool, order);
+
+ mempool_double_free_helper(test, &pool);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_kmalloc_invalid_free_helper(struct kunit *test, mempool_t *pool)
+{
+ char *elem;
+
+ elem = mempool_alloc(pool, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, elem);
+
+ KUNIT_EXPECT_KASAN_FAIL(test, mempool_free(elem + 1, pool));
+
+ mempool_free(elem, pool);
+}
+
+static void mempool_kmalloc_invalid_free(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = 128;
+ char *extra_elem;
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_kmalloc_invalid_free_helper(test, &pool);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+static void mempool_kmalloc_large_invalid_free(struct kunit *test)
+{
+ mempool_t pool;
+ size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
+ char *extra_elem;
+
+ /* page_alloc fallback is only implemented for SLUB. */
+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
+
+ extra_elem = mempool_prepare_kmalloc(test, &pool, size);
+
+ mempool_kmalloc_invalid_free_helper(test, &pool);
+
+ mempool_free(extra_elem, &pool);
+ mempool_exit(&pool);
+}
+
+/*
+ * Skip the invalid-free test for page mempool. The invalid-free detection only
+ * works for compound pages and mempool preallocates all page elements without
+ * the __GFP_COMP flag.
+ */
+
static char global_array[10];

static void kasan_global_oob_right(struct kunit *test)
@@ -1538,6 +1851,18 @@ static struct kunit_case kasan_kunit_test_cases[] = {
KUNIT_CASE(kmem_cache_oob),
KUNIT_CASE(kmem_cache_accounted),
KUNIT_CASE(kmem_cache_bulk),
+ KUNIT_CASE(mempool_kmalloc_oob_right),
+ KUNIT_CASE(mempool_kmalloc_large_oob_right),
+ KUNIT_CASE(mempool_slab_oob_right),
+ KUNIT_CASE(mempool_kmalloc_uaf),
+ KUNIT_CASE(mempool_kmalloc_large_uaf),
+ KUNIT_CASE(mempool_slab_uaf),
+ KUNIT_CASE(mempool_page_alloc_uaf),
+ KUNIT_CASE(mempool_kmalloc_double_free),
+ KUNIT_CASE(mempool_kmalloc_large_double_free),
+ KUNIT_CASE(mempool_page_alloc_double_free),
+ KUNIT_CASE(mempool_kmalloc_invalid_free),
+ KUNIT_CASE(mempool_kmalloc_large_invalid_free),
KUNIT_CASE(kasan_global_oob_right),
KUNIT_CASE(kasan_global_oob_left),
KUNIT_CASE(kasan_stack_oob),
--
2.25.1

2023-11-06 20:14:40

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH RFC 20/20] io_uring: use mempool KASAN hook

From: Andrey Konovalov <[email protected]>

Use the proper kasan_mempool_unpoison_object hook for unpoisoning cached
objects.

A future change might also update io_uring to check the return value of
kasan_mempool_poison_object to prevent double-free and invalid-free bugs.
This proves to be non-trivial given the way io_uring currently caches
objects, so it is left out of scope for this series.
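
For reference, a rough sketch of what such a check could look like on the put
path (simplified and illustrative only: the function and field names mirror
the get path in the diff below rather than io_uring's actual internals, and
the cache capacity check is omitted):

	/* Illustrative sketch only; not io_uring's actual put path. */
	static inline bool io_alloc_cache_put(struct io_alloc_cache *cache,
					      struct io_cache_entry *entry)
	{
		/*
		 * Poison the entry and record a free stack trace; refuse to
		 * cache it if KASAN reports a double-free or invalid-free.
		 */
		if (!kasan_mempool_poison_object(entry))
			return false;

		/* Push the entry onto the cache's singly-linked list. */
		entry->node.next = cache->list.next;
		cache->list.next = &entry->node;
		cache->nr_cached++;
		return true;
	}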

Signed-off-by: Andrey Konovalov <[email protected]>
---
io_uring/alloc_cache.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/alloc_cache.h b/io_uring/alloc_cache.h
index 8de0414e8efe..bf2fb26a6539 100644
--- a/io_uring/alloc_cache.h
+++ b/io_uring/alloc_cache.h
@@ -33,7 +33,7 @@ static inline struct io_cache_entry *io_alloc_cache_get(struct io_alloc_cache *c
struct io_cache_entry *entry;

entry = container_of(cache->list.next, struct io_cache_entry, node);
- kasan_unpoison_range(entry, cache->elem_size);
+ kasan_mempool_unpoison_object(entry, cache->elem_size);
cache->list.next = cache->list.next->next;
cache->nr_cached--;
return entry;
--
2.25.1

2023-11-06 20:14:44

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH RFC 19/20] skbuff: use mempool KASAN hooks

From: Andrey Konovalov <[email protected]>

Instead of using slab-internal KASAN hooks for poisoning and unpoisoning
cached objects, use the proper mempool KASAN hooks.

Also check the return value of kasan_mempool_poison_object to prevent
double-free and invalid-free bugs.

Signed-off-by: Andrey Konovalov <[email protected]>
---
net/core/skbuff.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 63bb6526399d..bb75b4272992 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -337,7 +337,7 @@ static struct sk_buff *napi_skb_cache_get(void)
}

skb = nc->skb_cache[--nc->skb_count];
- kasan_unpoison_new_object(skbuff_cache, skb);
+ kasan_mempool_unpoison_object(skb, kmem_cache_size(skbuff_cache));

return skb;
}
@@ -1309,13 +1309,15 @@ static void napi_skb_cache_put(struct sk_buff *skb)
struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
u32 i;

- kasan_poison_new_object(skbuff_cache, skb);
+ if (!kasan_mempool_poison_object(skb))
+ return;
+
nc->skb_cache[nc->skb_count++] = skb;

if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
- kasan_unpoison_new_object(skbuff_cache,
- nc->skb_cache[i]);
+ kasan_mempool_unpoison_object(nc->skb_cache[i],
+ kmem_cache_size(skbuff_cache));

kmem_cache_free_bulk(skbuff_cache, NAPI_SKB_CACHE_HALF,
nc->skb_cache + NAPI_SKB_CACHE_HALF);
--
2.25.1

2023-11-06 20:14:46

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH RFC 18/20] kasan: rename and document kasan_(un)poison_object_data

From: Andrey Konovalov <[email protected]>

Rename kasan_unpoison_object_data to kasan_unpoison_new_object and add
a documentation comment. Do the same for kasan_poison_object_data.

The new names and the comments should make it clear to the users that these
hooks are intended for internal use by the slab allocator.

The following patch will remove non-slab-internal uses of these hooks.

No functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 35 +++++++++++++++++++++++++++--------
mm/kasan/common.c | 4 ++--
mm/slab.c | 10 ++++------
mm/slub.c | 4 ++--
net/core/skbuff.c | 8 ++++----
5 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 7392c5d89b92..d49e3d4c099e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -129,20 +129,39 @@ static __always_inline void kasan_poison_slab(struct slab *slab)
__kasan_poison_slab(slab);
}

-void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
-static __always_inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+void __kasan_unpoison_new_object(struct kmem_cache *cache, void *object);
+/**
+ * kasan_unpoison_new_object - Temporarily unpoison a new slab object.
+ * @cache: Cache the object belong to.
+ * @object: Pointer to the object.
+ *
+ * This function is intended for the slab allocator's internal use. It
+ * temporarily unpoisons an object from a newly allocated slab without doing
+ * anything else. The object must later be repoisoned by
+ * kasan_poison_new_object().
+ */
+static __always_inline void kasan_unpoison_new_object(struct kmem_cache *cache,
void *object)
{
if (kasan_enabled())
- __kasan_unpoison_object_data(cache, object);
+ __kasan_unpoison_new_object(cache, object);
}

-void __kasan_poison_object_data(struct kmem_cache *cache, void *object);
-static __always_inline void kasan_poison_object_data(struct kmem_cache *cache,
+void __kasan_poison_new_object(struct kmem_cache *cache, void *object);
+/**
+ * kasan_poison_new_object - Repoison a new slab object.
+ * @cache: Cache the object belongs to.
+ * @object: Pointer to the object.
+ *
+ * This function is intended for the slab allocator's internal use. It
+ * repoisons an object that was previously unpoisoned by
+ * kasan_unpoison_new_object() without doing anything else.
+ */
+static __always_inline void kasan_poison_new_object(struct kmem_cache *cache,
void *object)
{
if (kasan_enabled())
- __kasan_poison_object_data(cache, object);
+ __kasan_poison_new_object(cache, object);
}

void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
@@ -342,9 +361,9 @@ static inline bool kasan_unpoison_pages(struct page *page, unsigned int order,
return false;
}
static inline void kasan_poison_slab(struct slab *slab) {}
-static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
+static inline void kasan_unpoison_new_object(struct kmem_cache *cache,
void *object) {}
-static inline void kasan_poison_object_data(struct kmem_cache *cache,
+static inline void kasan_poison_new_object(struct kmem_cache *cache,
void *object) {}
static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 65850d37fd27..9f11be6b00a8 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -133,12 +133,12 @@ void __kasan_poison_slab(struct slab *slab)
KASAN_SLAB_REDZONE, false);
}

-void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_unpoison_new_object(struct kmem_cache *cache, void *object)
{
kasan_unpoison(object, cache->object_size, false);
}

-void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
+void __kasan_poison_new_object(struct kmem_cache *cache, void *object)
{
kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
KASAN_SLAB_REDZONE, false);
diff --git a/mm/slab.c b/mm/slab.c
index 9ad3d0f2d1a5..773c79e153f3 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2327,11 +2327,9 @@ static void cache_init_objs_debug(struct kmem_cache *cachep, struct slab *slab)
* They must also be threaded.
*/
if (cachep->ctor && !(cachep->flags & SLAB_POISON)) {
- kasan_unpoison_object_data(cachep,
- objp + obj_offset(cachep));
+ kasan_unpoison_new_object(cachep, objp + obj_offset(cachep));
cachep->ctor(objp + obj_offset(cachep));
- kasan_poison_object_data(
- cachep, objp + obj_offset(cachep));
+ kasan_poison_new_object(cachep, objp + obj_offset(cachep));
}

if (cachep->flags & SLAB_RED_ZONE) {
@@ -2472,9 +2470,9 @@ static void cache_init_objs(struct kmem_cache *cachep,

/* constructor could break poison info */
if (DEBUG == 0 && cachep->ctor) {
- kasan_unpoison_object_data(cachep, objp);
+ kasan_unpoison_new_object(cachep, objp);
cachep->ctor(objp);
- kasan_poison_object_data(cachep, objp);
+ kasan_poison_new_object(cachep, objp);
}

if (!shuffled)
diff --git a/mm/slub.c b/mm/slub.c
index 63d281dfacdb..973f091ec5d1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1849,9 +1849,9 @@ static void *setup_object(struct kmem_cache *s, void *object)
setup_object_debug(s, object);
object = kasan_init_slab_obj(s, object);
if (unlikely(s->ctor)) {
- kasan_unpoison_object_data(s, object);
+ kasan_unpoison_new_object(s, object);
s->ctor(object);
- kasan_poison_object_data(s, object);
+ kasan_poison_new_object(s, object);
}
return object;
}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b157efea5dea..63bb6526399d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -337,7 +337,7 @@ static struct sk_buff *napi_skb_cache_get(void)
}

skb = nc->skb_cache[--nc->skb_count];
- kasan_unpoison_object_data(skbuff_cache, skb);
+ kasan_unpoison_new_object(skbuff_cache, skb);

return skb;
}
@@ -1309,13 +1309,13 @@ static void napi_skb_cache_put(struct sk_buff *skb)
struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
u32 i;

- kasan_poison_object_data(skbuff_cache, skb);
+ kasan_poison_new_object(skbuff_cache, skb);
nc->skb_cache[nc->skb_count++] = skb;

if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
- kasan_unpoison_object_data(skbuff_cache,
- nc->skb_cache[i]);
+ kasan_unpoison_new_object(skbuff_cache,
+ nc->skb_cache[i]);

kmem_cache_free_bulk(skbuff_cache, NAPI_SKB_CACHE_HALF,
nc->skb_cache + NAPI_SKB_CACHE_HALF);
--
2.25.1

2023-11-22 17:13:54

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH RFC 00/20] kasan: save mempool stack traces

On Mon, Nov 06, 2023 at 09:10PM +0100, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
>
> This series updates KASAN to save alloc and free stack traces for
> secondary-level allocators that cache and reuse allocations internally
> instead of giving them back to the underlying allocator (e.g. mempool).

Nice.

> As a part of this change, introduce and document a set of KASAN hooks:
>
> bool kasan_mempool_poison_pages(struct page *page, unsigned int order);
> void kasan_mempool_unpoison_pages(struct page *page, unsigned int order);
> bool kasan_mempool_poison_object(void *ptr);
> void kasan_mempool_unpoison_object(void *ptr, size_t size);
>
> and use them in the mempool code.
>
> Besides mempool, skbuff and io_uring also cache allocations and already
> use KASAN hooks to poison those. Their code is updated to use the new
> mempool hooks.
>
> The new hooks save alloc and free stack traces (for normal kmalloc and
> slab objects; stack traces for large kmalloc objects and page_alloc are
> not supported by KASAN yet), improve the readability of the users' code,
> and also allow the users to prevent double-free and invalid-free bugs;
> see the patches for the details.
>
> I'm posting this series as an RFC, as it has a few non-trivial-to-resolve
> conflicts with the stack depot eviction patches. I'll rebase the series and
> resolve the conflicts once the stack depot patches are in the mm tree.
>
> Andrey Konovalov (20):
> kasan: rename kasan_slab_free_mempool to kasan_mempool_poison_object
> kasan: move kasan_mempool_poison_object
> kasan: document kasan_mempool_poison_object
> kasan: add return value for kasan_mempool_poison_object
> kasan: introduce kasan_mempool_unpoison_object
> kasan: introduce kasan_mempool_poison_pages
> kasan: introduce kasan_mempool_unpoison_pages
> kasan: clean up __kasan_mempool_poison_object
> kasan: save free stack traces for slab mempools
> kasan: clean up and rename ____kasan_kmalloc
> kasan: introduce poison_kmalloc_large_redzone
> kasan: save alloc stack traces for mempool
> mempool: use new mempool KASAN hooks
> mempool: introduce mempool_use_prealloc_only
> kasan: add mempool tests
> kasan: rename pagealloc tests
> kasan: reorder tests
> kasan: rename and document kasan_(un)poison_object_data
> skbuff: use mempool KASAN hooks
> io_uring: use mempool KASAN hook
>
> include/linux/kasan.h | 161 +++++++-
> include/linux/mempool.h | 2 +
> io_uring/alloc_cache.h | 5 +-
> mm/kasan/common.c | 221 ++++++----
> mm/kasan/kasan_test.c | 876 +++++++++++++++++++++++++++-------------
> mm/mempool.c | 49 ++-
> mm/slab.c | 10 +-
> mm/slub.c | 4 +-
> net/core/skbuff.c | 10 +-
> 9 files changed, 940 insertions(+), 398 deletions(-)

Overall LGTM and the majority of it is cleanups, so I think once the
stack depot patches are in the mm tree, just send v1 of this series.

2023-11-22 17:21:23

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH RFC 14/20] mempool: introduce mempool_use_prealloc_only

On Mon, Nov 06, 2023 at 09:10PM +0100, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
>
> Introduce a new mempool_use_prealloc_only API that tells the mempool to
> only use the elements preallocated during the mempool's creation and to
> not attempt allocating new ones.
>
> This API is required to test the KASAN poisoning/unpoisoning functionality
> in KASAN tests, but it might be also useful on its own.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> ---
> include/linux/mempool.h | 2 ++
> mm/mempool.c | 27 ++++++++++++++++++++++++---
> 2 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mempool.h b/include/linux/mempool.h
> index 4aae6c06c5f2..822adf1e7567 100644
> --- a/include/linux/mempool.h
> +++ b/include/linux/mempool.h
> @@ -18,6 +18,7 @@ typedef struct mempool_s {
> int min_nr; /* nr of elements at *elements */
> int curr_nr; /* Current nr of elements at *elements */
> void **elements;
> + bool use_prealloc_only; /* Use only preallocated elements */

This increases the struct size from 56 to 64 bytes (64 bit arch).
mempool_t is embedded in lots of other larger structs, and this may
result in some unwanted bloat.

Is there a way to achieve the same thing without adding a new bool to
the mempool struct?

It seems a little excessive only for the purpose of the tests.

2023-11-23 18:07:29

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH RFC 00/20] kasan: save mempool stack traces

On Wed, Nov 22, 2023 at 6:13 PM Marco Elver <[email protected]> wrote:
>
> On Mon, Nov 06, 2023 at 09:10PM +0100, [email protected] wrote:
> > From: Andrey Konovalov <[email protected]>
> >
> > This series updates KASAN to save alloc and free stack traces for
> > secondary-level allocators that cache and reuse allocations internally
> > instead of giving them back to the underlying allocator (e.g. mempool).
>
> Nice.

Thanks! :)

> Overall LGTM and the majority of it is cleanups, so I think once the
> stack depot patches are in the mm tree, just send v1 of this series.

Will do, thank you for looking at the patches!

2023-11-23 18:07:31

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH RFC 14/20] mempool: introduce mempool_use_prealloc_only

On Wed, Nov 22, 2023 at 6:21 PM Marco Elver <[email protected]> wrote:
>
> On Mon, Nov 06, 2023 at 09:10PM +0100, [email protected] wrote:
> > From: Andrey Konovalov <[email protected]>
> >
> > Introduce a new mempool_use_prealloc_only API that tells the mempool to
> > only use the elements preallocated during the mempool's creation and to
> > not attempt allocating new ones.
> >
> > This API is required to test the KASAN poisoning/unpoisoning functionality
> > in KASAN tests, but it might be also useful on its own.
> >
> > Signed-off-by: Andrey Konovalov <[email protected]>
> > ---
> > include/linux/mempool.h | 2 ++
> > mm/mempool.c | 27 ++++++++++++++++++++++++---
> > 2 files changed, 26 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/mempool.h b/include/linux/mempool.h
> > index 4aae6c06c5f2..822adf1e7567 100644
> > --- a/include/linux/mempool.h
> > +++ b/include/linux/mempool.h
> > @@ -18,6 +18,7 @@ typedef struct mempool_s {
> > int min_nr; /* nr of elements at *elements */
> > int curr_nr; /* Current nr of elements at *elements */
> > void **elements;
> > + bool use_prealloc_only; /* Use only preallocated elements */
>
> This increases the struct size from 56 to 64 bytes (64 bit arch).
> mempool_t is embedded in lots of other larger structs, and this may
> result in some unwanted bloat.
>
> Is there a way to achieve the same thing without adding a new bool to
> the mempool struct?

We could split out the part of mempool_alloc that uses preallocated
elements without the waiting part and expose it in another API
function named something like mempool_alloc_preallocated. Would that
be better?
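
Roughly (untested, just to illustrate the idea), the new helper would be the
fast path of mempool_alloc without the gfp handling and without falling back
to the underlying allocator or waiting:

	/* Untested sketch of the proposed helper. */
	void *mempool_alloc_preallocated(mempool_t *pool)
	{
		void *element;
		unsigned long flags;

		spin_lock_irqsave(&pool->lock, flags);
		if (likely(pool->curr_nr)) {
			element = remove_element(pool);
			spin_unlock_irqrestore(&pool->lock, flags);
			return element;
		}
		spin_unlock_irqrestore(&pool->lock, flags);

		return NULL;
	}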

2023-11-23 18:48:36

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH RFC 14/20] mempool: introduce mempool_use_prealloc_only

On Thu, 23 Nov 2023 at 19:06, Andrey Konovalov <[email protected]> wrote:
>
> On Wed, Nov 22, 2023 at 6:21 PM Marco Elver <[email protected]> wrote:
> >
> > On Mon, Nov 06, 2023 at 09:10PM +0100, [email protected] wrote:
> > > From: Andrey Konovalov <[email protected]>
> > >
> > > Introduce a new mempool_use_prealloc_only API that tells the mempool to
> > > only use the elements preallocated during the mempool's creation and to
> > > not attempt allocating new ones.
> > >
> > > This API is required to test the KASAN poisoning/unpoisoning functionality
> > > in KASAN tests, but it might be also useful on its own.
> > >
> > > Signed-off-by: Andrey Konovalov <[email protected]>
> > > ---
> > > include/linux/mempool.h | 2 ++
> > > mm/mempool.c | 27 ++++++++++++++++++++++++---
> > > 2 files changed, 26 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/include/linux/mempool.h b/include/linux/mempool.h
> > > index 4aae6c06c5f2..822adf1e7567 100644
> > > --- a/include/linux/mempool.h
> > > +++ b/include/linux/mempool.h
> > > @@ -18,6 +18,7 @@ typedef struct mempool_s {
> > > int min_nr; /* nr of elements at *elements */
> > > int curr_nr; /* Current nr of elements at *elements */
> > > void **elements;
> > > + bool use_prealloc_only; /* Use only preallocated elements */
> >
> > This increases the struct size from 56 to 64 bytes (64 bit arch).
> > mempool_t is embedded in lots of other larger structs, and this may
> > result in some unwanted bloat.
> >
> > Is there a way to achieve the same thing without adding a new bool to
> > the mempool struct?
>
> We could split out the part of mempool_alloc that uses preallocated
> elements without what waiting part and expose it in another API
> function named something like mempool_alloc_preallocated. Would that
> be better?

Yes, that might be better. As long as other users of mempool (especially if
KASAN is disabled) are unaffected, it should be fine.