From: Andrey Konovalov <[email protected]>
This series makes the tag-based KASAN modes use a ring buffer for storing
stack depot handles for alloc/free stack traces for slab objects instead
of per-object metadata. This ring buffer is referred to as the stack ring.
On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.
On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used to
print the alloc/free stack traces in the report: one entry for alloc and
one for free.
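In condensed form, the scheme looks like this (a simplified sketch; the
complete implementation is in the "kasan: implement stack ring for
tag-based modes" patch below):

	struct kasan_stack_ring_entry {
		void *ptr;			/* tagged address of the object */
		size_t size;
		u32 pid;
		depot_stack_handle_t stack;	/* stack depot handle */
		bool is_free;			/* free vs. alloc stack trace */
	};

	/* On each alloc/free: claim the next slot and fill it in. */
	pos = atomic64_fetch_add(1, &stack_ring.pos);
	entry = &stack_ring.entries[pos % KASAN_STACK_RING_SIZE];
	/* ... store the tagged pointer, pid, and stack handle in *entry ... */

	/* On each report: walk backwards from pos, newest entries first, and
	 * pick the first alloc and the first free entry whose tagged pointer
	 * matches the accessed object. */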
The advantages of this approach over storing stack trace handles in
per-object metadata with the tag-based KASAN modes:
- Allows finding relevant stack traces for use-after-free bugs without
  using quarantine for freed memory. (Currently, if the object was
  reallocated multiple times, the report contains the latest alloc/free
  stack traces, not necessarily the ones relevant to the buggy allocation.)
- Allows better identification and marking of use-after-free bugs,
  effectively making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
- Has fixed memory overhead.
The disadvantage:
- If the affected object was allocated/freed long before the bug happened
and the stack trace events were purged from the stack ring, the report
will have no stack traces.
Discussion
==========
The proposed implementation of the stack ring uses a single ring buffer for
the whole kernel. This might lead to contention due to atomic accesses to
the ring buffer index on multicore systems.
At this point, it is unknown whether the performance impact of this
contention would be significant compared to the slowdown introduced by
stack trace collection itself; the planned changes to stack trace
collection are described in the section below.
For now, the proposed implementation is deemed good enough, but this
might need to be revisited once stack trace collection becomes faster.
A considered alternative is to keep a separate ring buffer for each CPU
and then iterate over all of them when printing a bug report. This approach
requires somehow figuring out which of the stack rings has the freshest
stack traces for an object if multiple stack rings have them.
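A rough sketch of what that alternative could look like (not implemented;
all identifiers below are made up): each entry would need some globally
comparable notion of freshness, for example a timestamp.

	static DEFINE_PER_CPU(struct kasan_stack_ring, cpu_stack_ring);
	static DEFINE_PER_CPU(u64, cpu_ring_pos);

	/* Writer side: no cross-CPU contention on the ring position. */
	pos = this_cpu_inc_return(cpu_ring_pos);
	entry = &this_cpu_ptr(&cpu_stack_ring)->entries[pos % ring_size];
	entry->timestamp = local_clock();	/* only approximately ordered across CPUs */

	/* Report side: for_each_possible_cpu(), scan that CPU's ring, and
	 * keep the matching entries with the newest timestamps. The
	 * approximate cross-CPU ordering of local_clock() is exactly the
	 * "which ring has the freshest stack traces" problem noted above. */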
Further plans
=============
This series is a part of an effort to make KASAN stack trace collection
suitable for production. This requires stack trace collection to be fast
and memory-bounded.
The planned steps are:
1. Speed up stack trace collection (potentially, by using SCS;
patches on-hold until steps #2 and #3 are completed).
2. Keep stack trace handles in the stack ring (this series).
3. Add a memory-bounded mode to stack depot or provide an alternative
memory-bounded stack storage.
4. Potentially, implement stack trace collection sampling to minimize
the performance impact.
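For step #4, the sampling could take roughly the following shape (purely
illustrative; kasan_stack_sample below is a made-up parameter):

	/* Hypothetical: save only every kasan_stack_sample-th stack trace. */
	static DEFINE_PER_CPU(long, stack_save_count);

	static bool kasan_sample_stack_save(void)
	{
		if (kasan_stack_sample <= 1)
			return true;
		return this_cpu_inc_return(stack_save_count) % kasan_stack_sample == 0;
	}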
Thanks!
---
Changes v1->v2:
- Rework synchronization in the stack ring implementation.
- Dynamically allocate stack ring based on the kasan.stack_ring_size
command-line parameter.
- Multiple less significant changes, see the notes in patches for details.
Andrey Konovalov (33):
kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
kasan: rename kasan_set_*_info to kasan_save_*_info
kasan: move is_kmalloc check out of save_alloc_info
kasan: split save_alloc_info implementations
kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
kasan: introduce kasan_print_aux_stacks
kasan: introduce kasan_get_alloc_track
kasan: introduce kasan_init_object_meta
kasan: clear metadata functions for tag-based modes
kasan: move kasan_get_*_meta to generic.c
kasan: introduce kasan_requires_meta
kasan: introduce kasan_init_cache_meta
kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
kasan: only define kasan_metadata_size for Generic mode
kasan: only define kasan_never_merge for Generic mode
kasan: only define metadata offsets for Generic mode
kasan: only define metadata structs for Generic mode
kasan: only define kasan_cache_create for Generic mode
kasan: pass tagged pointers to kasan_save_alloc/free_info
kasan: move kasan_get_alloc/free_track definitions
kasan: cosmetic changes in report.c
kasan: use virt_addr_valid in kasan_addr_to_page/slab
kasan: use kasan_addr_to_slab in print_address_description
kasan: make kasan_addr_to_page static
kasan: simplify print_report
kasan: introduce complete_report_info
kasan: fill in cache and object in complete_report_info
kasan: rework function arguments in report.c
kasan: introduce kasan_complete_mode_report_info
kasan: implement stack ring for tag-based modes
kasan: support kasan.stacktrace for SW_TAGS
kasan: dynamically allocate stack ring entries
kasan: better identify bug types for tag-based modes
Documentation/dev-tools/kasan.rst | 15 ++-
include/linux/kasan.h | 55 ++++------
include/linux/slab.h | 2 +-
lib/Kconfig.kasan | 8 --
mm/kasan/common.c | 175 +++---------------------------
mm/kasan/generic.c | 154 ++++++++++++++++++++++++--
mm/kasan/hw_tags.c | 39 +------
mm/kasan/kasan.h | 173 ++++++++++++++++++++---------
mm/kasan/report.c | 117 +++++++++-----------
mm/kasan/report_generic.c | 45 +++++++-
mm/kasan/report_tags.c | 128 +++++++++++++++++-----
mm/kasan/sw_tags.c | 5 +-
mm/kasan/tags.c | 138 ++++++++++++++++++-----
13 files changed, 620 insertions(+), 434 deletions(-)
--
2.25.1
From: Andrey Konovalov <[email protected]>
KASAN prevents merging of slab caches whose objects have per-object
metadata stored in redzones.
As now only the Generic mode uses per-object metadata, define
kasan_never_merge() only for this mode.
Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 18 ++++++------------
mm/kasan/common.c | 8 --------
mm/kasan/generic.c | 8 ++++++++
3 files changed, 14 insertions(+), 20 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 027df7599573..9743d4b3a918 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -103,14 +103,6 @@ struct kasan_cache {
bool is_kmalloc;
};
-slab_flags_t __kasan_never_merge(void);
-static __always_inline slab_flags_t kasan_never_merge(void)
-{
- if (kasan_enabled())
- return __kasan_never_merge();
- return 0;
-}
-
void __kasan_unpoison_range(const void *addr, size_t size);
static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
{
@@ -261,10 +253,6 @@ static __always_inline bool kasan_check_byte(const void *addr)
#else /* CONFIG_KASAN */
-static inline slab_flags_t kasan_never_merge(void)
-{
- return 0;
-}
static inline void kasan_unpoison_range(const void *address, size_t size) {}
static inline void kasan_poison_pages(struct page *page, unsigned int order,
bool init) {}
@@ -325,6 +313,7 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
#ifdef CONFIG_KASAN_GENERIC
size_t kasan_metadata_size(struct kmem_cache *cache);
+slab_flags_t kasan_never_merge(void);
void kasan_cache_shrink(struct kmem_cache *cache);
void kasan_cache_shutdown(struct kmem_cache *cache);
@@ -338,6 +327,11 @@ static inline size_t kasan_metadata_size(struct kmem_cache *cache)
{
return 0;
}
+/* And thus nothing prevents cache merging. */
+static inline slab_flags_t kasan_never_merge(void)
+{
+ return 0;
+}
static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 0cef41f8a60d..e4ff0e4e7a9d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -88,14 +88,6 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
}
#endif /* CONFIG_KASAN_STACK */
-/* Only allow cache merging when no per-object metadata is present. */
-slab_flags_t __kasan_never_merge(void)
-{
- if (kasan_requires_meta())
- return SLAB_KASAN;
- return 0;
-}
-
void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
{
u8 tag;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 806ab92032c3..25333bf3c99f 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,14 @@ DEFINE_ASAN_SET_SHADOW(f3);
DEFINE_ASAN_SET_SHADOW(f5);
DEFINE_ASAN_SET_SHADOW(f8);
+/* Only allow cache merging when no per-object metadata is present. */
+slab_flags_t kasan_never_merge(void)
+{
+ if (!kasan_requires_meta())
+ return 0;
+ return SLAB_KASAN;
+}
+
/*
* Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
* For larger allocations larger redzones are used.
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add a kasan_get_alloc_track() helper that fetches alloc_track for a slab
object and use this helper in the common reporting code.
For now, the implementations of this helper are the same for the Generic
and tag-based modes, but they will diverge later in the series.
This change hides references to alloc_meta from the common reporting code.
This is desired as only the Generic mode will be using per-object metadata
after this series.
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/generic.c | 14 +++++++++++++-
mm/kasan/kasan.h | 4 +++-
mm/kasan/report.c | 8 ++++----
mm/kasan/tags.c | 14 +++++++++++++-
4 files changed, 33 insertions(+), 7 deletions(-)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 98c451a3b01f..f212b9ae57b5 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -381,8 +381,20 @@ void kasan_save_free_info(struct kmem_cache *cache,
*(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREETRACK;
}
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+ void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (!alloc_meta)
+ return NULL;
+
+ return &alloc_meta->alloc_track;
+}
+
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
+ void *object, u8 tag)
{
if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
return NULL;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 30ff341b6d35..b65a51349c51 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -283,8 +283,10 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+ void *object);
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag);
+ void *object, u8 tag);
#if defined(CONFIG_KASAN_GENERIC) && \
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index cd9f5c7fc6db..5d225d7d9c4c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -255,12 +255,12 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
static void describe_object_stacks(struct kmem_cache *cache, void *object,
const void *addr, u8 tag)
{
- struct kasan_alloc_meta *alloc_meta;
+ struct kasan_track *alloc_track;
struct kasan_track *free_track;
- alloc_meta = kasan_get_alloc_meta(cache, object);
- if (alloc_meta) {
- print_track(&alloc_meta->alloc_track, "Allocated");
+ alloc_track = kasan_get_alloc_track(cache, object);
+ if (alloc_track) {
+ print_track(alloc_track, "Allocated");
pr_err("\n");
}
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index e0e5de8ce834..7b1fc8e7c99c 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -38,8 +38,20 @@ void kasan_save_free_info(struct kmem_cache *cache,
kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
}
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+ void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (!alloc_meta)
+ return NULL;
+
+ return &alloc_meta->alloc_track;
+}
+
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
+ void *object, u8 tag)
{
struct kasan_alloc_meta *alloc_meta;
--
2.25.1
From: Andrey Konovalov <[email protected]>
Hide the definitions of alloc_meta_offset and free_meta_offset under
an ifdef CONFIG_KASAN_GENERIC check, as these fields are now only used
when the Generic mode is enabled.
Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9743d4b3a918..a212c2e3f32d 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -98,8 +98,10 @@ static inline bool kasan_has_integrated_init(void)
#ifdef CONFIG_KASAN
struct kasan_cache {
+#ifdef CONFIG_KASAN_GENERIC
int alloc_meta_offset;
int free_meta_offset;
+#endif
bool is_kmalloc;
};
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add a kasan_print_aux_stacks() helper that prints the auxiliary stack
traces for the Generic mode.
This change hides references to alloc_meta from the common reporting code.
This is desired as only the Generic mode will be using per-object metadata
after this series.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 6 ++++++
mm/kasan/report.c | 15 +--------------
mm/kasan/report_generic.c | 20 ++++++++++++++++++++
3 files changed, 27 insertions(+), 14 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 15c718782c1f..30ff341b6d35 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -266,6 +266,12 @@ void kasan_print_address_stack_frame(const void *addr);
static inline void kasan_print_address_stack_frame(const void *addr) { }
#endif
+#ifdef CONFIG_KASAN_GENERIC
+void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
+#else
+static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
+#endif
+
bool kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
void kasan_report_invalid_free(void *object, unsigned long ip, enum kasan_report_type type);
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index fe3f606b3a98..cd9f5c7fc6db 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -270,20 +270,7 @@ static void describe_object_stacks(struct kmem_cache *cache, void *object,
pr_err("\n");
}
-#ifdef CONFIG_KASAN_GENERIC
- if (!alloc_meta)
- return;
- if (alloc_meta->aux_stack[0]) {
- pr_err("Last potentially related work creation:\n");
- stack_depot_print(alloc_meta->aux_stack[0]);
- pr_err("\n");
- }
- if (alloc_meta->aux_stack[1]) {
- pr_err("Second to last potentially related work creation:\n");
- stack_depot_print(alloc_meta->aux_stack[1]);
- pr_err("\n");
- }
-#endif
+ kasan_print_aux_stacks(cache, object);
}
static void describe_object(struct kmem_cache *cache, void *object,
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 6689fb9a919b..348dc207d462 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -132,6 +132,26 @@ void kasan_metadata_fetch_row(char *buffer, void *row)
memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW);
}
+void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (!alloc_meta)
+ return;
+
+ if (alloc_meta->aux_stack[0]) {
+ pr_err("Last potentially related work creation:\n");
+ stack_depot_print(alloc_meta->aux_stack[0]);
+ pr_err("\n");
+ }
+ if (alloc_meta->aux_stack[1]) {
+ pr_err("Second to last potentially related work creation:\n");
+ stack_depot_print(alloc_meta->aux_stack[1]);
+ pr_err("\n");
+ }
+}
+
#ifdef CONFIG_KASAN_STACK
static bool __must_check tokenize_frame_descr(const char **frame_descr,
char *token, size_t max_tok_len,
--
2.25.1
From: Andrey Konovalov <[email protected]>
Right now, kasan_cache_create() assigns SLAB_KASAN for all KASAN modes
and then sets up metadata-related cache parameters for the Generic mode.
SLAB_KASAN is used in two places:
1. In slab_ksize() to account for per-object metadata when
calculating the size of the accessible memory within the object.
2. In slab_common.c via kasan_never_merge() to prevent merging of
caches with per-object metadata.
Both cases are only relevant when per-object metadata is present, which
is only the case with the Generic mode.
Thus, assign SLAB_KASAN and define kasan_cache_create() only for the
Generic mode.
Also update the SLAB_KASAN-related comment.
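For reference, use #1 boils down to the following (heavily condensed from
slab_ksize() in mm/slab.h; an approximation, not the exact code):

	static inline size_t slab_ksize(const struct kmem_cache *s)
	{
		/*
		 * For caches sanitized by KASAN, per-object metadata can live
		 * past object_size, so only object_size bytes are accessible.
		 */
		if (s->flags & SLAB_KASAN)
			return s->object_size;
		/* ... otherwise up to s->size may be usable ... */
		return s->size;
	}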
Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 18 ++++++------------
include/linux/slab.h | 2 +-
mm/kasan/common.c | 16 ----------------
mm/kasan/generic.c | 17 ++++++++++++++++-
4 files changed, 23 insertions(+), 30 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index a212c2e3f32d..d811b3d7d2a1 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -128,15 +128,6 @@ static __always_inline void kasan_unpoison_pages(struct page *page,
__kasan_unpoison_pages(page, order, init);
}
-void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags);
-static __always_inline void kasan_cache_create(struct kmem_cache *cache,
- unsigned int *size, slab_flags_t *flags)
-{
- if (kasan_enabled())
- __kasan_cache_create(cache, size, flags);
-}
-
void __kasan_cache_create_kmalloc(struct kmem_cache *cache);
static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
{
@@ -260,9 +251,6 @@ static inline void kasan_poison_pages(struct page *page, unsigned int order,
bool init) {}
static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
bool init) {}
-static inline void kasan_cache_create(struct kmem_cache *cache,
- unsigned int *size,
- slab_flags_t *flags) {}
static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
static inline void kasan_poison_slab(struct slab *slab) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
@@ -316,6 +304,8 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
size_t kasan_metadata_size(struct kmem_cache *cache);
slab_flags_t kasan_never_merge(void);
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags);
void kasan_cache_shrink(struct kmem_cache *cache);
void kasan_cache_shutdown(struct kmem_cache *cache);
@@ -334,6 +324,10 @@ static inline slab_flags_t kasan_never_merge(void)
{
return 0;
}
+/* And no cache-related metadata initialization is required. */
+static inline void kasan_cache_create(struct kmem_cache *cache,
+ unsigned int *size,
+ slab_flags_t *flags) {}
static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0fefdf528e0d..1c6b7362e82b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -106,7 +106,7 @@
# define SLAB_ACCOUNT 0
#endif
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
#define SLAB_KASAN ((slab_flags_t __force)0x08000000U)
#else
#define SLAB_KASAN 0
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index e4ff0e4e7a9d..89aa97af876e 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -109,22 +109,6 @@ void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
KASAN_PAGE_FREE, init);
}
-void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
- slab_flags_t *flags)
-{
- /*
- * SLAB_KASAN is used to mark caches as ones that are sanitized by
- * KASAN. Currently this flag is used in two places:
- * 1. In slab_ksize() when calculating the size of the accessible
- * memory within the object.
- * 2. In slab_common.c to prevent merging of sanitized caches.
- */
- *flags |= SLAB_KASAN;
-
- if (kasan_requires_meta())
- kasan_init_cache_meta(cache, size);
-}
-
void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
{
cache->kasan_info.is_kmalloc = true;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 25333bf3c99f..f6bef347de87 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -352,11 +352,26 @@ static inline unsigned int optimal_redzone(unsigned int object_size)
object_size <= (1 << 16) - 1024 ? 1024 : 2048;
}
-void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
+ slab_flags_t *flags)
{
unsigned int ok_size;
unsigned int optimal_size;
+ if (!kasan_requires_meta())
+ return;
+
+ /*
+ * SLAB_KASAN is used to mark caches that are sanitized by KASAN
+ * and that thus have per-object metadata.
+ * Currently this flag is used in two places:
+ * 1. In slab_ksize() to account for per-object metadata when
+ * calculating the size of the accessible memory within the object.
+ * 2. In slab_common.c via kasan_never_merge() to prevent merging of
+ * caches with per-object metadata.
+ */
+ *flags |= SLAB_KASAN;
+
ok_size = *size;
/* Add alloc meta into redzone. */
--
2.25.1
From: Andrey Konovalov <[email protected]>
To simplify reading the implementation of print_report(), remove the
tagged_addr variable and rename untagged_addr to addr.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/report.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ac526c10ebff..dc38ada86f85 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -397,17 +397,16 @@ static void print_memory_metadata(const void *addr)
static void print_report(struct kasan_report_info *info)
{
- void *tagged_addr = info->access_addr;
- void *untagged_addr = kasan_reset_tag(tagged_addr);
- u8 tag = get_tag(tagged_addr);
+ void *addr = kasan_reset_tag(info->access_addr);
+ u8 tag = get_tag(info->access_addr);
print_error_description(info);
- if (addr_has_metadata(untagged_addr))
+ if (addr_has_metadata(addr))
kasan_print_tags(tag, info->first_bad_addr);
pr_err("\n");
- if (addr_has_metadata(untagged_addr)) {
- print_address_description(untagged_addr, tag);
+ if (addr_has_metadata(addr)) {
+ print_address_description(addr, tag);
print_memory_metadata(info->first_bad_addr);
} else {
dump_stack_lvl(KERN_ERR);
--
2.25.1
From: Andrey Konovalov <[email protected]>
Drop CONFIG_KASAN_TAGS_IDENTIFY and related code to simplify making
changes to the reporting code.
The dropped functionality will be restored in the following patches in
this series.
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/Kconfig.kasan | 8 --------
mm/kasan/kasan.h | 12 +-----------
mm/kasan/report_tags.c | 28 ----------------------------
mm/kasan/tags.c | 21 ++-------------------
4 files changed, 3 insertions(+), 66 deletions(-)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index f0973da583e0..ca09b1cf8ee9 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -167,14 +167,6 @@ config KASAN_STACK
as well, as it adds inline-style instrumentation that is run
unconditionally.
-config KASAN_TAGS_IDENTIFY
- bool "Memory corruption type identification"
- depends on KASAN_SW_TAGS || KASAN_HW_TAGS
- help
- Enables best-effort identification of the bug types (use-after-free
- or out-of-bounds) at the cost of increased memory consumption.
- Only applicable for the tag-based KASAN modes.
-
config KASAN_VMALLOC
bool "Check accesses to vmalloc allocations"
depends on HAVE_ARCH_KASAN_VMALLOC
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d401fb770f67..15c718782c1f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -169,23 +169,13 @@ struct kasan_track {
depot_stack_handle_t stack;
};
-#if defined(CONFIG_KASAN_TAGS_IDENTIFY) && defined(CONFIG_KASAN_SW_TAGS)
-#define KASAN_NR_FREE_STACKS 5
-#else
-#define KASAN_NR_FREE_STACKS 1
-#endif
-
struct kasan_alloc_meta {
struct kasan_track alloc_track;
/* Generic mode stores free track in kasan_free_meta. */
#ifdef CONFIG_KASAN_GENERIC
depot_stack_handle_t aux_stack[2];
#else
- struct kasan_track free_track[KASAN_NR_FREE_STACKS];
-#endif
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
- u8 free_pointer_tag[KASAN_NR_FREE_STACKS];
- u8 free_track_idx;
+ struct kasan_track free_track;
#endif
};
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index e25d2166e813..35cf3cae4aa4 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -5,37 +5,9 @@
*/
#include "kasan.h"
-#include "../slab.h"
const char *kasan_get_bug_type(struct kasan_report_info *info)
{
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
- struct kasan_alloc_meta *alloc_meta;
- struct kmem_cache *cache;
- struct slab *slab;
- const void *addr;
- void *object;
- u8 tag;
- int i;
-
- tag = get_tag(info->access_addr);
- addr = kasan_reset_tag(info->access_addr);
- slab = kasan_addr_to_slab(addr);
- if (slab) {
- cache = slab->slab_cache;
- object = nearest_obj(cache, slab, (void *)addr);
- alloc_meta = kasan_get_alloc_meta(cache, object);
-
- if (alloc_meta) {
- for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
- if (alloc_meta->free_pointer_tag[i] == tag)
- return "use-after-free";
- }
- }
- return "out-of-bounds";
- }
-#endif
-
/*
* If access_size is a negative number, then it has reason to be
* defined as out-of-bounds bug type.
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 1ba3c8399f72..e0e5de8ce834 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -30,39 +30,22 @@ void kasan_save_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
struct kasan_alloc_meta *alloc_meta;
- u8 idx = 0;
alloc_meta = kasan_get_alloc_meta(cache, object);
if (!alloc_meta)
return;
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
- idx = alloc_meta->free_track_idx;
- alloc_meta->free_pointer_tag[idx] = tag;
- alloc_meta->free_track_idx = (idx + 1) % KASAN_NR_FREE_STACKS;
-#endif
-
- kasan_set_track(&alloc_meta->free_track[idx], GFP_NOWAIT);
+ kasan_set_track(&alloc_meta->free_track, GFP_NOWAIT);
}
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
void *object, u8 tag)
{
struct kasan_alloc_meta *alloc_meta;
- int i = 0;
alloc_meta = kasan_get_alloc_meta(cache, object);
if (!alloc_meta)
return NULL;
-#ifdef CONFIG_KASAN_TAGS_IDENTIFY
- for (i = 0; i < KASAN_NR_FREE_STACKS; i++) {
- if (alloc_meta->free_pointer_tag[i] == tag)
- break;
- }
- if (i == KASAN_NR_FREE_STACKS)
- i = alloc_meta->free_track_idx;
-#endif
-
- return &alloc_meta->free_track[i];
+ return &alloc_meta->free_track;
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Instead of using a large static array, allocate the stack ring dynamically
via memblock_alloc().
The size of the stack ring is controlled by a new kasan.stack_ring_size
command-line parameter. When kasan.stack_ring_size is not provided, the
default value of 32 << 10 (32768) entries is used.
When the stack trace collection is disabled via kasan.stacktrace=off,
the stack ring is not allocated.
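As an illustrative usage example, booting with:

	kasan.stacktrace=on kasan.stack_ring_size=65536

doubles the ring to 65536 entries. Note that the parameter counts entries,
not bytes: the ring occupies stack_ring.size *
sizeof(struct kasan_stack_ring_entry) bytes of memblock memory.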
Signed-off-by: Andrey Konovalov <[email protected]>
---
Changes v1->v2:
- This is a new patch.
---
mm/kasan/kasan.h | 5 +++--
mm/kasan/report_tags.c | 4 ++--
mm/kasan/tags.c | 22 +++++++++++++++++++++-
3 files changed, 26 insertions(+), 5 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 447baf1a7a2e..4afe4db751da 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -252,12 +252,13 @@ struct kasan_stack_ring_entry {
bool is_free;
};
-#define KASAN_STACK_RING_SIZE (32 << 10)
+#define KASAN_STACK_RING_SIZE_DEFAULT (32 << 10)
struct kasan_stack_ring {
rwlock_t lock;
+ size_t size;
atomic64_t pos;
- struct kasan_stack_ring_entry entries[KASAN_STACK_RING_SIZE];
+ struct kasan_stack_ring_entry *entries;
};
#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index a996489e6dac..7e267e69ce19 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -56,11 +56,11 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
* entries relevant to the buggy object can be overwritten.
*/
- for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
+ for (u64 i = pos - 1; i != pos - 1 - stack_ring.size; i--) {
if (alloc_found && free_found)
break;
- entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
+ entry = &stack_ring.entries[i % stack_ring.size];
/* Paired with smp_store_release() in save_stack_info(). */
ptr = (void *)smp_load_acquire(&entry->ptr);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 0eb6cf6717db..fd8c5f919156 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -10,6 +10,7 @@
#include <linux/init.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
+#include <linux/memblock.h>
#include <linux/memory.h>
#include <linux/mm.h>
#include <linux/static_key.h>
@@ -52,6 +53,16 @@ static int __init early_kasan_flag_stacktrace(char *arg)
}
early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
+/* kasan.stack_ring_size=32768 */
+static int __init early_kasan_flag_stack_ring_size(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ return kstrtoul(arg, 0, &stack_ring.size);
+}
+early_param("kasan.stack_ring_size", early_kasan_flag_stack_ring_size);
+
void __init kasan_init_tags(void)
{
switch (kasan_arg_stacktrace) {
@@ -65,6 +76,15 @@ void __init kasan_init_tags(void)
static_branch_enable(&kasan_flag_stacktrace);
break;
}
+
+ if (kasan_stack_collection_enabled()) {
+ if (!stack_ring.size)
+ stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT;
+ stack_ring.entries = memblock_alloc(
+ sizeof(stack_ring.entries[0]) *
+ stack_ring.size,
+ SMP_CACHE_BYTES);
+ }
}
static void save_stack_info(struct kmem_cache *cache, void *object,
@@ -86,7 +106,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
next:
pos = atomic64_fetch_add(1, &stack_ring.pos);
- entry = &stack_ring.entries[pos % KASAN_STACK_RING_SIZE];
+ entry = &stack_ring.entries[pos % stack_ring.size];
/* Detect stack ring entry slots that are being written to. */
old_ptr = READ_ONCE(entry->ptr);
--
2.25.1
From: Andrey Konovalov <[email protected]>
__kasan_metadata_size() calculates the size of the redzone for objects
in a slab cache.
When accounting for presence of kasan_free_meta in the redzone, this
function only compares free_meta_offset with 0. But free_meta_offset could
also be equal to KASAN_NO_FREE_META, which indicates that kasan_free_meta
is not present at all.
Add a comparison with KASAN_NO_FREE_META into __kasan_metadata_size().
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
This is a minor fix that only affects slub_debug runs, so it is probably
not worth backporting.
---
mm/kasan/common.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 707c3a527fcb..b7351b860abf 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -223,8 +223,9 @@ size_t __kasan_metadata_size(struct kmem_cache *cache)
return 0;
return (cache->kasan_info.alloc_meta_offset ?
sizeof(struct kasan_alloc_meta) : 0) +
- (cache->kasan_info.free_meta_offset ?
- sizeof(struct kasan_free_meta) : 0);
+ ((cache->kasan_info.free_meta_offset &&
+ cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
+ sizeof(struct kasan_free_meta) : 0);
}
struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
--
2.25.1
From: Andrey Konovalov <[email protected]>
As kasan_init_cache_meta() is only defined for the Generic mode, it does
not require the CONFIG_KASAN_GENERIC check.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/generic.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 73aea784040a..5125fad76f70 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -367,12 +367,6 @@ void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
/* Continue, since free meta might still fit. */
}
- /* Only the generic mode uses free meta or flexible redzones. */
- if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
- cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
- return;
- }
-
/*
* Add free meta into redzone when it's not possible to store
* it in the object. This is the case when:
--
2.25.1
From: Andrey Konovalov <[email protected]>
Move the implementations of kasan_get_alloc/free_meta() to generic.c,
as the common KASAN code does not use these functions anymore.
Also drop kasan_reset_tag() from the implementation, as the Generic
mode does not tag pointers.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 19 -------------------
mm/kasan/generic.c | 17 +++++++++++++++++
mm/kasan/kasan.h | 14 +++++++-------
3 files changed, 24 insertions(+), 26 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index f57469b6b346..d46bb2b351ff 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -228,25 +228,6 @@ size_t __kasan_metadata_size(struct kmem_cache *cache)
sizeof(struct kasan_free_meta) : 0);
}
-struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
- const void *object)
-{
- if (!cache->kasan_info.alloc_meta_offset)
- return NULL;
- return kasan_reset_tag(object) + cache->kasan_info.alloc_meta_offset;
-}
-
-#ifdef CONFIG_KASAN_GENERIC
-struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
- const void *object)
-{
- BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
- if (cache->kasan_info.free_meta_offset == KASAN_NO_FREE_META)
- return NULL;
- return kasan_reset_tag(object) + cache->kasan_info.free_meta_offset;
-}
-#endif
-
void __kasan_poison_slab(struct slab *slab)
{
struct page *page = slab_page(slab);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 5462ddbc21e6..fa654cb96a0d 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,23 @@ DEFINE_ASAN_SET_SHADOW(f3);
DEFINE_ASAN_SET_SHADOW(f5);
DEFINE_ASAN_SET_SHADOW(f8);
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object)
+{
+ if (!cache->kasan_info.alloc_meta_offset)
+ return NULL;
+ return (void *)object + cache->kasan_info.alloc_meta_offset;
+}
+
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object)
+{
+ BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32);
+ if (cache->kasan_info.free_meta_offset == KASAN_NO_FREE_META)
+ return NULL;
+ return (void *)object + cache->kasan_info.free_meta_offset;
+}
+
void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
{
struct kasan_alloc_meta *alloc_meta;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 2c8c3cce7bc6..fdd577f3eb9d 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -209,13 +209,6 @@ struct kunit_kasan_status {
};
#endif
-struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
- const void *object);
-#ifdef CONFIG_KASAN_GENERIC
-struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
- const void *object);
-#endif
-
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
static inline const void *kasan_shadow_to_mem(const void *shadow_addr)
@@ -281,6 +274,13 @@ struct slab *kasan_addr_to_slab(const void *addr);
void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
+#ifdef CONFIG_KASAN_GENERIC
+struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
+ const void *object);
+struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
+ const void *object);
+#endif
+
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add bug_type and alloc/free_track fields to kasan_report_info and add a
kasan_complete_mode_report_info() function that fills in these fields.
This function is implemented differently for the different KASAN modes.
Change the reporting code to use the filled-in fields instead of
invoking kasan_get_bug_type() and kasan_get_alloc/free_track().
For the Generic mode, kasan_complete_mode_report_info() invokes these
functions instead. For the tag-based modes, only the bug_type field is
filled in; alloc/free_track are handled in the next patch.
Using a single function that fills in these fields is required for the
tag-based modes, as the values for all three fields are determined in a
single procedure implemented in the following patch.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 33 +++++++++++++++++----------------
mm/kasan/report.c | 30 ++++++++++++++----------------
mm/kasan/report_generic.c | 32 +++++++++++++++++---------------
mm/kasan/report_tags.c | 13 +++----------
4 files changed, 51 insertions(+), 57 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b8fa1e50f3d4..7df107dc400a 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -146,6 +146,13 @@ static inline bool kasan_requires_meta(void)
#define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
#define META_ROWS_AROUND_ADDR 2
+#define KASAN_STACK_DEPTH 64
+
+struct kasan_track {
+ u32 pid;
+ depot_stack_handle_t stack;
+};
+
enum kasan_report_type {
KASAN_REPORT_ACCESS,
KASAN_REPORT_INVALID_FREE,
@@ -164,6 +171,11 @@ struct kasan_report_info {
void *first_bad_addr;
struct kmem_cache *cache;
void *object;
+
+ /* Filled in by the mode-specific reporting code. */
+ const char *bug_type;
+ struct kasan_track alloc_track;
+ struct kasan_track free_track;
};
/* Do not change the struct layout: compiler ABI. */
@@ -189,14 +201,7 @@ struct kasan_global {
#endif
};
-/* Structures for keeping alloc and free tracks. */
-
-#define KASAN_STACK_DEPTH 64
-
-struct kasan_track {
- u32 pid;
- depot_stack_handle_t stack;
-};
+/* Structures for keeping alloc and free meta. */
#ifdef CONFIG_KASAN_GENERIC
@@ -270,16 +275,16 @@ static inline bool addr_has_metadata(const void *addr)
#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+void *kasan_find_first_bad_addr(void *addr, size_t size);
+void kasan_complete_mode_report_info(struct kasan_report_info *info);
+void kasan_metadata_fetch_row(char *buffer, void *row);
+
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
void kasan_print_tags(u8 addr_tag, const void *addr);
#else
static inline void kasan_print_tags(u8 addr_tag, const void *addr) { }
#endif
-void *kasan_find_first_bad_addr(void *addr, size_t size);
-const char *kasan_get_bug_type(struct kasan_report_info *info);
-void kasan_metadata_fetch_row(char *buffer, void *row);
-
#if defined(CONFIG_KASAN_STACK)
void kasan_print_address_stack_frame(const void *addr);
#else
@@ -314,10 +319,6 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
void kasan_save_free_info(struct kmem_cache *cache, void *object);
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object);
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag);
#if defined(CONFIG_KASAN_GENERIC) && \
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ec018f849992..39e8e5a80b82 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -185,8 +185,7 @@ static void print_error_description(struct kasan_report_info *info)
return;
}
- pr_err("BUG: KASAN: %s in %pS\n",
- kasan_get_bug_type(info), (void *)info->ip);
+ pr_err("BUG: KASAN: %s in %pS\n", info->bug_type, (void *)info->ip);
if (info->access_size)
pr_err("%s of size %zu at addr %px by task %s/%d\n",
info->is_write ? "Write" : "Read", info->access_size,
@@ -242,31 +241,25 @@ static void describe_object_addr(const void *addr, struct kmem_cache *cache,
(void *)(object_addr + cache->object_size));
}
-static void describe_object_stacks(u8 tag, struct kasan_report_info *info)
+static void describe_object_stacks(struct kasan_report_info *info)
{
- struct kasan_track *alloc_track;
- struct kasan_track *free_track;
-
- alloc_track = kasan_get_alloc_track(info->cache, info->object);
- if (alloc_track) {
- print_track(alloc_track, "Allocated");
+ if (info->alloc_track.stack) {
+ print_track(&info->alloc_track, "Allocated");
pr_err("\n");
}
- free_track = kasan_get_free_track(info->cache, info->object, tag);
- if (free_track) {
- print_track(free_track, "Freed");
+ if (info->free_track.stack) {
+ print_track(&info->free_track, "Freed");
pr_err("\n");
}
kasan_print_aux_stacks(info->cache, info->object);
}
-static void describe_object(const void *addr, u8 tag,
- struct kasan_report_info *info)
+static void describe_object(const void *addr, struct kasan_report_info *info)
{
if (kasan_stack_collection_enabled())
- describe_object_stacks(tag, info);
+ describe_object_stacks(info);
describe_object_addr(addr, info->cache, info->object);
}
@@ -295,7 +288,7 @@ static void print_address_description(void *addr, u8 tag,
pr_err("\n");
if (info->cache && info->object) {
- describe_object(addr, tag, info);
+ describe_object(addr, info);
pr_err("\n");
}
@@ -426,6 +419,9 @@ static void complete_report_info(struct kasan_report_info *info)
info->object = nearest_obj(info->cache, slab, addr);
} else
info->cache = info->object = NULL;
+
+ /* Fill in mode-specific report info fields. */
+ kasan_complete_mode_report_info(info);
}
void kasan_report_invalid_free(void *ptr, unsigned long ip, enum kasan_report_type type)
@@ -443,6 +439,7 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip, enum kasan_report_ty
start_report(&flags, true);
+ memset(&info, 0, sizeof(info));
info.type = type;
info.access_addr = ptr;
info.access_size = 0;
@@ -477,6 +474,7 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
start_report(&irq_flags, true);
+ memset(&info, 0, sizeof(info));
info.type = KASAN_REPORT_ACCESS;
info.access_addr = ptr;
info.access_size = size;
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 74d21786ef09..087c1d8c8145 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -109,7 +109,7 @@ static const char *get_wild_bug_type(struct kasan_report_info *info)
return bug_type;
}
-const char *kasan_get_bug_type(struct kasan_report_info *info)
+static const char *get_bug_type(struct kasan_report_info *info)
{
/*
* If access_size is a negative number, then it has reason to be
@@ -127,25 +127,27 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
return get_wild_bug_type(info);
}
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object)
+void kasan_complete_mode_report_info(struct kasan_report_info *info)
{
struct kasan_alloc_meta *alloc_meta;
+ struct kasan_free_meta *free_meta;
- alloc_meta = kasan_get_alloc_meta(cache, object);
- if (!alloc_meta)
- return NULL;
+ info->bug_type = get_bug_type(info);
- return &alloc_meta->alloc_track;
-}
+ if (!info->cache || !info->object)
+ return;
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
-{
- if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
- return NULL;
- /* Free meta must be present with KASAN_SLAB_FREETRACK. */
- return &kasan_get_free_meta(cache, object)->free_track;
+ alloc_meta = kasan_get_alloc_meta(info->cache, info->object);
+ if (alloc_meta)
+ memcpy(&info->alloc_track, &alloc_meta->alloc_track,
+ sizeof(info->alloc_track));
+
+ if (*(u8 *)kasan_mem_to_shadow(info->object) == KASAN_SLAB_FREETRACK) {
+ /* Free meta must be present with KASAN_SLAB_FREETRACK. */
+ free_meta = kasan_get_free_meta(info->cache, info->object);
+ memcpy(&info->free_track, &free_meta->free_track,
+ sizeof(info->free_track));
+ }
}
void kasan_metadata_fetch_row(char *buffer, void *row)
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 79b6497d8a81..5cbac2cdb177 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -6,7 +6,7 @@
#include "kasan.h"
-const char *kasan_get_bug_type(struct kasan_report_info *info)
+static const char *get_bug_type(struct kasan_report_info *info)
{
/*
* If access_size is a negative number, then it has reason to be
@@ -22,14 +22,7 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
return "invalid-access";
}
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object)
+void kasan_complete_mode_report_info(struct kasan_report_info *info)
{
- return NULL;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
-{
- return NULL;
+ info->bug_type = get_bug_type(info);
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Hide the definitions of kasan_alloc_meta and kasan_free_meta under
an ifdef CONFIG_KASAN_GENERIC check, as these structures are now only
used when the Generic mode is enabled.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6da35370ba37..cae60e4d8842 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -193,14 +193,12 @@ struct kasan_track {
depot_stack_handle_t stack;
};
+#ifdef CONFIG_KASAN_GENERIC
+
struct kasan_alloc_meta {
struct kasan_track alloc_track;
- /* Generic mode stores free track in kasan_free_meta. */
-#ifdef CONFIG_KASAN_GENERIC
+ /* Free track is stored in kasan_free_meta. */
depot_stack_handle_t aux_stack[2];
-#else
- struct kasan_track free_track;
-#endif
};
struct qlist_node {
@@ -219,12 +217,12 @@ struct qlist_node {
* After that, slab allocator stores the freelist pointer in the object.
*/
struct kasan_free_meta {
-#ifdef CONFIG_KASAN_GENERIC
struct qlist_node quarantine_link;
struct kasan_track free_track;
-#endif
};
+#endif /* CONFIG_KASAN_GENERIC */
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
/* Used in KUnit-compatible KASAN tests. */
struct kunit_kasan_status {
--
2.25.1
From: Andrey Konovalov <[email protected]>
Introduce a complete_report_info() function that fills in the
first_bad_addr field of kasan_report_info instead of doing it in
kasan_report_*().
This function will be extended in the next patch.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 5 ++++-
mm/kasan/report.c | 17 +++++++++++++++--
2 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 4fddfdb08abf..7e07115873d3 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -153,12 +153,15 @@ enum kasan_report_type {
};
struct kasan_report_info {
+ /* Filled in by kasan_report_*(). */
enum kasan_report_type type;
void *access_addr;
- void *first_bad_addr;
size_t access_size;
bool is_write;
unsigned long ip;
+
+ /* Filled in by the common reporting code. */
+ void *first_bad_addr;
};
/* Do not change the struct layout: compiler ABI. */
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index dc38ada86f85..0c2e7a58095d 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -413,6 +413,17 @@ static void print_report(struct kasan_report_info *info)
}
}
+static void complete_report_info(struct kasan_report_info *info)
+{
+ void *addr = kasan_reset_tag(info->access_addr);
+
+ if (info->type == KASAN_REPORT_ACCESS)
+ info->first_bad_addr = kasan_find_first_bad_addr(
+ info->access_addr, info->access_size);
+ else
+ info->first_bad_addr = addr;
+}
+
void kasan_report_invalid_free(void *ptr, unsigned long ip, enum kasan_report_type type)
{
unsigned long flags;
@@ -430,11 +441,12 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip, enum kasan_report_ty
info.type = type;
info.access_addr = ptr;
- info.first_bad_addr = kasan_reset_tag(ptr);
info.access_size = 0;
info.is_write = false;
info.ip = ip;
+ complete_report_info(&info);
+
print_report(&info);
end_report(&flags, ptr);
@@ -463,11 +475,12 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,
info.type = KASAN_REPORT_ACCESS;
info.access_addr = ptr;
- info.first_bad_addr = kasan_find_first_bad_addr(ptr, size);
info.access_size = size;
info.is_write = is_write;
info.ip = ip;
+ complete_report_info(&info);
+
print_report(&info);
end_report(&irq_flags, ptr);
--
2.25.1
From: Andrey Konovalov <[email protected]>
Implement storing stack depot handles for alloc/free stack traces for
slab objects for the tag-based KASAN modes in a ring buffer.
This ring buffer is referred to as the stack ring.
On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.
On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used to
print the alloc/free stack traces in the report: one entry for alloc and
one for free.
The number of entries in the stack ring is fixed in this patch, but one of
the following patches adds a command-line argument to control it.
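The synchronization scheme, condensed from the diff below: the rwlock is
used "inverted", with the concurrent writers taking it for reading and
the report-side reader taking it for writing:

	/* Alloc/free paths: many writers can run concurrently; per-slot
	 * exclusion comes from the pos counter plus a busy-slot marker. */
	read_lock_irqsave(&stack_ring.lock, flags);
	pos = atomic64_fetch_add(1, &stack_ring.pos);
	/* ... try_cmpxchg() marks the slot busy, the fields are filled in,
	 * and the entry is published via smp_store_release(&entry->ptr) ... */
	read_unlock_irqrestore(&stack_ring.lock, flags);

	/* Report path: excludes all writers while walking the ring. */
	write_lock_irqsave(&stack_ring.lock, flags);
	/* ... smp_load_acquire(&entry->ptr) pairs with the release above ... */
	write_unlock_irqrestore(&stack_ring.lock, flags);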
Signed-off-by: Andrey Konovalov <[email protected]>
---
Changes v1->v2:
- Only use the atomic type for pos, use READ/WRITE_ONCE() for the rest.
- Rename KASAN_STACK_RING_ENTRIES to KASAN_STACK_RING_SIZE.
- Rename object local variable in kasan_complete_mode_report_info() to
ptr to match the name in kasan_stack_ring_entry.
- Detect stack ring entry slots that are being written to.
- Use read-write lock to disallow reading half-written stack ring entries.
- Add a comment about the stack ring being best-effort.
---
mm/kasan/kasan.h | 21 ++++++++++++
mm/kasan/report_tags.c | 76 ++++++++++++++++++++++++++++++++++++++++++
mm/kasan/tags.c | 50 +++++++++++++++++++++++++++
3 files changed, 147 insertions(+)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 7df107dc400a..cfff81139d67 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -2,6 +2,7 @@
#ifndef __MM_KASAN_KASAN_H
#define __MM_KASAN_KASAN_H
+#include <linux/atomic.h>
#include <linux/kasan.h>
#include <linux/kasan-tags.h>
#include <linux/kfence.h>
@@ -233,6 +234,26 @@ struct kasan_free_meta {
#endif /* CONFIG_KASAN_GENERIC */
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+
+struct kasan_stack_ring_entry {
+ void *ptr;
+ size_t size;
+ u32 pid;
+ depot_stack_handle_t stack;
+ bool is_free;
+};
+
+#define KASAN_STACK_RING_SIZE (32 << 10)
+
+struct kasan_stack_ring {
+ rwlock_t lock;
+ atomic64_t pos;
+ struct kasan_stack_ring_entry entries[KASAN_STACK_RING_SIZE];
+};
+
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
#if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
/* Used in KUnit-compatible KASAN tests. */
struct kunit_kasan_status {
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 5cbac2cdb177..a996489e6dac 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -4,8 +4,12 @@
* Copyright (c) 2020 Google, Inc.
*/
+#include <linux/atomic.h>
+
#include "kasan.h"
+extern struct kasan_stack_ring stack_ring;
+
static const char *get_bug_type(struct kasan_report_info *info)
{
/*
@@ -24,5 +28,77 @@ static const char *get_bug_type(struct kasan_report_info *info)
void kasan_complete_mode_report_info(struct kasan_report_info *info)
{
+ unsigned long flags;
+ u64 pos;
+ struct kasan_stack_ring_entry *entry;
+ void *ptr;
+ u32 pid;
+ depot_stack_handle_t stack;
+ bool is_free;
+ bool alloc_found = false, free_found = false;
+
info->bug_type = get_bug_type(info);
+
+ if (!info->cache || !info->object)
+ return;
+
+ write_lock_irqsave(&stack_ring.lock, flags);
+
+ pos = atomic64_read(&stack_ring.pos);
+
+ /*
+ * The loop below tries to find stack ring entries relevant to the
+ * buggy object. This is a best-effort process.
+ *
+ * First, another object with the same tag can be allocated in place of
+ * the buggy object. Also, since the number of entries is limited, the
+ * entries relevant to the buggy object can be overwritten.
+ */
+
+ for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
+ if (alloc_found && free_found)
+ break;
+
+ entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
+
+ /* Paired with smp_store_release() in save_stack_info(). */
+ ptr = (void *)smp_load_acquire(&entry->ptr);
+
+ if (kasan_reset_tag(ptr) != info->object ||
+ get_tag(ptr) != get_tag(info->access_addr))
+ continue;
+
+ pid = READ_ONCE(entry->pid);
+ stack = READ_ONCE(entry->stack);
+ is_free = READ_ONCE(entry->is_free);
+
+ /* Try detecting if the entry was changed while being read. */
+ smp_mb();
+ if (ptr != (void *)READ_ONCE(entry->ptr))
+ continue;
+
+ if (is_free) {
+ /*
+ * Second free of the same object.
+ * Give up on trying to find the alloc entry.
+ */
+ if (free_found)
+ break;
+
+ info->free_track.pid = pid;
+ info->free_track.stack = stack;
+ free_found = true;
+ } else {
+ /* Second alloc of the same object. Give up. */
+ if (alloc_found)
+ break;
+
+ info->alloc_track.pid = pid;
+ info->alloc_track.stack = stack;
+ alloc_found = true;
+ }
+ }
+
+ write_unlock_irqrestore(&stack_ring.lock, flags);
}
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 39a0481e5228..07828021c1f5 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -6,6 +6,7 @@
* Copyright (c) 2020 Google, Inc.
*/
+#include <linux/atomic.h>
#include <linux/init.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
@@ -16,11 +17,60 @@
#include <linux/types.h>
#include "kasan.h"
+#include "../slab.h"
+
+/* Non-zero, as initial pointer values are 0. */
+#define STACK_RING_BUSY_PTR ((void *)1)
+
+struct kasan_stack_ring stack_ring;
+
+static void save_stack_info(struct kmem_cache *cache, void *object,
+ gfp_t gfp_flags, bool is_free)
+{
+ unsigned long flags;
+ depot_stack_handle_t stack;
+ u64 pos;
+ struct kasan_stack_ring_entry *entry;
+ void *old_ptr;
+
+ stack = kasan_save_stack(gfp_flags, true);
+
+ /*
+ * Prevent save_stack_info() from modifying stack ring
+ * when kasan_complete_mode_report_info() is walking it.
+ */
+ read_lock_irqsave(&stack_ring.lock, flags);
+
+next:
+ pos = atomic64_fetch_add(1, &stack_ring.pos);
+ entry = &stack_ring.entries[pos % KASAN_STACK_RING_SIZE];
+
+ /* Detect stack ring entry slots that are being written to. */
+ old_ptr = READ_ONCE(entry->ptr);
+ if (old_ptr == STACK_RING_BUSY_PTR)
+ goto next; /* Busy slot. */
+ if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
+ goto next; /* Busy slot. */
+
+ WRITE_ONCE(entry->size, cache->object_size);
+ WRITE_ONCE(entry->pid, current->pid);
+ WRITE_ONCE(entry->stack, stack);
+ WRITE_ONCE(entry->is_free, is_free);
+
+ /*
+ * Paired with smp_load_acquire() in kasan_complete_mode_report_info().
+ */
+ smp_store_release(&entry->ptr, (s64)object);
+
+ read_unlock_irqrestore(&stack_ring.lock, flags);
+}
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
{
+ save_stack_info(cache, object, flags, false);
}
void kasan_save_free_info(struct kmem_cache *cache, void *object)
{
+ save_stack_info(cache, object, GFP_NOWAIT, true);
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Pass tagged pointers to kasan_save_alloc/free_info().
This is a preparatory patch to simplify other changes in the series.
Signed-off-by: Andrey Konovalov <[email protected]>
---
Changes v1->v2:
- Drop unused variable tag from ____kasan_slab_free().
---
mm/kasan/common.c | 6 ++----
mm/kasan/generic.c | 3 +--
mm/kasan/kasan.h | 2 +-
mm/kasan/tags.c | 3 +--
4 files changed, 5 insertions(+), 9 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 89aa97af876e..3dc57a199893 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -192,13 +192,11 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
unsigned long ip, bool quarantine, bool init)
{
- u8 tag;
void *tagged_object;
if (!kasan_arch_is_ready())
return false;
- tag = get_tag(object);
tagged_object = object;
object = kasan_reset_tag(object);
@@ -227,7 +225,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
return false;
if (kasan_stack_collection_enabled())
- kasan_save_free_info(cache, object, tag);
+ kasan_save_free_info(cache, tagged_object);
return kasan_quarantine_put(cache, object);
}
@@ -316,7 +314,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
/* Save alloc info (if possible) for non-kmalloc() allocations. */
if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
- kasan_save_alloc_info(cache, (void *)object, flags);
+ kasan_save_alloc_info(cache, tagged_object, flags);
return tagged_object;
}
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index f6bef347de87..aff39af3c532 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -500,8 +500,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
kasan_set_track(&alloc_meta->alloc_track, flags);
}
-void kasan_save_free_info(struct kmem_cache *cache,
- void *object, u8 tag)
+void kasan_save_free_info(struct kmem_cache *cache, void *object)
{
struct kasan_free_meta *free_meta;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index cae60e4d8842..cca49ab029f1 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -309,7 +309,7 @@ static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
-void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
+void kasan_save_free_info(struct kmem_cache *cache, void *object);
struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
void *object);
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 4f24669085e9..fd11d10a4ffc 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -21,8 +21,7 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
{
}
-void kasan_save_free_info(struct kmem_cache *cache,
- void *object, u8 tag)
+void kasan_save_free_info(struct kmem_cache *cache, void *object)
{
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add cache and object fields to kasan_report_info and fill them in in
complete_report_info() instead of fetching them in the middle of the
report printing code.
This allows the reporting code to access the object information before it
starts printing the report. One of the following patches uses this
information to determine the bug type for the tag-based modes.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 2 ++
mm/kasan/report.c | 21 +++++++++++++--------
2 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 7e07115873d3..b8fa1e50f3d4 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -162,6 +162,8 @@ struct kasan_report_info {
/* Filled in by the common reporting code. */
void *first_bad_addr;
+ struct kmem_cache *cache;
+ void *object;
};
/* Do not change the struct layout: compiler ABI. */
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 0c2e7a58095d..763de8e68887 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -287,19 +287,16 @@ static inline bool init_task_stack_addr(const void *addr)
sizeof(init_thread_union.stack));
}
-static void print_address_description(void *addr, u8 tag)
+static void print_address_description(void *addr, u8 tag,
+ struct kasan_report_info *info)
{
struct page *page = addr_to_page(addr);
- struct slab *slab = kasan_addr_to_slab(addr);
dump_stack_lvl(KERN_ERR);
pr_err("\n");
- if (slab) {
- struct kmem_cache *cache = slab->slab_cache;
- void *object = nearest_obj(cache, slab, addr);
-
- describe_object(cache, object, addr, tag);
+ if (info->cache && info->object) {
+ describe_object(info->cache, info->object, addr, tag);
pr_err("\n");
}
@@ -406,7 +403,7 @@ static void print_report(struct kasan_report_info *info)
pr_err("\n");
if (addr_has_metadata(addr)) {
- print_address_description(addr, tag);
+ print_address_description(addr, tag, info);
print_memory_metadata(info->first_bad_addr);
} else {
dump_stack_lvl(KERN_ERR);
@@ -416,12 +413,20 @@ static void print_report(struct kasan_report_info *info)
static void complete_report_info(struct kasan_report_info *info)
{
void *addr = kasan_reset_tag(info->access_addr);
+ struct slab *slab;
if (info->type == KASAN_REPORT_ACCESS)
info->first_bad_addr = kasan_find_first_bad_addr(
info->access_addr, info->access_size);
else
info->first_bad_addr = addr;
+
+ slab = kasan_addr_to_slab(addr);
+ if (slab) {
+ info->cache = slab->slab_cache;
+ info->object = nearest_obj(info->cache, slab, addr);
+ } else
+ info->cache = info->object = NULL;
}
void kasan_report_invalid_free(void *ptr, unsigned long ip, enum kasan_report_type type)
--
2.25.1
From: Andrey Konovalov <[email protected]>
Do a few non-functional style fixes for the code in report.c.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/report.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 5d225d7d9c4c..83f420a28c0b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -200,25 +200,22 @@ static void print_error_description(struct kasan_report_info *info)
static void print_track(struct kasan_track *track, const char *prefix)
{
pr_err("%s by task %u:\n", prefix, track->pid);
- if (track->stack) {
+ if (track->stack)
stack_depot_print(track->stack);
- } else {
+ else
pr_err("(stack is not available)\n");
- }
}
struct page *kasan_addr_to_page(const void *addr)
{
- if ((addr >= (void *)PAGE_OFFSET) &&
- (addr < high_memory))
+ if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
return virt_to_head_page(addr);
return NULL;
}
struct slab *kasan_addr_to_slab(const void *addr)
{
- if ((addr >= (void *)PAGE_OFFSET) &&
- (addr < high_memory))
+ if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
return virt_to_slab(addr);
return NULL;
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Identify the bug type for the tag-based modes based on the stack trace
entries found in the stack ring.
If a free entry is found first (meaning that it was added last), mark the
bug as use-after-free. If an alloc entry is found first, mark the bug as
slab-out-of-bounds. Otherwise, assign the common bug type.
This change restores the functionality of the previously dropped
CONFIG_KASAN_TAGS_IDENTIFY.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/report_tags.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 7e267e69ce19..cedcdc5890bc 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -10,7 +10,7 @@
extern struct kasan_stack_ring stack_ring;
-static const char *get_bug_type(struct kasan_report_info *info)
+static const char *get_common_bug_type(struct kasan_report_info *info)
{
/*
* If access_size is a negative number, then it has reason to be
@@ -37,9 +37,8 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
bool is_free;
bool alloc_found = false, free_found = false;
- info->bug_type = get_bug_type(info);
-
- if (!info->cache || !info->object)
+ if (!info->cache || !info->object) {
+ info->bug_type = get_common_bug_type(info);
return;
}
@@ -89,6 +88,13 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
info->free_track.pid = pid;
info->free_track.stack = stack;
free_found = true;
+
+ /*
+ * If a free entry is found first, the bug is likely
+ * a use-after-free.
+ */
+ if (!info->bug_type)
+ info->bug_type = "use-after-free";
} else {
/* Second alloc of the same object. Give up. */
if (alloc_found)
@@ -97,8 +103,19 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
info->alloc_track.pid = pid;
info->alloc_track.stack = stack;
alloc_found = true;
+
+ /*
+ * If an alloc entry is found first, the bug is likely
+ * an out-of-bounds.
+ */
+ if (!info->bug_type)
+ info->bug_type = "slab-out-of-bounds";
}
}
write_unlock_irqrestore(&stack_ring.lock, flags);
+
+ /* Assign the common bug type if no entries were found. */
+ if (!info->bug_type)
+ info->bug_type = get_common_bug_type(info);
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Provide standalone implementations of save_alloc_info() for the Generic
and tag-based modes.
For now, the implementations are the same, but they will diverge later
in the series.
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 13 ++-----------
mm/kasan/generic.c | 9 +++++++++
mm/kasan/kasan.h | 1 +
mm/kasan/tags.c | 9 +++++++++
4 files changed, 21 insertions(+), 11 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index a6fd597f73f5..6156c6f0e303 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -423,15 +423,6 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
}
}
-static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
-{
- struct kasan_alloc_meta *alloc_meta;
-
- alloc_meta = kasan_get_alloc_meta(cache, object);
- if (alloc_meta)
- kasan_set_track(&alloc_meta->alloc_track, flags);
-}
-
void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
void *object, gfp_t flags, bool init)
{
@@ -462,7 +453,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
/* Save alloc info (if possible) for non-kmalloc() allocations. */
if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
- save_alloc_info(cache, (void *)object, flags);
+ kasan_save_alloc_info(cache, (void *)object, flags);
return tagged_object;
}
@@ -508,7 +499,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
* This also rewrites the alloc info when called from kasan_krealloc().
*/
if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
- save_alloc_info(cache, (void *)object, flags);
+ kasan_save_alloc_info(cache, (void *)object, flags);
/* Keep the tag that was set by kasan_slab_alloc(). */
return (void *)object;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 03a3770cfeae..98c451a3b01f 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -358,6 +358,15 @@ void kasan_record_aux_stack_noalloc(void *addr)
return __kasan_record_aux_stack(addr, false);
}
+void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (alloc_meta)
+ kasan_set_track(&alloc_meta->alloc_track, flags);
+}
+
void kasan_save_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index bf16a74dc027..d401fb770f67 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -285,6 +285,7 @@ struct slab *kasan_addr_to_slab(const void *addr);
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
+void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
void *object, u8 tag);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index b453a353bc86..1ba3c8399f72 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,6 +17,15 @@
#include "kasan.h"
+void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (alloc_meta)
+ kasan_set_track(&alloc_meta->alloc_track, flags);
+}
+
void kasan_save_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
--
2.25.1
From: Andrey Konovalov <[email protected]>
Instead of open-coding the validity checks for addr in
kasan_addr_to_page/slab(), use the virt_addr_valid() helper.
Signed-off-by: Andrey Konovalov <[email protected]>
---
Changes v1->v2:
- This is a new patch.
---
mm/kasan/report.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 83f420a28c0b..570f9419b90c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -208,14 +208,14 @@ static void print_track(struct kasan_track *track, const char *prefix)
struct page *kasan_addr_to_page(const void *addr)
{
- if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
+ if (virt_addr_valid(addr))
return virt_to_head_page(addr);
return NULL;
}
struct slab *kasan_addr_to_slab(const void *addr)
{
- if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
+ if (virt_addr_valid(addr))
return virt_to_slab(addr);
return NULL;
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
KASAN provides a helper for calculating the size of per-object metadata
stored in the redzone.
As now only the Generic mode uses per-object metadata, only define
kasan_metadata_size() for this mode.
Signed-off-by: Andrey Konovalov <[email protected]>
---
include/linux/kasan.h | 17 ++++++++---------
mm/kasan/common.c | 11 -----------
mm/kasan/generic.c | 11 +++++++++++
3 files changed, 19 insertions(+), 20 deletions(-)
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b092277bf48d..027df7599573 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -150,14 +150,6 @@ static __always_inline void kasan_cache_create_kmalloc(struct kmem_cache *cache)
__kasan_cache_create_kmalloc(cache);
}
-size_t __kasan_metadata_size(struct kmem_cache *cache);
-static __always_inline size_t kasan_metadata_size(struct kmem_cache *cache)
-{
- if (kasan_enabled())
- return __kasan_metadata_size(cache);
- return 0;
-}
-
void __kasan_poison_slab(struct slab *slab);
static __always_inline void kasan_poison_slab(struct slab *slab)
{
@@ -282,7 +274,6 @@ static inline void kasan_cache_create(struct kmem_cache *cache,
unsigned int *size,
slab_flags_t *flags) {}
static inline void kasan_cache_create_kmalloc(struct kmem_cache *cache) {}
-static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; }
static inline void kasan_poison_slab(struct slab *slab) {}
static inline void kasan_unpoison_object_data(struct kmem_cache *cache,
void *object) {}
@@ -333,6 +324,8 @@ static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
#ifdef CONFIG_KASAN_GENERIC
+size_t kasan_metadata_size(struct kmem_cache *cache);
+
void kasan_cache_shrink(struct kmem_cache *cache);
void kasan_cache_shutdown(struct kmem_cache *cache);
void kasan_record_aux_stack(void *ptr);
@@ -340,6 +333,12 @@ void kasan_record_aux_stack_noalloc(void *ptr);
#else /* CONFIG_KASAN_GENERIC */
+/* Tag-based KASAN modes do not use per-object metadata. */
+static inline size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+ return 0;
+}
+
static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
static inline void kasan_record_aux_stack(void *ptr) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 83a04834746f..0cef41f8a60d 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -138,17 +138,6 @@ void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
cache->kasan_info.is_kmalloc = true;
}
-size_t __kasan_metadata_size(struct kmem_cache *cache)
-{
- if (!kasan_requires_meta())
- return 0;
- return (cache->kasan_info.alloc_meta_offset ?
- sizeof(struct kasan_alloc_meta) : 0) +
- ((cache->kasan_info.free_meta_offset &&
- cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
- sizeof(struct kasan_free_meta) : 0);
-}
-
void __kasan_poison_slab(struct slab *slab)
{
struct page *page = slab_page(slab);
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 5125fad76f70..806ab92032c3 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -427,6 +427,17 @@ void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
__memset(alloc_meta, 0, sizeof(*alloc_meta));
}
+size_t kasan_metadata_size(struct kmem_cache *cache)
+{
+ if (!kasan_requires_meta())
+ return 0;
+ return (cache->kasan_info.alloc_meta_offset ?
+ sizeof(struct kasan_alloc_meta) : 0) +
+ ((cache->kasan_info.free_meta_offset &&
+ cache->kasan_info.free_meta_offset != KASAN_NO_FREE_META) ?
+ sizeof(struct kasan_free_meta) : 0);
+}
+
static void __kasan_record_aux_stack(void *addr, bool can_alloc)
{
struct slab *slab = kasan_addr_to_slab(addr);
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add a kasan_init_object_meta() helper that initializes metadata for a slab
object and use it in the common code.
For now, the implementations of this helper are the same for the Generic
and tag-based modes, but they will diverge later in the series.
This change hides references to alloc_meta from the common code. This is
desired as only the Generic mode will be using per-object metadata after
this series.
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 10 +++-------
mm/kasan/generic.c | 9 +++++++++
mm/kasan/kasan.h | 2 ++
mm/kasan/tags.c | 9 +++++++++
4 files changed, 23 insertions(+), 7 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6156c6f0e303..f57469b6b346 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -312,13 +312,9 @@ static inline u8 assign_tag(struct kmem_cache *cache,
void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
const void *object)
{
- struct kasan_alloc_meta *alloc_meta;
-
- if (kasan_stack_collection_enabled()) {
- alloc_meta = kasan_get_alloc_meta(cache, object);
- if (alloc_meta)
- __memset(alloc_meta, 0, sizeof(*alloc_meta));
- }
+ /* Initialize per-object metadata if it is present. */
+ if (kasan_stack_collection_enabled())
+ kasan_init_object_meta(cache, object);
/* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
object = set_tag(object, assign_tag(cache, object, true));
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index f212b9ae57b5..5462ddbc21e6 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,15 @@ DEFINE_ASAN_SET_SHADOW(f3);
DEFINE_ASAN_SET_SHADOW(f5);
DEFINE_ASAN_SET_SHADOW(f8);
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (alloc_meta)
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));
+}
+
static void __kasan_record_aux_stack(void *addr, bool can_alloc)
{
struct slab *slab = kasan_addr_to_slab(addr);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b65a51349c51..2c8c3cce7bc6 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -279,6 +279,8 @@ void kasan_report_invalid_free(void *object, unsigned long ip, enum kasan_report
struct page *kasan_addr_to_page(const void *addr);
struct slab *kasan_addr_to_slab(const void *addr);
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
+
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 7b1fc8e7c99c..2e200969a4b8 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,6 +17,15 @@
#include "kasan.h"
+void kasan_init_object_meta(struct kmem_cache *cache, const void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (alloc_meta)
+ __memset(alloc_meta, 0, sizeof(*alloc_meta));
+}
+
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
{
struct kasan_alloc_meta *alloc_meta;
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add a kasan_init_cache_meta() helper that initializes metadata-related
cache parameters and use this helper in the common KASAN code.
Put the implementation of this new helper into generic.c, as only the
Generic mode uses per-object metadata.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 80 ++--------------------------------------------
mm/kasan/generic.c | 79 +++++++++++++++++++++++++++++++++++++++++++++
mm/kasan/kasan.h | 2 ++
3 files changed, 83 insertions(+), 78 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index d2ec4e6af675..83a04834746f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -117,28 +117,9 @@ void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
KASAN_PAGE_FREE, init);
}
-/*
- * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
- * For larger allocations larger redzones are used.
- */
-static inline unsigned int optimal_redzone(unsigned int object_size)
-{
- return
- object_size <= 64 - 16 ? 16 :
- object_size <= 128 - 32 ? 32 :
- object_size <= 512 - 64 ? 64 :
- object_size <= 4096 - 128 ? 128 :
- object_size <= (1 << 14) - 256 ? 256 :
- object_size <= (1 << 15) - 512 ? 512 :
- object_size <= (1 << 16) - 1024 ? 1024 : 2048;
-}
-
void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
slab_flags_t *flags)
{
- unsigned int ok_size;
- unsigned int optimal_size;
-
/*
* SLAB_KASAN is used to mark caches as ones that are sanitized by
* KASAN. Currently this flag is used in two places:
@@ -148,65 +129,8 @@ void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
*/
*flags |= SLAB_KASAN;
- if (!kasan_requires_meta())
- return;
-
- ok_size = *size;
-
- /* Add alloc meta into redzone. */
- cache->kasan_info.alloc_meta_offset = *size;
- *size += sizeof(struct kasan_alloc_meta);
-
- /*
- * If alloc meta doesn't fit, don't add it.
- * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal
- * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for
- * larger sizes.
- */
- if (*size > KMALLOC_MAX_SIZE) {
- cache->kasan_info.alloc_meta_offset = 0;
- *size = ok_size;
- /* Continue, since free meta might still fit. */
- }
-
- /* Only the generic mode uses free meta or flexible redzones. */
- if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
- cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
- return;
- }
-
- /*
- * Add free meta into redzone when it's not possible to store
- * it in the object. This is the case when:
- * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can
- * be touched after it was freed, or
- * 2. Object has a constructor, which means it's expected to
- * retain its content until the next allocation, or
- * 3. Object is too small.
- * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
- */
- if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
- cache->object_size < sizeof(struct kasan_free_meta)) {
- ok_size = *size;
-
- cache->kasan_info.free_meta_offset = *size;
- *size += sizeof(struct kasan_free_meta);
-
- /* If free meta doesn't fit, don't add it. */
- if (*size > KMALLOC_MAX_SIZE) {
- cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
- *size = ok_size;
- }
- }
-
- /* Calculate size with optimal redzone. */
- optimal_size = cache->object_size + optimal_redzone(cache->object_size);
- /* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */
- if (optimal_size > KMALLOC_MAX_SIZE)
- optimal_size = KMALLOC_MAX_SIZE;
- /* Use optimal size if the size with added metas is not large enough. */
- if (*size < optimal_size)
- *size = optimal_size;
+ if (kasan_requires_meta())
+ kasan_init_cache_meta(cache, size);
}
void __kasan_cache_create_kmalloc(struct kmem_cache *cache)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index fa654cb96a0d..73aea784040a 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -328,6 +328,85 @@ DEFINE_ASAN_SET_SHADOW(f3);
DEFINE_ASAN_SET_SHADOW(f5);
DEFINE_ASAN_SET_SHADOW(f8);
+/*
+ * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
+ * For larger allocations larger redzones are used.
+ */
+static inline unsigned int optimal_redzone(unsigned int object_size)
+{
+ return
+ object_size <= 64 - 16 ? 16 :
+ object_size <= 128 - 32 ? 32 :
+ object_size <= 512 - 64 ? 64 :
+ object_size <= 4096 - 128 ? 128 :
+ object_size <= (1 << 14) - 256 ? 256 :
+ object_size <= (1 << 15) - 512 ? 512 :
+ object_size <= (1 << 16) - 1024 ? 1024 : 2048;
+}
+
+void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
+{
+ unsigned int ok_size;
+ unsigned int optimal_size;
+
+ ok_size = *size;
+
+ /* Add alloc meta into redzone. */
+ cache->kasan_info.alloc_meta_offset = *size;
+ *size += sizeof(struct kasan_alloc_meta);
+
+ /*
+ * If alloc meta doesn't fit, don't add it.
+ * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal
+ * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for
+ * larger sizes.
+ */
+ if (*size > KMALLOC_MAX_SIZE) {
+ cache->kasan_info.alloc_meta_offset = 0;
+ *size = ok_size;
+ /* Continue, since free meta might still fit. */
+ }
+
+ /* Only the generic mode uses free meta or flexible redzones. */
+ if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
+ cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+ return;
+ }
+
+ /*
+ * Add free meta into redzone when it's not possible to store
+ * it in the object. This is the case when:
+ * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can
+ * be touched after it was freed, or
+ * 2. Object has a constructor, which means it's expected to
+ * retain its content until the next allocation, or
+ * 3. Object is too small.
+ * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
+ */
+ if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
+ cache->object_size < sizeof(struct kasan_free_meta)) {
+ ok_size = *size;
+
+ cache->kasan_info.free_meta_offset = *size;
+ *size += sizeof(struct kasan_free_meta);
+
+ /* If free meta doesn't fit, don't add it. */
+ if (*size > KMALLOC_MAX_SIZE) {
+ cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+ *size = ok_size;
+ }
+ }
+
+ /* Calculate size with optimal redzone. */
+ optimal_size = cache->object_size + optimal_redzone(cache->object_size);
+ /* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */
+ if (optimal_size > KMALLOC_MAX_SIZE)
+ optimal_size = KMALLOC_MAX_SIZE;
+ /* Use optimal size if the size with added metas is not large enough. */
+ if (*size < optimal_size)
+ *size = optimal_size;
+}
+
struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
const void *object)
{
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 1736abd661b6..6da35370ba37 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -297,12 +297,14 @@ struct page *kasan_addr_to_page(const void *addr);
struct slab *kasan_addr_to_slab(const void *addr);
#ifdef CONFIG_KASAN_GENERIC
+void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size);
void kasan_init_object_meta(struct kmem_cache *cache, const void *object);
struct kasan_alloc_meta *kasan_get_alloc_meta(struct kmem_cache *cache,
const void *object);
struct kasan_free_meta *kasan_get_free_meta(struct kmem_cache *cache,
const void *object);
#else
+static inline void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size) { }
static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *object) { }
#endif
--
2.25.1
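(As a worked example of the adaptive redzone policy moved above, read
directly off optimal_redzone(): a 30-byte object satisfies
object_size <= 64 - 16 and gets a 16-byte redzone, while a 100-byte object
first matches object_size <= 512 - 64 and gets a 64-byte redzone.)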
From: Andrey Konovalov <[email protected]>
Move the definitions of kasan_get_alloc/free_track() to report_*.c, as
they belong with the other reporting code.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/generic.c | 21 ---------------------
mm/kasan/report_generic.c | 21 +++++++++++++++++++++
mm/kasan/report_tags.c | 12 ++++++++++++
mm/kasan/tags.c | 12 ------------
4 files changed, 33 insertions(+), 33 deletions(-)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index aff39af3c532..d8b5590f9484 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -512,24 +512,3 @@ void kasan_save_free_info(struct kmem_cache *cache, void *object)
/* The object was freed and has free track set. */
*(u8 *)kasan_mem_to_shadow(object) = KASAN_SLAB_FREETRACK;
}
-
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object)
-{
- struct kasan_alloc_meta *alloc_meta;
-
- alloc_meta = kasan_get_alloc_meta(cache, object);
- if (!alloc_meta)
- return NULL;
-
- return &alloc_meta->alloc_track;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
-{
- if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
- return NULL;
- /* Free meta must be present with KASAN_SLAB_FREETRACK. */
- return &kasan_get_free_meta(cache, object)->free_track;
-}
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 348dc207d462..74d21786ef09 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -127,6 +127,27 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
return get_wild_bug_type(info);
}
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+ void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (!alloc_meta)
+ return NULL;
+
+ return &alloc_meta->alloc_track;
+}
+
+struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
+ void *object, u8 tag)
+{
+ if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
+ return NULL;
+ /* Free meta must be present with KASAN_SLAB_FREETRACK. */
+ return &kasan_get_free_meta(cache, object)->free_track;
+}
+
void kasan_metadata_fetch_row(char *buffer, void *row)
{
memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW);
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 35cf3cae4aa4..79b6497d8a81 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -21,3 +21,15 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
return "invalid-access";
}
+
+struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
+ void *object)
+{
+ return NULL;
+}
+
+struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
+ void *object, u8 tag)
+{
+ return NULL;
+}
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index fd11d10a4ffc..39a0481e5228 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -24,15 +24,3 @@ void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
void kasan_save_free_info(struct kmem_cache *cache, void *object)
{
}
-
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object)
-{
- return NULL;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
-{
- return NULL;
-}
--
2.25.1
From: Andrey Konovalov <[email protected]>
Use the kasan_addr_to_slab() helper in print_address_description()
instead of separately invoking PageSlab() and page_slab().
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 7 +++++++
mm/kasan/report.c | 11 ++---------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 3dc57a199893..cfb85b65fa44 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -30,6 +30,13 @@
#include "kasan.h"
#include "../slab.h"
+struct slab *kasan_addr_to_slab(const void *addr)
+{
+ if (virt_addr_valid(addr))
+ return virt_to_slab(addr);
+ return NULL;
+}
+
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
{
unsigned long entries[KASAN_STACK_DEPTH];
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 570f9419b90c..cd31b3b89ca1 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -213,13 +213,6 @@ struct page *kasan_addr_to_page(const void *addr)
return NULL;
}
-struct slab *kasan_addr_to_slab(const void *addr)
-{
- if (virt_addr_valid(addr))
- return virt_to_slab(addr);
- return NULL;
-}
-
static void describe_object_addr(struct kmem_cache *cache, void *object,
const void *addr)
{
@@ -297,12 +290,12 @@ static inline bool init_task_stack_addr(const void *addr)
static void print_address_description(void *addr, u8 tag)
{
struct page *page = kasan_addr_to_page(addr);
+ struct slab *slab = kasan_addr_to_slab(addr);
dump_stack_lvl(KERN_ERR);
pr_err("\n");
- if (page && PageSlab(page)) {
- struct slab *slab = page_slab(page);
+ if (slab) {
struct kmem_cache *cache = slab->slab_cache;
void *object = nearest_obj(cache, slab, addr);
--
2.25.1
From: Andrey Konovalov <[email protected]>
Add support for the kasan.stacktrace command-line argument for Software
Tag-Based KASAN.
The following patch adds a command-line argument for selecting the stack
ring size, and, as the stack ring is supported by both the Software and
the Hardware Tag-Based KASAN modes, it is natural that both of them have
support for kasan.stacktrace too.
Signed-off-by: Andrey Konovalov <[email protected]>
---
Changes v1->v2:
- This is a new patch.
---
Documentation/dev-tools/kasan.rst | 15 ++++++-----
mm/kasan/hw_tags.c | 39 +---------------------------
mm/kasan/kasan.h | 36 +++++++++++++++++---------
mm/kasan/sw_tags.c | 5 +++-
mm/kasan/tags.c | 43 +++++++++++++++++++++++++++++++
5 files changed, 81 insertions(+), 57 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index 1772fd457fed..7bd38c181018 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -111,9 +111,15 @@ parameter can be used to control panic and reporting behaviour:
report or also panic the kernel (default: ``report``). The panic happens even
if ``kasan_multi_shot`` is enabled.
-Hardware Tag-Based KASAN mode (see the section about various modes below) is
-intended for use in production as a security mitigation. Therefore, it supports
-additional boot parameters that allow disabling KASAN or controlling features:
+Software and Hardware Tag-Based KASAN modes (see the section about various
+modes below) support disabling stack trace collection:
+
+- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack
+ traces collection (default: ``on``).
+
+Hardware Tag-Based KASAN mode is intended for use in production as a security
+mitigation. Therefore, it supports additional boot parameters that allow
+disabling KASAN altogether or controlling its features:
- ``kasan=off`` or ``=on`` controls whether KASAN is enabled (default: ``on``).
@@ -132,9 +138,6 @@ additional boot parameters that allow disabling KASAN or controlling features:
- ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
allocations (default: ``on``).
-- ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack
- traces collection (default: ``on``).
-
Error reports
~~~~~~~~~~~~~
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 9ad8eff71b28..b22c4f461cb0 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -38,16 +38,9 @@ enum kasan_arg_vmalloc {
KASAN_ARG_VMALLOC_ON,
};
-enum kasan_arg_stacktrace {
- KASAN_ARG_STACKTRACE_DEFAULT,
- KASAN_ARG_STACKTRACE_OFF,
- KASAN_ARG_STACKTRACE_ON,
-};
-
static enum kasan_arg kasan_arg __ro_after_init;
static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
static enum kasan_arg_vmalloc kasan_arg_vmalloc __initdata;
-static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;
/*
* Whether KASAN is enabled at all.
@@ -66,9 +59,6 @@ EXPORT_SYMBOL_GPL(kasan_mode);
/* Whether to enable vmalloc tagging. */
DEFINE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
-/* Whether to collect alloc/free stack traces. */
-DEFINE_STATIC_KEY_TRUE(kasan_flag_stacktrace);
-
/* kasan=off/on */
static int __init early_kasan_flag(char *arg)
{
@@ -122,23 +112,6 @@ static int __init early_kasan_flag_vmalloc(char *arg)
}
early_param("kasan.vmalloc", early_kasan_flag_vmalloc);
-/* kasan.stacktrace=off/on */
-static int __init early_kasan_flag_stacktrace(char *arg)
-{
- if (!arg)
- return -EINVAL;
-
- if (!strcmp(arg, "off"))
- kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_OFF;
- else if (!strcmp(arg, "on"))
- kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_ON;
- else
- return -EINVAL;
-
- return 0;
-}
-early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
-
static inline const char *kasan_mode_info(void)
{
if (kasan_mode == KASAN_MODE_ASYNC)
@@ -213,17 +186,7 @@ void __init kasan_init_hw_tags(void)
break;
}
- switch (kasan_arg_stacktrace) {
- case KASAN_ARG_STACKTRACE_DEFAULT:
- /* Default is specified by kasan_flag_stacktrace definition. */
- break;
- case KASAN_ARG_STACKTRACE_OFF:
- static_branch_disable(&kasan_flag_stacktrace);
- break;
- case KASAN_ARG_STACKTRACE_ON:
- static_branch_enable(&kasan_flag_stacktrace);
- break;
- }
+ kasan_init_tags();
/* KASAN is now initialized, enable it. */
static_branch_enable(&kasan_flag_enabled);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index cfff81139d67..447baf1a7a2e 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -8,13 +8,31 @@
#include <linux/kfence.h>
#include <linux/stackdepot.h>
-#ifdef CONFIG_KASAN_HW_TAGS
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
#include <linux/static_key.h>
+
+DECLARE_STATIC_KEY_TRUE(kasan_flag_stacktrace);
+
+static inline bool kasan_stack_collection_enabled(void)
+{
+ return static_branch_unlikely(&kasan_flag_stacktrace);
+}
+
+#else /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
+static inline bool kasan_stack_collection_enabled(void)
+{
+ return true;
+}
+
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
+#ifdef CONFIG_KASAN_HW_TAGS
+
#include "../slab.h"
DECLARE_STATIC_KEY_TRUE(kasan_flag_vmalloc);
-DECLARE_STATIC_KEY_TRUE(kasan_flag_stacktrace);
enum kasan_mode {
KASAN_MODE_SYNC,
@@ -29,11 +47,6 @@ static inline bool kasan_vmalloc_enabled(void)
return static_branch_likely(&kasan_flag_vmalloc);
}
-static inline bool kasan_stack_collection_enabled(void)
-{
- return static_branch_unlikely(&kasan_flag_stacktrace);
-}
-
static inline bool kasan_async_fault_possible(void)
{
return kasan_mode == KASAN_MODE_ASYNC || kasan_mode == KASAN_MODE_ASYMM;
@@ -46,11 +59,6 @@ static inline bool kasan_sync_fault_possible(void)
#else /* CONFIG_KASAN_HW_TAGS */
-static inline bool kasan_stack_collection_enabled(void)
-{
- return true;
-}
-
static inline bool kasan_async_fault_possible(void)
{
return false;
@@ -410,6 +418,10 @@ static inline void kasan_enable_tagging(void) { }
#endif /* CONFIG_KASAN_HW_TAGS */
+#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
+void __init kasan_init_tags(void);
+#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
+
#if defined(CONFIG_KASAN_HW_TAGS) && IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
void kasan_force_async_fault(void);
diff --git a/mm/kasan/sw_tags.c b/mm/kasan/sw_tags.c
index 77f13f391b57..a3afaf2ad1b1 100644
--- a/mm/kasan/sw_tags.c
+++ b/mm/kasan/sw_tags.c
@@ -42,7 +42,10 @@ void __init kasan_init_sw_tags(void)
for_each_possible_cpu(cpu)
per_cpu(prng_state, cpu) = (u32)get_cycles();
- pr_info("KernelAddressSanitizer initialized (sw-tags)\n");
+ kasan_init_tags();
+
+ pr_info("KernelAddressSanitizer initialized (sw-tags, stacktrace=%s)\n",
+ kasan_stack_collection_enabled() ? "on" : "off");
}
/*
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 07828021c1f5..0eb6cf6717db 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -19,11 +19,54 @@
#include "kasan.h"
#include "../slab.h"
+enum kasan_arg_stacktrace {
+ KASAN_ARG_STACKTRACE_DEFAULT,
+ KASAN_ARG_STACKTRACE_OFF,
+ KASAN_ARG_STACKTRACE_ON,
+};
+
+static enum kasan_arg_stacktrace kasan_arg_stacktrace __initdata;
+
+/* Whether to collect alloc/free stack traces. */
+DEFINE_STATIC_KEY_TRUE(kasan_flag_stacktrace);
+
/* Non-zero, as initial pointer values are 0. */
#define STACK_RING_BUSY_PTR ((void *)1)
struct kasan_stack_ring stack_ring;
+/* kasan.stacktrace=off/on */
+static int __init early_kasan_flag_stacktrace(char *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ if (!strcmp(arg, "off"))
+ kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_OFF;
+ else if (!strcmp(arg, "on"))
+ kasan_arg_stacktrace = KASAN_ARG_STACKTRACE_ON;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
+
+void __init kasan_init_tags(void)
+{
+ switch (kasan_arg_stacktrace) {
+ case KASAN_ARG_STACKTRACE_DEFAULT:
+ /* Default is specified by kasan_flag_stacktrace definition. */
+ break;
+ case KASAN_ARG_STACKTRACE_OFF:
+ static_branch_disable(&kasan_flag_stacktrace);
+ break;
+ case KASAN_ARG_STACKTRACE_ON:
+ static_branch_enable(&kasan_flag_stacktrace);
+ break;
+ }
+}
+
static void save_stack_info(struct kmem_cache *cache, void *object,
gfp_t gfp_flags, bool is_free)
{
--
2.25.1
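(For illustration: with this patch, booting either tag-based mode with the
command-line option

	kasan.stacktrace=off

disables alloc and free stack trace collection, and the sw-tags init banner
prints stacktrace=off accordingly.)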
From: Andrey Konovalov <[email protected]>
Rename set_alloc_info() and kasan_set_free_info() to save_alloc_info()
and kasan_save_free_info(). The new names make more sense.
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 8 ++++----
mm/kasan/generic.c | 2 +-
mm/kasan/kasan.h | 2 +-
mm/kasan/tags.c | 2 +-
4 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index b7351b860abf..4b2bbb6063cb 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -364,7 +364,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
return false;
if (kasan_stack_collection_enabled())
- kasan_set_free_info(cache, object, tag);
+ kasan_save_free_info(cache, object, tag);
return kasan_quarantine_put(cache, object);
}
@@ -423,7 +423,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
}
}
-static void set_alloc_info(struct kmem_cache *cache, void *object,
+static void save_alloc_info(struct kmem_cache *cache, void *object,
gfp_t flags, bool is_kmalloc)
{
struct kasan_alloc_meta *alloc_meta;
@@ -467,7 +467,7 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
/* Save alloc info (if possible) for non-kmalloc() allocations. */
if (kasan_stack_collection_enabled())
- set_alloc_info(cache, (void *)object, flags, false);
+ save_alloc_info(cache, (void *)object, flags, false);
return tagged_object;
}
@@ -513,7 +513,7 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
* This also rewrites the alloc info when called from kasan_krealloc().
*/
if (kasan_stack_collection_enabled())
- set_alloc_info(cache, (void *)object, flags, true);
+ save_alloc_info(cache, (void *)object, flags, true);
/* Keep the tag that was set by kasan_slab_alloc(). */
return (void *)object;
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 437fcc7e77cf..03a3770cfeae 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -358,7 +358,7 @@ void kasan_record_aux_stack_noalloc(void *addr)
return __kasan_record_aux_stack(addr, false);
}
-void kasan_set_free_info(struct kmem_cache *cache,
+void kasan_save_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
struct kasan_free_meta *free_meta;
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 01c03e45acd4..bf16a74dc027 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -285,7 +285,7 @@ struct slab *kasan_addr_to_slab(const void *addr);
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
-void kasan_set_free_info(struct kmem_cache *cache, void *object, u8 tag);
+void kasan_save_free_info(struct kmem_cache *cache, void *object, u8 tag);
struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
void *object, u8 tag);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 8f48b9502a17..b453a353bc86 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -17,7 +17,7 @@
#include "kasan.h"
-void kasan_set_free_info(struct kmem_cache *cache,
+void kasan_save_free_info(struct kmem_cache *cache,
void *object, u8 tag)
{
struct kasan_alloc_meta *alloc_meta;
--
2.25.1
From: Andrey Konovalov <[email protected]>
Pass a pointer to kasan_report_info to describe_object() and
describe_object_stacks(), instead of passing the structure's fields.
The untagged pointer and the tag are still passed as separate arguments
to some of the functions to avoid duplicating the untagging logic.
This is a preparatory change for the next patch.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/report.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 763de8e68887..ec018f849992 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -213,8 +213,8 @@ static inline struct page *addr_to_page(const void *addr)
return NULL;
}
-static void describe_object_addr(struct kmem_cache *cache, void *object,
- const void *addr)
+static void describe_object_addr(const void *addr, struct kmem_cache *cache,
+ void *object)
{
unsigned long access_addr = (unsigned long)addr;
unsigned long object_addr = (unsigned long)object;
@@ -242,33 +242,32 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
(void *)(object_addr + cache->object_size));
}
-static void describe_object_stacks(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object_stacks(u8 tag, struct kasan_report_info *info)
{
struct kasan_track *alloc_track;
struct kasan_track *free_track;
- alloc_track = kasan_get_alloc_track(cache, object);
+ alloc_track = kasan_get_alloc_track(info->cache, info->object);
if (alloc_track) {
print_track(alloc_track, "Allocated");
pr_err("\n");
}
- free_track = kasan_get_free_track(cache, object, tag);
+ free_track = kasan_get_free_track(info->cache, info->object, tag);
if (free_track) {
print_track(free_track, "Freed");
pr_err("\n");
}
- kasan_print_aux_stacks(cache, object);
+ kasan_print_aux_stacks(info->cache, info->object);
}
-static void describe_object(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object(const void *addr, u8 tag,
+ struct kasan_report_info *info)
{
if (kasan_stack_collection_enabled())
- describe_object_stacks(cache, object, addr, tag);
- describe_object_addr(cache, object, addr);
+ describe_object_stacks(tag, info);
+ describe_object_addr(addr, info->cache, info->object);
}
static inline bool kernel_or_module_addr(const void *addr)
@@ -296,7 +295,7 @@ static void print_address_description(void *addr, u8 tag,
pr_err("\n");
if (info->cache && info->object) {
- describe_object(info->cache, info->object, addr, tag);
+ describe_object(addr, tag, info);
pr_err("\n");
}
--
2.25.1
From: Andrey Konovalov <[email protected]>
As kasan_addr_to_page() is only used in report.c, rename it to
addr_to_page() and make it static.
Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 1 -
mm/kasan/report.c | 4 ++--
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index cca49ab029f1..4fddfdb08abf 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -291,7 +291,6 @@ bool kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
void kasan_report_invalid_free(void *object, unsigned long ip, enum kasan_report_type type);
-struct page *kasan_addr_to_page(const void *addr);
struct slab *kasan_addr_to_slab(const void *addr);
#ifdef CONFIG_KASAN_GENERIC
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index cd31b3b89ca1..ac526c10ebff 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -206,7 +206,7 @@ static void print_track(struct kasan_track *track, const char *prefix)
pr_err("(stack is not available)\n");
}
-struct page *kasan_addr_to_page(const void *addr)
+static inline struct page *addr_to_page(const void *addr)
{
if (virt_addr_valid(addr))
return virt_to_head_page(addr);
@@ -289,7 +289,7 @@ static inline bool init_task_stack_addr(const void *addr)
static void print_address_description(void *addr, u8 tag)
{
- struct page *page = kasan_addr_to_page(addr);
+ struct page *page = addr_to_page(addr);
struct slab *slab = kasan_addr_to_slab(addr);
dump_stack_lvl(KERN_ERR);
--
2.25.1
On Tue, 19 Jul 2022 at 02:15, <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Implement storing stack depot handles for alloc/free stack traces for
> slab objects for the tag-based KASAN modes in a ring buffer.
>
> This ring buffer is referred to as the stack ring.
>
> On each alloc/free of a slab object, the tagged address of the object and
> the current stack trace are recorded in the stack ring.
>
> On each bug report, if the accessed address belongs to a slab object, the
> stack ring is scanned for matching entries. The newest entries are used to
> print the alloc/free stack traces in the report: one entry for alloc and
> one for free.
>
> The number of entries in the stack ring is fixed in this patch, but one of
> the following patches adds a command-line argument to control it.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
>
> ---
>
> Changes v1->v2:
> - Only use the atomic type for pos, use READ/WRITE_ONCE() for the rest.
> - Rename KASAN_STACK_RING_ENTRIES to KASAN_STACK_RING_SIZE.
> - Rename object local variable in kasan_complete_mode_report_info() to
> ptr to match the name in kasan_stack_ring_entry.
> - Detect stack ring entry slots that are being written to.
> - Use read-write lock to disallow reading half-written stack ring entries.
> - Add a comment about the stack ring being best-effort.
> ---
> mm/kasan/kasan.h | 21 ++++++++++++
> mm/kasan/report_tags.c | 76 ++++++++++++++++++++++++++++++++++++++++++
> mm/kasan/tags.c | 50 +++++++++++++++++++++++++++
> 3 files changed, 147 insertions(+)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 7df107dc400a..cfff81139d67 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -2,6 +2,7 @@
> #ifndef __MM_KASAN_KASAN_H
> #define __MM_KASAN_KASAN_H
>
> +#include <linux/atomic.h>
> #include <linux/kasan.h>
> #include <linux/kasan-tags.h>
> #include <linux/kfence.h>
> @@ -233,6 +234,26 @@ struct kasan_free_meta {
>
> #endif /* CONFIG_KASAN_GENERIC */
>
> +#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
> +
> +struct kasan_stack_ring_entry {
> + void *ptr;
> + size_t size;
> + u32 pid;
> + depot_stack_handle_t stack;
> + bool is_free;
> +};
> +
> +#define KASAN_STACK_RING_SIZE (32 << 10)
> +
> +struct kasan_stack_ring {
> + rwlock_t lock;
> + atomic64_t pos;
> + struct kasan_stack_ring_entry entries[KASAN_STACK_RING_SIZE];
> +};
> +
> +#endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> +
> #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
> /* Used in KUnit-compatible KASAN tests. */
> struct kunit_kasan_status {
> diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
> index 5cbac2cdb177..a996489e6dac 100644
> --- a/mm/kasan/report_tags.c
> +++ b/mm/kasan/report_tags.c
> @@ -4,8 +4,12 @@
> * Copyright (c) 2020 Google, Inc.
> */
>
> +#include <linux/atomic.h>
> +
> #include "kasan.h"
>
> +extern struct kasan_stack_ring stack_ring;
> +
> static const char *get_bug_type(struct kasan_report_info *info)
> {
> /*
> @@ -24,5 +28,77 @@ static const char *get_bug_type(struct kasan_report_info *info)
>
> void kasan_complete_mode_report_info(struct kasan_report_info *info)
> {
> + unsigned long flags;
> + u64 pos;
> + struct kasan_stack_ring_entry *entry;
> + void *ptr;
> + u32 pid;
> + depot_stack_handle_t stack;
> + bool is_free;
> + bool alloc_found = false, free_found = false;
> +
> info->bug_type = get_bug_type(info);
> +
> + if (!info->cache || !info->object)
> + return;
> + }
> +
> + write_lock_irqsave(&stack_ring.lock, flags);
> +
> + pos = atomic64_read(&stack_ring.pos);
> +
> + /*
> + * The loop below tries to find stack ring entries relevant to the
> + * buggy object. This is a best-effort process.
> + *
> + * First, another object with the same tag can be allocated in place of
> + * the buggy object. Also, since the number of entries is limited, the
> + * entries relevant to the buggy object can be overwritten.
> + */
> +
> + for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
> + if (alloc_found && free_found)
> + break;
> +
> + entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
> +
> + /* Paired with smp_store_release() in save_stack_info(). */
> + ptr = (void *)smp_load_acquire(&entry->ptr);
> +
> + if (kasan_reset_tag(ptr) != info->object ||
> + get_tag(ptr) != get_tag(info->access_addr))
> + continue;
> +
> + pid = READ_ONCE(entry->pid);
> + stack = READ_ONCE(entry->stack);
> + is_free = READ_ONCE(entry->is_free);
> +
> + /* Try detecting if the entry was changed while being read. */
> + smp_mb();
> + if (ptr != (void *)READ_ONCE(entry->ptr))
> + continue;
I thought the re-validation is no longer needed because of the rwlock
protection?
The rest looks fine now.
> + if (is_free) {
> + /*
> + * Second free of the same object.
> + * Give up on trying to find the alloc entry.
> + */
> + if (free_found)
> + break;
> +
> + info->free_track.pid = pid;
> + info->free_track.stack = stack;
> + free_found = true;
> + } else {
> + /* Second alloc of the same object. Give up. */
> + if (alloc_found)
> + break;
> +
> + info->alloc_track.pid = pid;
> + info->alloc_track.stack = stack;
> + alloc_found = true;
> + }
> + }
> +
> + write_unlock_irqrestore(&stack_ring.lock, flags);
> }
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 39a0481e5228..07828021c1f5 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -6,6 +6,7 @@
> * Copyright (c) 2020 Google, Inc.
> */
>
> +#include <linux/atomic.h>
> #include <linux/init.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> @@ -16,11 +17,60 @@
> #include <linux/types.h>
>
> #include "kasan.h"
> +#include "../slab.h"
> +
> +/* Non-zero, as initial pointer values are 0. */
> +#define STACK_RING_BUSY_PTR ((void *)1)
> +
> +struct kasan_stack_ring stack_ring;
> +
> +static void save_stack_info(struct kmem_cache *cache, void *object,
> + gfp_t gfp_flags, bool is_free)
> +{
> + unsigned long flags;
> + depot_stack_handle_t stack;
> + u64 pos;
> + struct kasan_stack_ring_entry *entry;
> + void *old_ptr;
> +
> + stack = kasan_save_stack(gfp_flags, true);
> +
> + /*
> + * Prevent save_stack_info() from modifying stack ring
> + * when kasan_complete_mode_report_info() is walking it.
> + */
> + read_lock_irqsave(&stack_ring.lock, flags);
> +
> +next:
> + pos = atomic64_fetch_add(1, &stack_ring.pos);
> + entry = &stack_ring.entries[pos % KASAN_STACK_RING_SIZE];
> +
> + /* Detect stack ring entry slots that are being written to. */
> + old_ptr = READ_ONCE(entry->ptr);
> + if (old_ptr == STACK_RING_BUSY_PTR)
> + goto next; /* Busy slot. */
> + if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
> + goto next; /* Busy slot. */
> +
> + WRITE_ONCE(entry->size, cache->object_size);
> + WRITE_ONCE(entry->pid, current->pid);
> + WRITE_ONCE(entry->stack, stack);
> + WRITE_ONCE(entry->is_free, is_free);
> +
> + /*
> + * Paired with smp_load_acquire() in kasan_complete_mode_report_info().
> + */
> + smp_store_release(&entry->ptr, (s64)object);
> +
> + read_unlock_irqrestore(&stack_ring.lock, flags);
> +}
>
> void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> {
> + save_stack_info(cache, object, flags, false);
> }
>
> void kasan_save_free_info(struct kmem_cache *cache, void *object)
> {
> + save_stack_info(cache, object, GFP_NOWAIT, true);
> }
> --
> 2.25.1
>
On Tue, Jul 19, 2022 at 1:41 PM Marco Elver <[email protected]> wrote:
>
> > + for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
> > + if (alloc_found && free_found)
> > + break;
> > +
> > + entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
> > +
> > + /* Paired with smp_store_release() in save_stack_info(). */
> > + ptr = (void *)smp_load_acquire(&entry->ptr);
> > +
> > + if (kasan_reset_tag(ptr) != info->object ||
> > + get_tag(ptr) != get_tag(info->access_addr))
> > + continue;
> > +
> > + pid = READ_ONCE(entry->pid);
> > + stack = READ_ONCE(entry->stack);
> > + is_free = READ_ONCE(entry->is_free);
> > +
> > + /* Try detecting if the entry was changed while being read. */
> > + smp_mb();
> > + if (ptr != (void *)READ_ONCE(entry->ptr))
> > + continue;
>
> I thought the re-validation is no longer needed because of the rwlock
> protection?
Oh, yes, forgot to remove this. Will either do in v3 if there are more
things to fix, or will just send a small fix-up patch if the rest of
the series looks good.
> The rest looks fine now.
Thank you, Marco!
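(For reference, the agreed fix-up would simply drop the stale re-validation
from the loop; a sketch, not the actual patch:

-		/* Try detecting if the entry was changed while being read. */
-		smp_mb();
-		if (ptr != (void *)READ_ONCE(entry->ptr))
-			continue;

Since the reporting side takes the write lock while writers take the read
lock, no entry can change while the loop reads it, so the plain READ_ONCE()
copies of pid, stack, and is_free are already stable.)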
On Thu, Jul 21, 2022 at 10:41 PM Andrey Konovalov <[email protected]> wrote:
>
> On Tue, Jul 19, 2022 at 1:41 PM Marco Elver <[email protected]> wrote:
> >
> > > + for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
> > > + if (alloc_found && free_found)
> > > + break;
> > > +
> > > + entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
> > > +
> > > + /* Paired with smp_store_release() in save_stack_info(). */
> > > + ptr = (void *)smp_load_acquire(&entry->ptr);
> > > +
> > > + if (kasan_reset_tag(ptr) != info->object ||
> > > + get_tag(ptr) != get_tag(info->access_addr))
> > > + continue;
> > > +
> > > + pid = READ_ONCE(entry->pid);
> > > + stack = READ_ONCE(entry->stack);
> > > + is_free = READ_ONCE(entry->is_free);
> > > +
> > > + /* Try detecting if the entry was changed while being read. */
> > > + smp_mb();
> > > + if (ptr != (void *)READ_ONCE(entry->ptr))
> > > + continue;
> >
> > I thought the re-validation is no longer needed because of the rwlock
> > protection?
>
> Oh, yes, forgot to remove this. Will either do in v3 if there are more
> things to fix, or will just send a small fix-up patch if the rest of
> the series looks good.
>
> > The rest looks fine now.
>
> Thank you, Marco!
Hi Marco,
I'm thinking of sending a v3.
Does your "The rest looks fine now" comment refer only to this patch
or to the whole series? If it's the former, could you PTAL at the
other patches?
Thanks!
On Tue, Jul 19, 2022 at 02:10AM +0200, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
>
> Instead of using a large static array, allocate the stack ring dynamically
> via memblock_alloc().
>
> The size of the stack ring is controlled by a new kasan.stack_ring_size
> command-line parameter. When kasan.stack_ring_size is not provided, the
> default value of 32 << 10 is used.
>
> When the stack trace collection is disabled via kasan.stacktrace=off,
> the stack ring is not allocated.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
>
> ---
>
> Changes v1->v2:
> - This is a new patch.
> ---
> mm/kasan/kasan.h | 5 +++--
> mm/kasan/report_tags.c | 4 ++--
> mm/kasan/tags.c | 22 +++++++++++++++++++++-
> 3 files changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 447baf1a7a2e..4afe4db751da 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -252,12 +252,13 @@ struct kasan_stack_ring_entry {
> bool is_free;
> };
>
> -#define KASAN_STACK_RING_SIZE (32 << 10)
> +#define KASAN_STACK_RING_SIZE_DEFAULT (32 << 10)
>
This could be moved to tags.c, as it has no users elsewhere.
> struct kasan_stack_ring {
> rwlock_t lock;
> + size_t size;
> atomic64_t pos;
> - struct kasan_stack_ring_entry entries[KASAN_STACK_RING_SIZE];
> + struct kasan_stack_ring_entry *entries;
> };
>
> #endif /* CONFIG_KASAN_SW_TAGS || CONFIG_KASAN_HW_TAGS */
> diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
> index a996489e6dac..7e267e69ce19 100644
> --- a/mm/kasan/report_tags.c
> +++ b/mm/kasan/report_tags.c
> @@ -56,11 +56,11 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
> * entries relevant to the buggy object can be overwritten.
> */
>
> - for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
> + for (u64 i = pos - 1; i != pos - 1 - stack_ring.size; i--) {
> if (alloc_found && free_found)
> break;
>
> - entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
> + entry = &stack_ring.entries[i % stack_ring.size];
>
> /* Paired with smp_store_release() in save_stack_info(). */
> ptr = (void *)smp_load_acquire(&entry->ptr);
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index 0eb6cf6717db..fd8c5f919156 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -10,6 +10,7 @@
> #include <linux/init.h>
> #include <linux/kasan.h>
> #include <linux/kernel.h>
> +#include <linux/memblock.h>
> #include <linux/memory.h>
> #include <linux/mm.h>
> #include <linux/static_key.h>
> @@ -52,6 +53,16 @@ static int __init early_kasan_flag_stacktrace(char *arg)
> }
> early_param("kasan.stacktrace", early_kasan_flag_stacktrace);
>
> +/* kasan.stack_ring_size=32768 */
What is that comment meant to say? Is it "kasan.stack_ring_size=<entries>"?
Is it already in the documentation?
> +static int __init early_kasan_flag_stack_ring_size(char *arg)
> +{
> + if (!arg)
> + return -EINVAL;
> +
> + return kstrtoul(arg, 0, &stack_ring.size);
> +}
> +early_param("kasan.stack_ring_size", early_kasan_flag_stack_ring_size);
> +
> void __init kasan_init_tags(void)
> {
> switch (kasan_arg_stacktrace) {
> @@ -65,6 +76,15 @@ void __init kasan_init_tags(void)
> static_branch_enable(&kasan_flag_stacktrace);
> break;
> }
> +
> + if (kasan_stack_collection_enabled()) {
> + if (!stack_ring.size)
> + stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT;
> + stack_ring.entries = memblock_alloc(
> + sizeof(stack_ring.entries[0]) *
> + stack_ring.size,
> + SMP_CACHE_BYTES);
memblock_alloc() can fail. Even though that's unlikely, stack collection
should probably just be disabled in that case.
(minor: the excessive line breaks make the above unreadable.)
> + }
> }
>
> static void save_stack_info(struct kmem_cache *cache, void *object,
> @@ -86,7 +106,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
>
> next:
> pos = atomic64_fetch_add(1, &stack_ring.pos);
> - entry = &stack_ring.entries[pos % KASAN_STACK_RING_SIZE];
> + entry = &stack_ring.entries[pos % stack_ring.size];
>
> /* Detect stack ring entry slots that are being written to. */
> old_ptr = READ_ONCE(entry->ptr);
> --
> 2.25.1
On Tue, 2 Aug 2022 at 22:45, Andrey Konovalov <[email protected]> wrote:
>
> On Thu, Jul 21, 2022 at 10:41 PM Andrey Konovalov <[email protected]> wrote:
> >
> > On Tue, Jul 19, 2022 at 1:41 PM Marco Elver <[email protected]> wrote:
> > >
> > > > + for (u64 i = pos - 1; i != pos - 1 - KASAN_STACK_RING_SIZE; i--) {
> > > > + if (alloc_found && free_found)
> > > > + break;
> > > > +
> > > > + entry = &stack_ring.entries[i % KASAN_STACK_RING_SIZE];
> > > > +
> > > > + /* Paired with smp_store_release() in save_stack_info(). */
> > > > + ptr = (void *)smp_load_acquire(&entry->ptr);
> > > > +
> > > > + if (kasan_reset_tag(ptr) != info->object ||
> > > > + get_tag(ptr) != get_tag(info->access_addr))
> > > > + continue;
> > > > +
> > > > + pid = READ_ONCE(entry->pid);
> > > > + stack = READ_ONCE(entry->stack);
> > > > + is_free = READ_ONCE(entry->is_free);
> > > > +
> > > > + /* Try detecting if the entry was changed while being read. */
> > > > + smp_mb();
> > > > + if (ptr != (void *)READ_ONCE(entry->ptr))
> > > > + continue;
> > >
> > > I thought the re-validation is no longer needed because of the rwlock
> > > protection?
> >
> > Oh, yes, I forgot to remove this. I'll either do it in v3 if there are
> > more things to fix, or just send a small fix-up patch if the rest of
> > the series looks good.
> >
> > > The rest looks fine now.
> >
> > Thank you, Marco!
>
> Hi Marco,
>
> I'm thinking of sending a v3.
>
> Does your "The rest looks fine now" comment refer only to this patch
> or to the whole series? If it's the former, could you PTAL at the
> other patches?
I just looked again. Apart from the comments I just sent, overall it
looks fine (whole series).
Does test_kasan exercise the ring wrapping around? One thing that
might be worth doing is adding a multi-threaded stress test where 2+
threads do lots of allocations and frees and generate reports.
On Wed, Aug 3, 2022 at 10:29 PM Marco Elver <[email protected]> wrote:
>
> > Does your "The rest looks fine now" comment refer only to this patch
> > or to the whole series? If it's the former, could you PTAL at the
> > other patches?
>
> I just looked again. Apart from the comments I just sent, overall it
> looks fine (whole series).
Great, thanks! I'll put your Reviewed-by on all patches except the
ones I will change in v3.
> Does test_kasan exercise the ring wrapping around? One thing that
> might be worth doing is adding a multi-threaded stress test where 2+
> threads do lots of allocations and frees and generate reports.
There's probably not much sense in adding a dedicated test for the
ring wrapping around: that path is already exercised during kernel
boot. Even with defconfig, the stack ring wraps around multiple times.
I will, however, add a test for a complicated use-after-free scenario
to make sure that KASAN points at the right kmalloc/kfree calls; a
rough sketch is below. Until I get to implementing [1], the report
contents will have to be checked manually, though.
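The test would be something along these lines (hypothetical, names
illustrative; KUNIT_EXPECT_KASAN_FAIL is the existing KASAN test
helper):

	static void kmalloc_uaf_after_reallocs(struct kunit *test)
	{
		char *stale;
		int i;

		stale = kmalloc(128, GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, stale);
		kfree(stale);

		/* Churn the slab so newer entries land in the stack ring. */
		for (i = 0; i < 8; i++) {
			char *fresh = kmalloc(128, GFP_KERNEL);

			KUNIT_ASSERT_NOT_ERR_OR_NULL(test, fresh);
			kfree(fresh);
		}

		/*
		 * The report for this access should point at the
		 * kmalloc/kfree of 'stale', not at the later
		 * reallocations; for now this is checked by eye.
		 */
		KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)stale)[0]);
	}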
Thanks!
[1] https://bugzilla.kernel.org/show_bug.cgi?id=212203
On Wed, Aug 3, 2022 at 10:09 PM Marco Elver <[email protected]> wrote:
>
> > -#define KASAN_STACK_RING_SIZE (32 << 10)
> > +#define KASAN_STACK_RING_SIZE_DEFAULT (32 << 10)
> >
>
> This could be moved to tags.c, as it has no users elsewhere.
Will fix in v3.
> > +/* kasan.stack_ring_size=32768 */
>
> What is that comment meant to say? Is it "kasan.stack_ring_size=<entries>"?
Yes, will clarify in v3.
> Is it already in the documentation?
Will add in v3.
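Tentatively something like this for Documentation/dev-tools/kasan.rst
(exact wording TBD):

	kasan.stack_ring_size=<number of entries> controls the size of
	the stack ring used for storing alloc/free stack traces
	(default: 32768; only relevant when stack trace collection is
	enabled).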
> > + if (kasan_stack_collection_enabled()) {
> > + if (!stack_ring.size)
> > + stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT;
> > + stack_ring.entries = memblock_alloc(
> > + sizeof(stack_ring.entries[0]) *
> > + stack_ring.size,
> > + SMP_CACHE_BYTES);
>
> memblock_alloc() can fail. Even though that's unlikely, stack collection
> should probably just be disabled in that case.
>
> (minor: the excessive line breaks make the above unreadable.)
Will fix both in v3.
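Roughly this shape, assuming nothing prevents reusing the
kasan_flag_stacktrace static branch for the fallback (a sketch only,
not the final code):

	if (kasan_stack_collection_enabled()) {
		if (!stack_ring.size)
			stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT;
		stack_ring.entries = memblock_alloc(sizeof(stack_ring.entries[0]) *
							stack_ring.size,
						    SMP_CACHE_BYTES);
		/*
		 * If the early allocation fails, keep KASAN itself
		 * working but give up on collecting stack traces.
		 */
		if (!stack_ring.entries) {
			pr_err("kasan: failed to allocate stack ring, stack traces disabled\n");
			static_branch_disable(&kasan_flag_stacktrace);
		}
	}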
Thanks!