2022-06-13 21:51:07

by andrey.konovalov

Subject: [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata

From: Andrey Konovalov <[email protected]>

This series makes the tag-based KASAN modes use a ring buffer for storing
stack depot handles for alloc/free stack traces for slab objects instead
of per-object metadata. This ring buffer is referred to as the stack ring.

On each alloc/free of a slab object, the tagged address of the object and
the current stack trace are recorded in the stack ring.

On each bug report, if the accessed address belongs to a slab object, the
stack ring is scanned for matching entries. The newest entries are used to
print the alloc/free stack traces in the report: one entry for alloc and
one for free.

The ring buffer is lock-free.
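
As a rough illustration, the stack ring conceptually looks like the sketch
below. All names, field layouts, and the ring size are made up for this
example and do not necessarily match the actual implementation in the
patches (the usual kernel headers are assumed):

#define STACK_RING_SIZE 32768   /* hypothetical size */

struct stack_ring_entry {
        void *ptr;                      /* tagged object address */
        u32 pid;                        /* task that allocated/freed the object */
        depot_stack_handle_t stack;     /* stack depot handle */
        bool is_free;                   /* free event, as opposed to alloc */
};

struct stack_ring {
        atomic64_t pos;                 /* ever-increasing write position */
        struct stack_ring_entry entries[STACK_RING_SIZE];
};

static struct stack_ring stack_ring;

static void stack_ring_save(void *tagged_ptr, depot_stack_handle_t stack,
                            bool is_free)
{
        /* A single atomic increment claims a slot; old entries are overwritten. */
        u64 pos = atomic64_fetch_add(1, &stack_ring.pos);
        struct stack_ring_entry *entry = &stack_ring.entries[pos % STACK_RING_SIZE];

        entry->ptr = tagged_ptr;
        entry->pid = current->pid;
        entry->stack = stack;
        entry->is_free = is_free;
}

The atomic increment of the write position is what keeps recording
lock-free: writers never block, and the oldest entries simply get
overwritten.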

The advantages of this approach over storing stack trace handles in
per-object metadata with the tag-based KASAN modes:

- Allows finding relevant stack traces for use-after-free bugs without
using quarantine for freed memory. (Currently, if the object was
reallocated multiple times, the report contains the latest alloc/free
stack traces, not necessarily the ones relevant to the buggy allocation.)
- Allows better identification and marking of use-after-free bugs,
effectively making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
- Has fixed memory overhead.

The disadvantage:

- If the affected object was allocated/freed long before the bug happened
and the stack trace events were purged from the stack ring, the report
will have no stack traces.

Discussion
==========

The current implementation of the stack ring uses a single ring buffer for
the whole kernel. This might lead to contention due to atomic accesses to
the ring buffer index on multicore systems.

It is unclear to me whether the performance impact from this contention
is significant compared to the slowdown introduced by collecting stack
traces.

While these patches are being reviewed, I will do some tests on the arm64
hardware that I have. However, I do not have a large multicore arm64
system to do proper measurements.

A considered alternative is to keep a separate ring buffer for each CPU
and then iterate over all of them when printing a bug report. This approach
requires somehow figuring out which of the stack rings has the freshest
stack traces for an object if multiple stack rings have them.
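
For illustration of that problem only (all names below are hypothetical
and not taken from the patches; STACK_RING_SIZE is reused from the sketch
above, and each entry additionally carries a global sequence number),
report printing with per-CPU rings would have to do something like this:

struct pcpu_ring_entry {
        void *ptr;                      /* tagged object address */
        u64 seq;                        /* taken from a global counter */
        depot_stack_handle_t stack;
};

struct pcpu_ring {
        struct pcpu_ring_entry entries[STACK_RING_SIZE];
};

static DEFINE_PER_CPU(struct pcpu_ring, stack_rings);

static bool find_newest_match(void *ptr, struct pcpu_ring_entry *out)
{
        u64 newest = 0;
        bool found = false;
        int i, cpu;

        for_each_possible_cpu(cpu) {
                struct pcpu_ring *ring = per_cpu_ptr(&stack_rings, cpu);

                /* Keep the matching entry with the largest sequence number. */
                for (i = 0; i < STACK_RING_SIZE; i++) {
                        if (ring->entries[i].ptr == ptr &&
                            ring->entries[i].seq >= newest) {
                                newest = ring->entries[i].seq;
                                *out = ring->entries[i];
                                found = true;
                        }
                }
        }
        return found;
}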

Further plans
=============

This series is a part of an effort to make KASAN stack trace collection
suitable for production. This requires stack trace collection to be fast
and memory-bounded.

The planned steps are:

1. Speed up stack trace collection (potentially, by using SCS;
patches on-hold until steps #2 and #3 are completed).
2. Keep stack trace handles in the stack ring (this series).
3. Add a memory-bounded mode to stack depot or provide an alternative
memory-bounded stack storage.
4. Potentially, implement stack trace collection sampling to minimize
the performance impact.

Thanks!

Andrey Konovalov (32):
kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
kasan: rename kasan_set_*_info to kasan_save_*_info
kasan: move is_kmalloc check out of save_alloc_info
kasan: split save_alloc_info implementations
kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
kasan: introduce kasan_print_aux_stacks
kasan: introduce kasan_get_alloc_track
kasan: introduce kasan_init_object_meta
kasan: clear metadata functions for tag-based modes
kasan: move kasan_get_*_meta to generic.c
kasan: introduce kasan_requires_meta
kasan: introduce kasan_init_cache_meta
kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
kasan: only define kasan_metadata_size for Generic mode
kasan: only define kasan_never_merge for Generic mode
kasan: only define metadata offsets for Generic mode
kasan: only define metadata structs for Generic mode
kasan: only define kasan_cache_create for Generic mode
kasan: pass tagged pointers to kasan_save_alloc/free_info
kasan: move kasan_get_alloc/free_track definitions
kasan: simplify invalid-free reporting
kasan: cosmetic changes in report.c
kasan: use kasan_addr_to_slab in print_address_description
kasan: move kasan_addr_to_slab to common.c
kasan: make kasan_addr_to_page static
kasan: simplify print_report
kasan: introduce complete_report_info
kasan: fill in cache and object in complete_report_info
kasan: rework function arguments in report.c
kasan: introduce kasan_complete_mode_report_info
kasan: implement stack ring for tag-based modes
kasan: better identify bug types for tag-based modes

include/linux/kasan.h | 55 +++++-------
include/linux/slab.h | 2 +-
lib/Kconfig.kasan | 8 --
mm/kasan/common.c | 173 ++++----------------------------------
mm/kasan/generic.c | 154 ++++++++++++++++++++++++++++++---
mm/kasan/kasan.h | 138 ++++++++++++++++++++----------
mm/kasan/report.c | 130 +++++++++++++---------------
mm/kasan/report_generic.c | 45 +++++++++-
mm/kasan/report_tags.c | 114 ++++++++++++++++++-------
mm/kasan/tags.c | 61 +++++++-------
10 files changed, 491 insertions(+), 389 deletions(-)

--
2.25.1


2022-06-13 21:54:29

by andrey.konovalov

Subject: [PATCH 23/32] kasan: use kasan_addr_to_slab in print_address_description

From: Andrey Konovalov <[email protected]>

Use the kasan_addr_to_slab() helper in print_address_description()
instead of separately invoking PageSlab() and page_slab().

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/report.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 879f949dc395..1dd6fc8a678f 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -291,12 +291,12 @@ static inline bool init_task_stack_addr(const void *addr)
static void print_address_description(void *addr, u8 tag)
{
struct page *page = kasan_addr_to_page(addr);
+ struct slab *slab = kasan_addr_to_slab(addr);

dump_stack_lvl(KERN_ERR);
pr_err("\n");

- if (page && PageSlab(page)) {
- struct slab *slab = page_slab(page);
+ if (slab) {
struct kmem_cache *cache = slab->slab_cache;
void *object = nearest_obj(cache, slab, addr);

--
2.25.1

2022-06-13 21:54:34

by andrey.konovalov

Subject: [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info

From: Andrey Konovalov <[email protected]>

Move the kasan_info.is_kmalloc check out of save_alloc_info().

This is a preparatory change that simplifies the following patches
in this series.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 753775b894b6..a6107e8375e0 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -423,15 +423,10 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
}
}

-static void save_alloc_info(struct kmem_cache *cache, void *object,
- gfp_t flags, bool is_kmalloc)
+static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
{
struct kasan_alloc_meta *alloc_meta;

- /* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
- if (cache->kasan_info.is_kmalloc && !is_kmalloc)
- return;
-
alloc_meta = kasan_get_alloc_meta(cache, object);
if (alloc_meta)
kasan_set_track(&alloc_meta->alloc_track, flags);
@@ -466,8 +461,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
kasan_unpoison(tagged_object, cache->object_size, init);

/* Save alloc info (if possible) for non-kmalloc() allocations. */
- if (kasan_stack_collection_enabled())
- save_alloc_info(cache, (void *)object, flags, false);
+ if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
+ save_alloc_info(cache, (void *)object, flags);

return tagged_object;
}
@@ -512,8 +507,8 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
* Save alloc info (if possible) for kmalloc() allocations.
* This also rewrites the alloc info when called from kasan_krealloc().
*/
- if (kasan_stack_collection_enabled())
- save_alloc_info(cache, (void *)object, flags, true);
+ if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
+ save_alloc_info(cache, (void *)object, flags);

/* Keep the tag that was set by kasan_slab_alloc(). */
return (void *)object;
--
2.25.1

2022-06-13 21:55:25

by andrey.konovalov

Subject: [PATCH 28/32] kasan: fill in cache and object in complete_report_info

From: Andrey Konovalov <[email protected]>

Add cache and object fields to kasan_report_info and populate them in
complete_report_info() instead of fetching them in the middle of the
report printing code.

This gives the reporting code access to the object information before it
starts printing the report. One of the following patches uses this
information to determine the bug type for the tag-based modes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 2 ++
mm/kasan/report.c | 21 +++++++++++++--------
2 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 0261d1530055..b9bd9f1656bf 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -156,6 +156,8 @@ struct kasan_report_info {

/* Filled in by the common reporting code. */
void *first_bad_addr;
+ struct kmem_cache *cache;
+ void *object;
};

/* Do not change the struct layout: compiler ABI. */
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 214ba7cb654c..a6b36eb4c33b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -281,19 +281,16 @@ static inline bool init_task_stack_addr(const void *addr)
sizeof(init_thread_union.stack));
}

-static void print_address_description(void *addr, u8 tag)
+static void print_address_description(void *addr, u8 tag,
+ struct kasan_report_info *info)
{
struct page *page = addr_to_page(addr);
- struct slab *slab = kasan_addr_to_slab(addr);

dump_stack_lvl(KERN_ERR);
pr_err("\n");

- if (slab) {
- struct kmem_cache *cache = slab->slab_cache;
- void *object = nearest_obj(cache, slab, addr);
-
- describe_object(cache, object, addr, tag);
+ if (info->cache && info->object) {
+ describe_object(info->cache, info->object, addr, tag);
pr_err("\n");
}

@@ -400,7 +397,7 @@ static void print_report(struct kasan_report_info *info)
pr_err("\n");

if (addr_has_metadata(addr)) {
- print_address_description(addr, tag);
+ print_address_description(addr, tag, info);
print_memory_metadata(info->first_bad_addr);
} else {
dump_stack_lvl(KERN_ERR);
@@ -410,12 +407,20 @@ static void print_report(struct kasan_report_info *info)
static void complete_report_info(struct kasan_report_info *info)
{
void *addr = kasan_reset_tag(info->access_addr);
+ struct slab *slab;

if (info->is_free)
info->first_bad_addr = addr;
else
info->first_bad_addr = kasan_find_first_bad_addr(
info->access_addr, info->access_size);
+
+ slab = kasan_addr_to_slab(addr);
+ if (slab) {
+ info->cache = slab->slab_cache;
+ info->object = nearest_obj(info->cache, slab, addr);
+ } else
+ info->cache = info->object = NULL;
}

void kasan_report_invalid_free(void *ptr, unsigned long ip)
--
2.25.1

2022-06-13 21:57:52

by andrey.konovalov

Subject: [PATCH 13/32] kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta

From: Andrey Konovalov <[email protected]>

As kasan_init_cache_meta() is only defined for the Generic mode, it does
not require the CONFIG_KASAN_GENERIC check.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/generic.c | 6 ------
1 file changed, 6 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 73aea784040a..5125fad76f70 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -367,12 +367,6 @@ void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int *size)
/* Continue, since free meta might still fit. */
}

- /* Only the generic mode uses free meta or flexible redzones. */
- if (!IS_ENABLED(CONFIG_KASAN_GENERIC)) {
- cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
- return;
- }
-
/*
* Add free meta into redzone when it's not possible to store
* it in the object. This is the case when:
--
2.25.1

2022-06-13 21:58:08

by andrey.konovalov

Subject: [PATCH 25/32] kasan: make kasan_addr_to_page static

From: Andrey Konovalov <[email protected]>

As kasan_addr_to_page() is only used in report.c, rename it to
addr_to_page() and make it static.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 1 -
mm/kasan/report.c | 4 ++--
2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index f696d50b09fb..e3f100833154 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -285,7 +285,6 @@ bool kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
void kasan_report_invalid_free(void *object, unsigned long ip);

-struct page *kasan_addr_to_page(const void *addr);
struct slab *kasan_addr_to_slab(const void *addr);

#ifdef CONFIG_KASAN_GENERIC
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index ed8234516bab..f3ec6f86b199 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -200,7 +200,7 @@ static void print_track(struct kasan_track *track, const char *prefix)
pr_err("(stack is not available)\n");
}

-struct page *kasan_addr_to_page(const void *addr)
+static inline struct page *addr_to_page(const void *addr)
{
if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
return virt_to_head_page(addr);
@@ -283,7 +283,7 @@ static inline bool init_task_stack_addr(const void *addr)

static void print_address_description(void *addr, u8 tag)
{
- struct page *page = kasan_addr_to_page(addr);
+ struct page *page = addr_to_page(addr);
struct slab *slab = kasan_addr_to_slab(addr);

dump_stack_lvl(KERN_ERR);
--
2.25.1

2022-06-13 22:01:01

by andrey.konovalov

Subject: [PATCH 06/32] kasan: introduce kasan_print_aux_stacks

From: Andrey Konovalov <[email protected]>

Add a kasan_print_aux_stacks() helper that prints the auxiliary stack
traces for the Generic mode.

This change hides references to alloc_meta from the common reporting code.
This is desired as only the Generic mode will be using per-object metadata
after this series.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 6 ++++++
mm/kasan/report.c | 15 +--------------
mm/kasan/report_generic.c | 20 ++++++++++++++++++++
3 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index aa6b43936f8d..bcea5ed15631 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -265,6 +265,12 @@ void kasan_print_address_stack_frame(const void *addr);
static inline void kasan_print_address_stack_frame(const void *addr) { }
#endif

+#ifdef CONFIG_KASAN_GENERIC
+void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
+#else
+static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
+#endif
+
bool kasan_report(unsigned long addr, size_t size,
bool is_write, unsigned long ip);
void kasan_report_invalid_free(void *object, unsigned long ip);
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index b341a191651d..35dd8aeb115c 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -266,20 +266,7 @@ static void describe_object_stacks(struct kmem_cache *cache, void *object,
pr_err("\n");
}

-#ifdef CONFIG_KASAN_GENERIC
- if (!alloc_meta)
- return;
- if (alloc_meta->aux_stack[0]) {
- pr_err("Last potentially related work creation:\n");
- stack_depot_print(alloc_meta->aux_stack[0]);
- pr_err("\n");
- }
- if (alloc_meta->aux_stack[1]) {
- pr_err("Second to last potentially related work creation:\n");
- stack_depot_print(alloc_meta->aux_stack[1]);
- pr_err("\n");
- }
-#endif
+ kasan_print_aux_stacks(cache, object);
}

static void describe_object(struct kmem_cache *cache, void *object,
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 6689fb9a919b..348dc207d462 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -132,6 +132,26 @@ void kasan_metadata_fetch_row(char *buffer, void *row)
memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW);
}

+void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object)
+{
+ struct kasan_alloc_meta *alloc_meta;
+
+ alloc_meta = kasan_get_alloc_meta(cache, object);
+ if (!alloc_meta)
+ return;
+
+ if (alloc_meta->aux_stack[0]) {
+ pr_err("Last potentially related work creation:\n");
+ stack_depot_print(alloc_meta->aux_stack[0]);
+ pr_err("\n");
+ }
+ if (alloc_meta->aux_stack[1]) {
+ pr_err("Second to last potentially related work creation:\n");
+ stack_depot_print(alloc_meta->aux_stack[1]);
+ pr_err("\n");
+ }
+}
+
#ifdef CONFIG_KASAN_STACK
static bool __must_check tokenize_frame_descr(const char **frame_descr,
char *token, size_t max_tok_len,
--
2.25.1

2022-06-13 22:02:08

by andrey.konovalov

Subject: [PATCH 30/32] kasan: introduce kasan_complete_mode_report_info

From: Andrey Konovalov <[email protected]>

Add bug_type and alloc/free_track fields to kasan_report_info and add a
kasan_complete_mode_report_info() function that fills in these fields.
This function is implemented differently for the different KASAN modes.

Change the reporting code to use the filled-in fields instead of
invoking kasan_get_bug_type() and kasan_get_alloc/free_track().

For the Generic mode, kasan_complete_mode_report_info() invokes these
functions instead. For the tag-based modes, only the bug_type field is
filled in; alloc/free_track are handled in the next patch.

Using a single function that fills in these fields is required for the
tag-based modes, as the values for all three fields are determined in a
single procedure implemented in the following patch.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/kasan.h | 33 +++++++++++++++++----------------
mm/kasan/report.c | 29 ++++++++++++++---------------
mm/kasan/report_generic.c | 32 +++++++++++++++++---------------
mm/kasan/report_tags.c | 13 +++----------
4 files changed, 51 insertions(+), 56 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b9bd9f1656bf..c51cea31ced0 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -146,6 +146,13 @@ static inline bool kasan_requires_meta(void)
#define META_MEM_BYTES_PER_ROW (META_BYTES_PER_ROW * KASAN_GRANULE_SIZE)
#define META_ROWS_AROUND_ADDR 2

+#define KASAN_STACK_DEPTH 64
+
+struct kasan_track {
+ u32 pid;
+ depot_stack_handle_t stack;
+};
+
struct kasan_report_info {
/* Filled in by kasan_report_*(). */
void *access_addr;
@@ -158,6 +165,11 @@ struct kasan_report_info {
void *first_bad_addr;
struct kmem_cache *cache;
void *object;
+
+ /* Filled in by the mode-specific reporting code. */
+ const char *bug_type;
+ struct kasan_track alloc_track;
+ struct kasan_track free_track;
};

/* Do not change the struct layout: compiler ABI. */
@@ -183,14 +195,7 @@ struct kasan_global {
#endif
};

-/* Structures for keeping alloc and free tracks. */
-
-#define KASAN_STACK_DEPTH 64
-
-struct kasan_track {
- u32 pid;
- depot_stack_handle_t stack;
-};
+/* Structures for keeping alloc and free meta. */

#ifdef CONFIG_KASAN_GENERIC

@@ -264,16 +269,16 @@ static inline bool addr_has_metadata(const void *addr)

#endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */

+void *kasan_find_first_bad_addr(void *addr, size_t size);
+void kasan_complete_mode_report_info(struct kasan_report_info *info);
+void kasan_metadata_fetch_row(char *buffer, void *row);
+
#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)
void kasan_print_tags(u8 addr_tag, const void *addr);
#else
static inline void kasan_print_tags(u8 addr_tag, const void *addr) { }
#endif

-void *kasan_find_first_bad_addr(void *addr, size_t size);
-const char *kasan_get_bug_type(struct kasan_report_info *info);
-void kasan_metadata_fetch_row(char *buffer, void *row);
-
#if defined(CONFIG_KASAN_STACK)
void kasan_print_address_stack_frame(const void *addr);
#else
@@ -308,10 +313,6 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
void kasan_save_free_info(struct kmem_cache *cache, void *object);
-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object);
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag);

#if defined(CONFIG_KASAN_GENERIC) && \
(defined(CONFIG_SLAB) || defined(CONFIG_SLUB))
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index a2789d4a05dd..206b7fe64e6b 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -176,7 +176,7 @@ static void end_report(unsigned long *flags, void *addr)
static void print_error_description(struct kasan_report_info *info)
{
const char *bug_type = info->is_free ?
- "double-free or invalid-free" : kasan_get_bug_type(info);
+ "double-free or invalid-free" : info->bug_type;

pr_err("BUG: KASAN: %s in %pS\n", bug_type, (void *)info->ip);
if (info->is_free)
@@ -236,31 +236,25 @@ static void describe_object_addr(const void *addr, struct kmem_cache *cache,
(void *)(object_addr + cache->object_size));
}

-static void describe_object_stacks(u8 tag, struct kasan_report_info *info)
+static void describe_object_stacks(struct kasan_report_info *info)
{
- struct kasan_track *alloc_track;
- struct kasan_track *free_track;
-
- alloc_track = kasan_get_alloc_track(info->cache, info->object);
- if (alloc_track) {
- print_track(alloc_track, "Allocated");
+ if (info->alloc_track.stack) {
+ print_track(&info->alloc_track, "Allocated");
pr_err("\n");
}

- free_track = kasan_get_free_track(info->cache, info->object, tag);
- if (free_track) {
- print_track(free_track, "Freed");
+ if (info->free_track.stack) {
+ print_track(&info->free_track, "Freed");
pr_err("\n");
}

kasan_print_aux_stacks(info->cache, info->object);
}

-static void describe_object(const void *addr, u8 tag,
- struct kasan_report_info *info)
+static void describe_object(const void *addr, struct kasan_report_info *info)
{
if (kasan_stack_collection_enabled())
- describe_object_stacks(tag, info);
+ describe_object_stacks(info);
describe_object_addr(addr, info->cache, info->object);
}

@@ -289,7 +283,7 @@ static void print_address_description(void *addr, u8 tag,
pr_err("\n");

if (info->cache && info->object) {
- describe_object(addr, tag, info);
+ describe_object(addr, info);
pr_err("\n");
}

@@ -420,6 +414,9 @@ static void complete_report_info(struct kasan_report_info *info)
info->object = nearest_obj(info->cache, slab, addr);
} else
info->cache = info->object = NULL;
+
+ /* Fill in mode-specific report info fields. */
+ kasan_complete_mode_report_info(info);
}

void kasan_report_invalid_free(void *ptr, unsigned long ip)
@@ -437,6 +434,7 @@ void kasan_report_invalid_free(void *ptr, unsigned long ip)

start_report(&flags, true);

+ memset(&info, 0, sizeof(info));
info.access_addr = ptr;
info.access_size = 0;
info.is_write = false;
@@ -471,6 +469,7 @@ bool kasan_report(unsigned long addr, size_t size, bool is_write,

start_report(&irq_flags, true);

+ memset(&info, 0, sizeof(info));
info.access_addr = ptr;
info.access_size = size;
info.is_write = is_write;
diff --git a/mm/kasan/report_generic.c b/mm/kasan/report_generic.c
index 74d21786ef09..087c1d8c8145 100644
--- a/mm/kasan/report_generic.c
+++ b/mm/kasan/report_generic.c
@@ -109,7 +109,7 @@ static const char *get_wild_bug_type(struct kasan_report_info *info)
return bug_type;
}

-const char *kasan_get_bug_type(struct kasan_report_info *info)
+static const char *get_bug_type(struct kasan_report_info *info)
{
/*
* If access_size is a negative number, then it has reason to be
@@ -127,25 +127,27 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
return get_wild_bug_type(info);
}

-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object)
+void kasan_complete_mode_report_info(struct kasan_report_info *info)
{
struct kasan_alloc_meta *alloc_meta;
+ struct kasan_free_meta *free_meta;

- alloc_meta = kasan_get_alloc_meta(cache, object);
- if (!alloc_meta)
- return NULL;
+ info->bug_type = get_bug_type(info);

- return &alloc_meta->alloc_track;
-}
+ if (!info->cache || !info->object)
+ return;

-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
-{
- if (*(u8 *)kasan_mem_to_shadow(object) != KASAN_SLAB_FREETRACK)
- return NULL;
- /* Free meta must be present with KASAN_SLAB_FREETRACK. */
- return &kasan_get_free_meta(cache, object)->free_track;
+ alloc_meta = kasan_get_alloc_meta(info->cache, info->object);
+ if (alloc_meta)
+ memcpy(&info->alloc_track, &alloc_meta->alloc_track,
+ sizeof(info->alloc_track));
+
+ if (*(u8 *)kasan_mem_to_shadow(info->object) == KASAN_SLAB_FREETRACK) {
+ /* Free meta must be present with KASAN_SLAB_FREETRACK. */
+ free_meta = kasan_get_free_meta(info->cache, info->object);
+ memcpy(&info->free_track, &free_meta->free_track,
+ sizeof(info->free_track));
+ }
}

void kasan_metadata_fetch_row(char *buffer, void *row)
diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 79b6497d8a81..5cbac2cdb177 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -6,7 +6,7 @@

#include "kasan.h"

-const char *kasan_get_bug_type(struct kasan_report_info *info)
+static const char *get_bug_type(struct kasan_report_info *info)
{
/*
* If access_size is a negative number, then it has reason to be
@@ -22,14 +22,7 @@ const char *kasan_get_bug_type(struct kasan_report_info *info)
return "invalid-access";
}

-struct kasan_track *kasan_get_alloc_track(struct kmem_cache *cache,
- void *object)
+void kasan_complete_mode_report_info(struct kasan_report_info *info)
{
- return NULL;
-}
-
-struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
- void *object, u8 tag)
-{
- return NULL;
+ info->bug_type = get_bug_type(info);
}
--
2.25.1

2022-06-13 22:02:39

by andrey.konovalov

Subject: [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c

From: Andrey Konovalov <[email protected]>

Move the definition of kasan_addr_to_slab() to the common KASAN code,
as this function is not only used by the reporting code.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/common.c | 7 +++++++
mm/kasan/report.c | 7 -------
2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 519fd0b3040b..5d5b4cfae503 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -30,6 +30,13 @@
#include "kasan.h"
#include "../slab.h"

+struct slab *kasan_addr_to_slab(const void *addr)
+{
+ if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
+ return virt_to_slab(addr);
+ return NULL;
+}
+
depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
{
unsigned long entries[KASAN_STACK_DEPTH];
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 1dd6fc8a678f..ed8234516bab 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -207,13 +207,6 @@ struct page *kasan_addr_to_page(const void *addr)
return NULL;
}

-struct slab *kasan_addr_to_slab(const void *addr)
-{
- if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
- return virt_to_slab(addr);
- return NULL;
-}
-
static void describe_object_addr(struct kmem_cache *cache, void *object,
const void *addr)
{
--
2.25.1

2022-06-13 22:02:52

by andrey.konovalov

Subject: [PATCH 29/32] kasan: rework function arguments in report.c

From: Andrey Konovalov <[email protected]>

Pass a pointer to kasan_report_info to describe_object() and
describe_object_stacks(), instead of passing the structure's fields.

The untagged pointer and the tag are still passed as separate arguments
to some of the functions to avoid duplicating the untagging logic.

This is a preparatory change for the next patch.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/kasan/report.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index a6b36eb4c33b..a2789d4a05dd 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -207,8 +207,8 @@ static inline struct page *addr_to_page(const void *addr)
return NULL;
}

-static void describe_object_addr(struct kmem_cache *cache, void *object,
- const void *addr)
+static void describe_object_addr(const void *addr, struct kmem_cache *cache,
+ void *object)
{
unsigned long access_addr = (unsigned long)addr;
unsigned long object_addr = (unsigned long)object;
@@ -236,33 +236,32 @@ static void describe_object_addr(struct kmem_cache *cache, void *object,
(void *)(object_addr + cache->object_size));
}

-static void describe_object_stacks(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object_stacks(u8 tag, struct kasan_report_info *info)
{
struct kasan_track *alloc_track;
struct kasan_track *free_track;

- alloc_track = kasan_get_alloc_track(cache, object);
+ alloc_track = kasan_get_alloc_track(info->cache, info->object);
if (alloc_track) {
print_track(alloc_track, "Allocated");
pr_err("\n");
}

- free_track = kasan_get_free_track(cache, object, tag);
+ free_track = kasan_get_free_track(info->cache, info->object, tag);
if (free_track) {
print_track(free_track, "Freed");
pr_err("\n");
}

- kasan_print_aux_stacks(cache, object);
+ kasan_print_aux_stacks(info->cache, info->object);
}

-static void describe_object(struct kmem_cache *cache, void *object,
- const void *addr, u8 tag)
+static void describe_object(const void *addr, u8 tag,
+ struct kasan_report_info *info)
{
if (kasan_stack_collection_enabled())
- describe_object_stacks(cache, object, addr, tag);
- describe_object_addr(cache, object, addr);
+ describe_object_stacks(tag, info);
+ describe_object_addr(addr, info->cache, info->object);
}

static inline bool kernel_or_module_addr(const void *addr)
@@ -290,7 +289,7 @@ static void print_address_description(void *addr, u8 tag,
pr_err("\n");

if (info->cache && info->object) {
- describe_object(info->cache, info->object, addr, tag);
+ describe_object(addr, tag, info);
pr_err("\n");
}

--
2.25.1

2022-06-15 13:52:26

by kernel test robot

Subject: Re: [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c

Hi,

I love your patch! Perhaps something to improve:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on linus/master v5.19-rc2 next-20220615]
[cannot apply to vbabka-slab/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/intel-lab-lkp/linux/commits/andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
config: s390-allyesconfig (https://download.01.org/0day-ci/archive/20220615/[email protected]/config)
compiler: s390-linux-gcc (GCC) 11.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
git checkout b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash mm/kasan/

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

mm/kasan/common.c: In function 'kasan_addr_to_slab':
>> mm/kasan/common.c:35:19: warning: ordered comparison of pointer with null pointer [-Wextra]
35 | if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
| ^~
mm/kasan/common.c: In function '____kasan_slab_free':
mm/kasan/common.c:202:12: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
202 | u8 tag;
| ^~~


vim +35 mm/kasan/common.c

32
33 struct slab *kasan_addr_to_slab(const void *addr)
34 {
> 35 if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
36 return virt_to_slab(addr);
37 return NULL;
38 }
39

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-06-17 10:23:40

by Marco Elver

Subject: Re: [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata

On Mon, Jun 13, 2022 at 10:13PM +0200, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
>
> This series makes the tag-based KASAN modes use a ring buffer for storing
> stack depot handles for alloc/free stack traces for slab objects instead
> of per-object metadata. This ring buffer is referred to as the stack ring.
>
> On each alloc/free of a slab object, the tagged address of the object and
> the current stack trace are recorded in the stack ring.
>
> On each bug report, if the accessed address belongs to a slab object, the
> stack ring is scanned for matching entries. The newest entries are used to
> print the alloc/free stack traces in the report: one entry for alloc and
> one for free.
>
> The ring buffer is lock-free.
>
> The advantages of this approach over storing stack trace handles in
> per-object metadata with the tag-based KASAN modes:
>
> - Allows finding relevant stack traces for use-after-free bugs without
> using quarantine for freed memory. (Currently, if the object was
> reallocated multiple times, the report contains the latest alloc/free
> stack traces, not necessarily the ones relevant to the buggy allocation.)
> - Allows better identification and marking of use-after-free bugs,
> effectively making the CONFIG_KASAN_TAGS_IDENTIFY functionality always-on.
> - Has fixed memory overhead.
>
> The disadvantage:
>
> - If the affected object was allocated/freed long before the bug happened
> and the stack trace events were purged from the stack ring, the report
> will have no stack traces.

Do you have statistics on how likely this is? Maybe through
identifying what the average lifetime of an entry in the stack ring is?

How bad is this for very long lived objects (e.g. pagecache)?

> Discussion
> ==========
>
> The current implementation of the stack ring uses a single ring buffer for
> the whole kernel. This might lead to contention due to atomic accesses to
> the ring buffer index on multicore systems.
>
> It is unclear to me whether the performance impact from this contention
> is significant compared to the slowdown introduced by collecting stack
> traces.

I agree, but once stack trace collection becomes faster (per your future
plans below), this might need to be revisited.

> While these patches are being reviewed, I will do some tests on the arm64
> hardware that I have. However, I do not have a large multicore arm64
> system to do proper measurements.
>
> A considered alternative is to keep a separate ring buffer for each CPU
> and then iterate over all of them when printing a bug report. This approach
> requires somehow figuring out which of the stack rings has the freshest
> stack traces for an object if multiple stack rings have them.
>
> Further plans
> =============
>
> This series is a part of an effort to make KASAN stack trace collection
> suitable for production. This requires stack trace collection to be fast
> and memory-bounded.
>
> The planned steps are:
>
> 1. Speed up stack trace collection (potentially, by using SCS;
> patches on-hold until steps #2 and #3 are completed).
> 2. Keep stack trace handles in the stack ring (this series).
> 3. Add a memory-bounded mode to stack depot or provide an alternative
> memory-bounded stack storage.
> 4. Potentially, implement stack trace collection sampling to minimize
> the performance impact.
>
> Thanks!
>
> Andrey Konovalov (32):
> kasan: check KASAN_NO_FREE_META in __kasan_metadata_size
> kasan: rename kasan_set_*_info to kasan_save_*_info
> kasan: move is_kmalloc check out of save_alloc_info
> kasan: split save_alloc_info implementations
> kasan: drop CONFIG_KASAN_TAGS_IDENTIFY
> kasan: introduce kasan_print_aux_stacks
> kasan: introduce kasan_get_alloc_track
> kasan: introduce kasan_init_object_meta
> kasan: clear metadata functions for tag-based modes
> kasan: move kasan_get_*_meta to generic.c
> kasan: introduce kasan_requires_meta
> kasan: introduce kasan_init_cache_meta
> kasan: drop CONFIG_KASAN_GENERIC check from kasan_init_cache_meta
> kasan: only define kasan_metadata_size for Generic mode
> kasan: only define kasan_never_merge for Generic mode
> kasan: only define metadata offsets for Generic mode
> kasan: only define metadata structs for Generic mode
> kasan: only define kasan_cache_create for Generic mode
> kasan: pass tagged pointers to kasan_save_alloc/free_info
> kasan: move kasan_get_alloc/free_track definitions
> kasan: simplify invalid-free reporting
> kasan: cosmetic changes in report.c
> kasan: use kasan_addr_to_slab in print_address_description
> kasan: move kasan_addr_to_slab to common.c
> kasan: make kasan_addr_to_page static
> kasan: simplify print_report
> kasan: introduce complete_report_info
> kasan: fill in cache and object in complete_report_info
> kasan: rework function arguments in report.c
> kasan: introduce kasan_complete_mode_report_info
> kasan: implement stack ring for tag-based modes
> kasan: better identify bug types for tag-based modes

Let me go and review the patches now.

Thanks,
-- Marco

2022-06-17 11:47:37

by Marco Elver

Subject: Re: [PATCH 06/32] kasan: introduce kasan_print_aux_stacks

On Mon, 13 Jun 2022 at 22:16, <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Add a kasan_print_aux_stacks() helper that prints the auxiliary stack
> traces for the Generic mode.
>
> This change hides references to alloc_meta from the common reporting code.
> This is desired as only the Generic mode will be using per-object metadata
> after this series.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
> ---
> mm/kasan/kasan.h | 6 ++++++
> mm/kasan/report.c | 15 +--------------
> mm/kasan/report_generic.c | 20 ++++++++++++++++++++
> 3 files changed, 27 insertions(+), 14 deletions(-)
>
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index aa6b43936f8d..bcea5ed15631 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -265,6 +265,12 @@ void kasan_print_address_stack_frame(const void *addr);
> static inline void kasan_print_address_stack_frame(const void *addr) { }
> #endif
>
> +#ifdef CONFIG_KASAN_GENERIC
> +void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
> +#else
> +static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
> +#endif

Why not put this into one of the existing "#ifdef
CONFIG_KASAN_GENERIC" blocks? There are several; probably the one 10
lines down might be ok?

2022-06-20 14:47:45

by Marco Elver

Subject: Re: [PATCH 03/32] kasan: move is_kmalloc check out of save_alloc_info

On Mon, 13 Jun 2022 at 22:15, <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Move the kasan_info.is_kmalloc check out of save_alloc_info().
>
> This is a preparatory change that simplifies the following patches
> in this series.
>
> Signed-off-by: Andrey Konovalov <[email protected]>

Reviewed-by: Marco Elver <[email protected]>


> ---
> mm/kasan/common.c | 15 +++++----------
> 1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 753775b894b6..a6107e8375e0 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -423,15 +423,10 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
> }
> }
>
> -static void save_alloc_info(struct kmem_cache *cache, void *object,
> - gfp_t flags, bool is_kmalloc)
> +static void save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
> {
> struct kasan_alloc_meta *alloc_meta;
>
> - /* Don't save alloc info for kmalloc caches in kasan_slab_alloc(). */
> - if (cache->kasan_info.is_kmalloc && !is_kmalloc)
> - return;
> -
> alloc_meta = kasan_get_alloc_meta(cache, object);
> if (alloc_meta)
> kasan_set_track(&alloc_meta->alloc_track, flags);
> @@ -466,8 +461,8 @@ void * __must_check __kasan_slab_alloc(struct kmem_cache *cache,
> kasan_unpoison(tagged_object, cache->object_size, init);
>
> /* Save alloc info (if possible) for non-kmalloc() allocations. */
> - if (kasan_stack_collection_enabled())
> - save_alloc_info(cache, (void *)object, flags, false);
> + if (kasan_stack_collection_enabled() && !cache->kasan_info.is_kmalloc)
> + save_alloc_info(cache, (void *)object, flags);
>
> return tagged_object;
> }
> @@ -512,8 +507,8 @@ static inline void *____kasan_kmalloc(struct kmem_cache *cache,
> * Save alloc info (if possible) for kmalloc() allocations.
> * This also rewrites the alloc info when called from kasan_krealloc().
> */
> - if (kasan_stack_collection_enabled())
> - save_alloc_info(cache, (void *)object, flags, true);
> + if (kasan_stack_collection_enabled() && cache->kasan_info.is_kmalloc)
> + save_alloc_info(cache, (void *)object, flags);
>
> /* Keep the tag that was set by kasan_slab_alloc(). */
> return (void *)object;
> --
> 2.25.1
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/ad7b6cfa3fbe10d2d9c4d15a9d30c2db9a41362c.1655150842.git.andreyknvl%40google.com.

2022-07-18 22:46:57

by Andrey Konovalov

Subject: Re: [PATCH 06/32] kasan: introduce kasan_print_aux_stacks

On Fri, Jun 17, 2022 at 1:35 PM Marco Elver <[email protected]> wrote:
>
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index aa6b43936f8d..bcea5ed15631 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -265,6 +265,12 @@ void kasan_print_address_stack_frame(const void *addr);
> > static inline void kasan_print_address_stack_frame(const void *addr) { }
> > #endif
> >
> > +#ifdef CONFIG_KASAN_GENERIC
> > +void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object);
> > +#else
> > +static inline void kasan_print_aux_stacks(struct kmem_cache *cache, const void *object) { }
> > +#endif
>
> Why not put this into one of the existing "#ifdef
> CONFIG_KASAN_GENERIC" blocks? There are several; probably the one 10
> lines down might be ok?

The idea was to group functions based on their purpose, not on which
mode uses them. Here, kasan_print_aux_stacks() is related to printing
reports, so it goes next to other such functions. We could rework the
order of functions in this file, but I'd rather keep it as is in this
change. Thanks!

2022-07-18 22:55:10

by Andrey Konovalov

Subject: Re: [PATCH 00/32] kasan: switch tag-based modes to stack ring from per-object metadata

On Fri, Jun 17, 2022 at 11:32 AM Marco Elver <[email protected]> wrote:
>
> > The disadvantage:
> >
> > - If the affected object was allocated/freed long before the bug happened
> > and the stack trace events were purged from the stack ring, the report
> > will have no stack traces.
>
> Do you have statistics on how likely this is? Maybe through
> identifying what the average lifetime of an entry in the stack ring is?
>
> How bad is this for very long lived objects (e.g. pagecache)?

I ran a test on Pixel 6: the stack ring of size (32 << 10) gets fully
rewritten every ~2.7 seconds during boot. Any buggy object that is
allocated/freed and then accessed after a longer time span than that
will have no stack traces in the report.

This can be dealt with by increasing the stack ring size, but this
comes down to how much memory one is willing to allocate for the stack
ring. If we decide to use sampling (saving stack traces only for every
Nth object), that will affect this too.
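
For a rough sense of scale (back-of-envelope only; the ~16 bytes per entry
below is an assumed value, not a number taken from the patches):

    32768 entries / ~2.7 s    ~= 12000 recorded alloc/free events per second
    32768 entries * 16 bytes  ~= 512 KiB for the ring itself
    doubling the ring size    => roughly 2x the retention window at 2x the
                                 memory cost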

But any object that is allocated once during boot will be purged out
of the stack ring sooner or later. One could argue that such objects
are usually allocated at a single known place, so having a stack trace
won't considerably improve the report.

I would say that we need to deploy some solution, study the reports,
and adjust the implementation based on that.

> > Discussion
> > ==========
> >
> > The current implementation of the stack ring uses a single ring buffer for
> > the whole kernel. This might lead to contention due to atomic accesses to
> > the ring buffer index on multicore systems.
> >
> > It is unclear to me whether the performance impact from this contention
> > is significant compared to the slowdown introduced by collecting stack
> > traces.
>
> I agree, but once stack trace collection becomes faster (per your future
> plans below), this might need to be revisited.

Ack.

Thanks!

2022-07-18 22:57:02

by Andrey Konovalov

Subject: Re: [PATCH 24/32] kasan: move kasan_addr_to_slab to common.c

On Wed, Jun 15, 2022 at 3:28 PM kernel test robot <[email protected]> wrote:
>
> Hi,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on akpm-mm/mm-everything]
> [also build test WARNING on linus/master v5.19-rc2 next-20220615]
> [cannot apply to vbabka-slab/for-next]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url: https://github.com/intel-lab-lkp/linux/commits/andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
> base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> config: s390-allyesconfig (https://download.01.org/0day-ci/archive/20220615/[email protected]/config)
> compiler: s390-linux-gcc (GCC) 11.3.0
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # https://github.com/intel-lab-lkp/linux/commit/b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
> git remote add linux-review https://github.com/intel-lab-lkp/linux
> git fetch --no-tags linux-review andrey-konovalov-linux-dev/kasan-switch-tag-based-modes-to-stack-ring-from-per-object-metadata/20220614-042239
> git checkout b0b10a57b2d9a5e5ae5d7ca62046b9774df1a88f
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash mm/kasan/
>
> If you fix the issue, kindly add following tag where applicable
> Reported-by: kernel test robot <[email protected]>
>
> All warnings (new ones prefixed by >>):
>
> mm/kasan/common.c: In function 'kasan_addr_to_slab':
> >> mm/kasan/common.c:35:19: warning: ordered comparison of pointer with null pointer [-Wextra]
> 35 | if ((addr >= (void *)PAGE_OFFSET) && (addr < high_memory))
> | ^~
> mm/kasan/common.c: In function '____kasan_slab_free':
> mm/kasan/common.c:202:12: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
> 202 | u8 tag;
> | ^~~

Will fix both in v2. Thanks!
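
For reference, one possible way to address the pointer comparison warning
is to do the range check on unsigned long instead of on pointers. This is
only a sketch; the actual change in v2 may look different:

struct slab *kasan_addr_to_slab(const void *addr)
{
        /*
         * Comparing unsigned longs avoids the ordered comparison with a
         * null pointer that -Wextra reports when PAGE_OFFSET is 0.
         */
        if ((unsigned long)addr >= PAGE_OFFSET &&
            (unsigned long)addr < (unsigned long)high_memory)
                return virt_to_slab(addr);
        return NULL;
}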