2023-10-23 16:23:18

by andrey.konovalov

Subject: [PATCH v3 00/19] stackdepot: allow evicting stack traces

From: Andrey Konovalov <[email protected]>

Currently, the stack depot grows indefinitely until it reaches its
capacity. Once that happens, the stack depot stops saving new stack
traces.

This creates a problem for using the stack depot for in-field testing
and in production.

For such uses, an ideal stack trace storage should:

1. Allow saving fresh stack traces on systems with a large uptime while
limiting the amount of memory used to store the traces;
2. Have a low performance impact.

Implementing #1 in the stack depot is impossible with the current
keep-forever approach. This series aims to address that. Issue #2 is
left to be addressed in a future series.

This series changes the stack depot implementation to allow evicting
unneeded stack traces from the stack depot. The users of the stack depot
can do that via the new stack_depot_save_flags(STACK_DEPOT_FLAG_GET) and
stack_depot_put APIs.
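
As a sketch of the intended usage pattern (the surrounding code here is
illustrative only; the stack depot calls are the ones introduced by this
series):

  unsigned long entries[16];
  unsigned int nr_entries;
  depot_stack_handle_t handle;

  nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);

  /* Save the trace and take a reference on the stack record. */
  handle = stack_depot_save_flags(entries, nr_entries, GFP_KERNEL,
                                  STACK_DEPOT_FLAG_CAN_ALLOC |
                                  STACK_DEPOT_FLAG_GET);

  /* ... use the handle ... */

  /* Drop the reference; the record can be evicted once it hits zero. */
  if (handle)
          stack_depot_put(handle);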

Internal changes to the stack depot code include:

1. Storing stack traces in fixed-frame-sized slots; the slot size is
controlled via CONFIG_STACKDEPOT_MAX_FRAMES (vs precisely-sized
slots in the current implementation);
2. Keeping available slots in a freelist (vs keeping an offset to the next
free slot);
3. Using a read/write lock for synchronization (vs a lock-free approach
combined with a spinlock).

This series also integrates the eviction functionality in the tag-based
KASAN modes.

Despite wasting some space on rounding up the size of each stack record,
with CONFIG_STACKDEPOT_MAX_FRAMES=32, the tag-based KASAN modes end up
consuming ~5% less memory in stack depot during boot (with the default
stack ring size of 32k entries). The reason for this is the eviction of
irrelevant stack traces from the stack depot, which frees up space for
other stack traces.

For other tools that heavily rely on the stack depot, like Generic KASAN
and KMSAN, this change leads to the stack depot capacity being reached
sooner than before. However, as these tools are mainly used in fuzzing
scenarios where the kernel is frequently rebooted, this outcome should
be acceptable.

There is no measurable boot time performance impact of these changes for
KASAN on x86-64. I haven't done any tests for the arm64 modes (the stack
depot without performance optimizations is not suitable for the intended
use of those modes anyway), but I expect a similar result. Obtaining and
copying stack trace frames when saving them into stack depot is what takes
the most time.

This series does not yet provide a way to configure the maximum size of
the stack depot externally (e.g. via a command-line parameter). This will
be added in a separate series, possibly together with the performance
improvement changes.

---

Changes v2->v3:
- Fix null-ptr-deref by using the proper number of entries for
initializing the stack table when alloc_large_system_hash()
auto-calculates the number (see patch #12).
- Keep STACKDEPOT/STACKDEPOT_ALWAYS_INIT Kconfig options not configurable
by users.
- Use lockdep_assert_held_read annotation in depot_fetch_stack.
- WARN_ON invalid flags in stack_depot_save_flags.
- Moved "../slab.h" include in mm/kasan/report_tags.c in the right patch.
- Various comment fixes.

Changes v1->v2:
- Rework API to stack_depot_save_flags(STACK_DEPOT_FLAG_GET) +
stack_depot_put.
- Add CONFIG_STACKDEPOT_MAX_FRAMES Kconfig option.
- Switch stack depot to using list_head's.
- Assorted minor changes, see the commit message for each patch.

Andrey Konovalov (19):
lib/stackdepot: check disabled flag when fetching
lib/stackdepot: simplify __stack_depot_save
lib/stackdepot: drop valid bit from handles
lib/stackdepot: add depot_fetch_stack helper
lib/stackdepot: use fixed-sized slots for stack records
lib/stackdepot: fix and clean-up atomic annotations
lib/stackdepot: rework helpers for depot_alloc_stack
lib/stackdepot: rename next_pool_required to new_pool_required
lib/stackdepot: store next pool pointer in new_pool
lib/stackdepot: store free stack records in a freelist
lib/stackdepot: use read/write lock
lib/stackdepot: use list_head for stack record links
kmsan: use stack_depot_save instead of __stack_depot_save
lib/stackdepot, kasan: add flags to __stack_depot_save and rename
lib/stackdepot: add refcount for records
lib/stackdepot: allow users to evict stack traces
kasan: remove atomic accesses to stack ring entries
kasan: check object_size in kasan_complete_mode_report_info
kasan: use stack_depot_put for tag-based modes

include/linux/stackdepot.h | 59 ++++--
lib/Kconfig | 10 +
lib/stackdepot.c | 418 ++++++++++++++++++++++++-------------
mm/kasan/common.c | 7 +-
mm/kasan/generic.c | 9 +-
mm/kasan/kasan.h | 2 +-
mm/kasan/report_tags.c | 27 +--
mm/kasan/tags.c | 24 ++-
mm/kmsan/core.c | 7 +-
9 files changed, 365 insertions(+), 198 deletions(-)

--
2.25.1


2023-10-23 16:23:19

by andrey.konovalov

Subject: [PATCH v3 01/19] lib/stackdepot: check disabled flag when fetching

From: Andrey Konovalov <[email protected]>

Do not try fetching a stack trace from the stack depot if the
stack_depot_disabled flag is enabled.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/stackdepot.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 2f5aa851834e..3a945c7206f3 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -477,7 +477,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
*/
kmsan_unpoison_memory(entries, sizeof(*entries));

- if (!handle)
+ if (!handle || stack_depot_disabled)
return 0;

if (parts.pool_index > pool_index_cached) {
--
2.25.1

2023-10-23 16:23:30

by andrey.konovalov

Subject: [PATCH v3 05/19] lib/stackdepot: use fixed-sized slots for stack records

From: Andrey Konovalov <[email protected]>

Instead of storing stack records in stack depot pools one right after
another, use fixed-sized slots.

Add a new Kconfig option STACKDEPOT_MAX_FRAMES that allows selecting
the size of the slot in frames. Use 64 as the default value, which is
the maximum stack trace size both KASAN and KMSAN use right now.

Also add descriptions for other stack depot Kconfig options.

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.
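
As a rough size estimate, assuming a 64-bit kernel, 4 KiB pages, and the
default STACKDEPOT_MAX_FRAMES=64 (illustrative arithmetic, not part of the
patch itself):

  sizeof(struct stack_record) = 8 (next) + 4 (hash) + 4 (size)
                                + 4 (handle) + 4 (padding)
                                + 64 * 8 (entries)      = 536 bytes
  DEPOT_STACK_RECORD_SIZE     = ALIGN(536, 1 << 4)      = 544 bytes
  DEPOT_POOL_SIZE             = 4 pages                 = 16384 bytes
  records per pool            = 16384 / 544             = 30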

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Keep previously existing Kconfig options not configurable by users.

Changes v1->v2:
- Add and use STACKDEPOT_MAX_FRAMES Kconfig option.
---
lib/Kconfig | 10 ++++++++++
lib/stackdepot.c | 13 +++++++++----
2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/lib/Kconfig b/lib/Kconfig
index c686f4adc124..5f3fa3659fa9 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -710,10 +710,20 @@ config ARCH_STACKWALK
config STACKDEPOT
bool
select STACKTRACE
+ help
+ Stack depot: stack trace storage that avoids duplication

config STACKDEPOT_ALWAYS_INIT
bool
select STACKDEPOT
+ help
+ Always initialize stack depot during early boot
+
+config STACKDEPOT_MAX_FRAMES
+ int "Maximum number of frames in trace saved in stack depot"
+ range 1 256
+ default 64
+ depends on STACKDEPOT

config REF_TRACKER
bool
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 9a004f15f59d..128ece21afe9 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -58,9 +58,12 @@ struct stack_record {
u32 hash; /* Hash in the hash table */
u32 size; /* Number of stored frames */
union handle_parts handle;
- unsigned long entries[]; /* Variable-sized array of frames */
+ unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */
};

+#define DEPOT_STACK_RECORD_SIZE \
+ ALIGN(sizeof(struct stack_record), 1 << DEPOT_STACK_ALIGN)
+
static bool stack_depot_disabled;
static bool __stack_depot_early_init_requested __initdata = IS_ENABLED(CONFIG_STACKDEPOT_ALWAYS_INIT);
static bool __stack_depot_early_init_passed __initdata;
@@ -258,9 +261,7 @@ static struct stack_record *
depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
{
struct stack_record *stack;
- size_t required_size = struct_size(stack, entries, size);
-
- required_size = ALIGN(required_size, 1 << DEPOT_STACK_ALIGN);
+ size_t required_size = DEPOT_STACK_RECORD_SIZE;

/* Check if there is not enough space in the current pool. */
if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) {
@@ -295,6 +296,10 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
if (stack_pools[pool_index] == NULL)
return NULL;

+ /* Limit number of saved frames to CONFIG_STACKDEPOT_MAX_FRAMES. */
+ if (size > CONFIG_STACKDEPOT_MAX_FRAMES)
+ size = CONFIG_STACKDEPOT_MAX_FRAMES;
+
/* Save the stack trace. */
stack = stack_pools[pool_index] + pool_offset;
stack->hash = hash;
--
2.25.1

2023-10-23 16:23:37

by andrey.konovalov

Subject: [PATCH v3 03/19] lib/stackdepot: drop valid bit from handles

From: Andrey Konovalov <[email protected]>

Stack depot doesn't use the valid bit in handles in any way, so drop it.
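
For reference, with the valid bit gone, the 32 handle bits now split as
follows (assuming 4 KiB pages, i.e. PAGE_SHIFT == 12):

  DEPOT_OFFSET_BITS     = DEPOT_POOL_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN
                        = 2 + 12 - 4 = 10
  STACK_DEPOT_EXTRA_BITS             =  5
  DEPOT_POOL_INDEX_BITS = 32 - 10 - 5 = 17

Since 1 << 17 exceeds DEPOT_POOLS_CAP (8192), DEPOT_MAX_POOLS stays capped
at 8192.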

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/stackdepot.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 0772125efe8a..482eac40791e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -32,13 +32,12 @@

#define DEPOT_HANDLE_BITS (sizeof(depot_stack_handle_t) * 8)

-#define DEPOT_VALID_BITS 1
#define DEPOT_POOL_ORDER 2 /* Pool size order, 4 pages */
#define DEPOT_POOL_SIZE (1LL << (PAGE_SHIFT + DEPOT_POOL_ORDER))
#define DEPOT_STACK_ALIGN 4
#define DEPOT_OFFSET_BITS (DEPOT_POOL_ORDER + PAGE_SHIFT - DEPOT_STACK_ALIGN)
-#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_VALID_BITS - \
- DEPOT_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
+#define DEPOT_POOL_INDEX_BITS (DEPOT_HANDLE_BITS - DEPOT_OFFSET_BITS - \
+ STACK_DEPOT_EXTRA_BITS)
#define DEPOT_POOLS_CAP 8192
#define DEPOT_MAX_POOLS \
(((1LL << (DEPOT_POOL_INDEX_BITS)) < DEPOT_POOLS_CAP) ? \
@@ -50,7 +49,6 @@ union handle_parts {
struct {
u32 pool_index : DEPOT_POOL_INDEX_BITS;
u32 offset : DEPOT_OFFSET_BITS;
- u32 valid : DEPOT_VALID_BITS;
u32 extra : STACK_DEPOT_EXTRA_BITS;
};
};
@@ -303,7 +301,6 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
stack->size = size;
stack->handle.pool_index = pool_index;
stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
- stack->handle.valid = 1;
stack->handle.extra = 0;
memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
pool_offset += required_size;
--
2.25.1

2023-10-23 16:23:39

by andrey.konovalov

Subject: [PATCH v3 04/19] lib/stackdepot: add depot_fetch_stack helper

From: Andrey Konovalov <[email protected]>

Add a helper depot_fetch_stack function that fetches the pointer to
a stack record.

With this change, all static depot_* functions now operate on stack pools
and the exported stack_depot_* functions operate on the hash table.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Minor comment fix as suggested by Alexander.
---
lib/stackdepot.c | 45 ++++++++++++++++++++++++++++-----------------
1 file changed, 28 insertions(+), 17 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 482eac40791e..9a004f15f59d 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -304,6 +304,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
stack->handle.extra = 0;
memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
pool_offset += required_size;
+
/*
* Let KMSAN know the stored stack record is initialized. This shall
* prevent false positive reports if instrumented code accesses it.
@@ -313,6 +314,32 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
return stack;
}

+static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
+{
+ union handle_parts parts = { .handle = handle };
+ /*
+ * READ_ONCE pairs with potential concurrent write in
+ * depot_alloc_stack().
+ */
+ int pool_index_cached = READ_ONCE(pool_index);
+ void *pool;
+ size_t offset = parts.offset << DEPOT_STACK_ALIGN;
+ struct stack_record *stack;
+
+ if (parts.pool_index > pool_index_cached) {
+ WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
+ parts.pool_index, pool_index_cached, handle);
+ return NULL;
+ }
+
+ pool = stack_pools[parts.pool_index];
+ if (!pool)
+ return NULL;
+
+ stack = pool + offset;
+ return stack;
+}
+
/* Calculates the hash for a stack. */
static inline u32 hash_stack(unsigned long *entries, unsigned int size)
{
@@ -456,14 +483,6 @@ EXPORT_SYMBOL_GPL(stack_depot_save);
unsigned int stack_depot_fetch(depot_stack_handle_t handle,
unsigned long **entries)
{
- union handle_parts parts = { .handle = handle };
- /*
- * READ_ONCE pairs with potential concurrent write in
- * depot_alloc_stack.
- */
- int pool_index_cached = READ_ONCE(pool_index);
- void *pool;
- size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;

*entries = NULL;
@@ -476,15 +495,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
if (!handle || stack_depot_disabled)
return 0;

- if (parts.pool_index > pool_index_cached) {
- WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pool_index_cached, handle);
- return 0;
- }
- pool = stack_pools[parts.pool_index];
- if (!pool)
- return 0;
- stack = pool + offset;
+ stack = depot_fetch_stack(handle);

*entries = stack->entries;
return stack->size;
--
2.25.1

2023-10-23 16:25:29

by andrey.konovalov

Subject: [PATCH v3 13/19] kmsan: use stack_depot_save instead of __stack_depot_save

From: Andrey Konovalov <[email protected]>

Make KMSAN use stack_depot_save instead of __stack_depot_save,
as it always passes true to __stack_depot_save as the last argument.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- This is a new patch.
---
mm/kmsan/core.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
index 3adb4c1d3b19..5d942f19d12a 100644
--- a/mm/kmsan/core.c
+++ b/mm/kmsan/core.c
@@ -76,7 +76,7 @@ depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags,
/* Don't sleep. */
flags &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);

- handle = __stack_depot_save(entries, nr_entries, flags, true);
+ handle = stack_depot_save(entries, nr_entries, flags);
return stack_depot_set_extra_bits(handle, extra);
}

@@ -250,11 +250,10 @@ depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id)
/*
* @entries is a local var in non-instrumented code, so KMSAN does not
* know it is initialized. Explicitly unpoison it to avoid false
- * positives when __stack_depot_save() passes it to instrumented code.
+ * positives when stack_depot_save() passes it to instrumented code.
*/
kmsan_internal_unpoison_memory(entries, sizeof(entries), false);
- handle = __stack_depot_save(entries, ARRAY_SIZE(entries), __GFP_HIGH,
- true);
+ handle = stack_depot_save(entries, ARRAY_SIZE(entries), __GFP_HIGH);
return stack_depot_set_extra_bits(handle, extra_bits);
}

--
2.25.1

2023-10-23 16:25:55

by andrey.konovalov

Subject: [PATCH v3 15/19] lib/stackdepot: add refcount for records

From: Andrey Konovalov <[email protected]>

Add a reference counter for how many times a stack record has been added
to the stack depot.

Add a new STACK_DEPOT_FLAG_GET flag to stack_depot_save_flags that
instructs the stack depot to increment the refcount.

Do not yet decrement the refcount; this is implemented in one of the
following patches.

Do not yet enable any users to use the flag to avoid overflowing the
refcount.

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.
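
As a sketch of the counting semantics (a hypothetical call sequence, for
illustration only):

  /* New record: depot_alloc_stack() initializes the refcount to 1. */
  h = stack_depot_save_flags(entries, n, gfp, STACK_DEPOT_FLAG_GET);

  /* Existing record found: STACK_DEPOT_FLAG_GET increments the refcount. */
  h = stack_depot_save_flags(entries, n, gfp, STACK_DEPOT_FLAG_GET);

  /* Without the flag, a duplicate save leaves the refcount untouched. */
  h = stack_depot_save_flags(entries, n, gfp, 0);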

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Add forgotten refcount_inc() under write lock.
- Add STACK_DEPOT_FLAG_GET flag for stack_depot_save_flags.
---
include/linux/stackdepot.h | 13 ++++++++++---
lib/stackdepot.c | 12 ++++++++++--
2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 0b262e14144e..611716702d73 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -39,8 +39,9 @@ typedef u32 depot_flags_t;
* to its declaration for more details.
*/
#define STACK_DEPOT_FLAG_CAN_ALLOC ((depot_flags_t)0x0001)
+#define STACK_DEPOT_FLAG_GET ((depot_flags_t)0x0002)

-#define STACK_DEPOT_FLAGS_NUM 1
+#define STACK_DEPOT_FLAGS_NUM 2
#define STACK_DEPOT_FLAGS_MASK ((depot_flags_t)((1 << STACK_DEPOT_FLAGS_NUM) - 1))

/*
@@ -94,6 +95,9 @@ static inline int stack_depot_early_init(void) { return 0; }
* flags of @alloc_flags). Otherwise, stack depot avoids any allocations and
* fails if no space is left to store the stack trace.
*
+ * If STACK_DEPOT_FLAG_GET is set in @depot_flags, stack depot will increment
+ * the refcount on the saved stack trace if it already exists in stack depot.
+ *
* If the provided stack trace comes from the interrupt context, only the part
* up to the interrupt entry is saved.
*
@@ -116,8 +120,11 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
* @nr_entries: Number of frames in the stack
* @alloc_flags: Allocation GFP flags
*
- * Context: Contexts where allocations via alloc_pages() are allowed.
- * See stack_depot_save_flags() for more details.
+ * Does not increment the refcount on the saved stack trace; see
+ * stack_depot_save_flags() for more details.
+ *
+ * Context: Contexts where allocations via alloc_pages() are allowed;
+ * see stack_depot_save_flags() for more details.
*
* Return: Handle of the stack trace stored in depot, 0 on failure
*/
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 902d69d3ee30..278ed646e418 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -23,6 +23,7 @@
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/printk.h>
+#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/stacktrace.h>
@@ -60,6 +61,7 @@ struct stack_record {
u32 hash; /* Hash in hash table */
u32 size; /* Number of stored frames */
union handle_parts handle;
+ refcount_t count;
unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */
};

@@ -367,6 +369,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
stack->hash = hash;
stack->size = size;
/* stack->handle is already filled in by depot_init_pool(). */
+ refcount_set(&stack->count, 1);
memcpy(stack->entries, entries, flex_array_size(stack, entries, size));

/*
@@ -483,6 +486,8 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
/* Fast path: look the stack trace up without full locking. */
found = find_stack(bucket, entries, nr_entries, hash);
if (found) {
+ if (depot_flags & STACK_DEPOT_FLAG_GET)
+ refcount_inc(&found->count);
read_unlock_irqrestore(&pool_rwlock, flags);
goto exit;
}
@@ -522,12 +527,15 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
list_add(&new->list, bucket);
found = new;
}
- } else if (prealloc) {
+ } else {
+ if (depot_flags & STACK_DEPOT_FLAG_GET)
+ refcount_inc(&found->count);
/*
* Stack depot already contains this stack trace, but let's
* keep the preallocated memory for future.
*/
- depot_keep_new_pool(&prealloc);
+ if (prealloc)
+ depot_keep_new_pool(&prealloc);
}

write_unlock_irqrestore(&pool_rwlock, flags);
--
2.25.1

2023-10-23 16:26:02

by andrey.konovalov

Subject: [PATCH v3 12/19] lib/stackdepot: use list_head for stack record links

From: Andrey Konovalov <[email protected]>

Switch stack_record to use list_head for links in the hash table
and in the freelist.

This will allow removing entries from the hash table buckets.

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.
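
The benefit of list_head here is that unlinking needs only the entry
itself, which is exactly what eviction will rely on. In outline, the
operations this patch switches to (simplified):

  list_add(&stack->list, bucket);       /* link into a hash table bucket */
  list_del(&stack->list);               /* unlink without walking the list */
  list_add(&stack->list, &free_stacks); /* push onto the freelist */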

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Use the proper number of entries for initializing the stack table when
alloc_large_system_hash() auto-calculates the number.

Changes v1->v2:
- Use list_head instead of open-coding backward links.
---
lib/stackdepot.c | 87 ++++++++++++++++++++++++++++--------------------
1 file changed, 50 insertions(+), 37 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index b1ceade0acc9..85fd40c63817 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -18,6 +18,7 @@
#include <linux/jhash.h>
#include <linux/kernel.h>
#include <linux/kmsan.h>
+#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
@@ -55,7 +56,7 @@ union handle_parts {
};

struct stack_record {
- struct stack_record *next; /* Link in hash table or freelist */
+ struct list_head list; /* Links in hash table or freelist */
u32 hash; /* Hash in hash table */
u32 size; /* Number of stored frames */
union handle_parts handle;
@@ -77,21 +78,21 @@ static bool __stack_depot_early_init_passed __initdata;
/* Initial seed for jhash2. */
#define STACK_HASH_SEED 0x9747b28c

-/* Hash table of pointers to stored stack traces. */
-static struct stack_record **stack_table;
+/* Hash table of stored stack records. */
+static struct list_head *stack_table;
/* Fixed order of the number of table buckets. Used when KASAN is enabled. */
static unsigned int stack_bucket_number_order;
/* Hash mask for indexing the table. */
static unsigned int stack_hash_mask;

-/* Array of memory regions that store stack traces. */
+/* Array of memory regions that store stack records. */
static void *stack_pools[DEPOT_MAX_POOLS];
/* Newly allocated pool that is not yet added to stack_pools. */
static void *new_pool;
/* Number of pools in stack_pools. */
static int pools_num;
-/* Next stack in the freelist of stack records within stack_pools. */
-static struct stack_record *next_stack;
+/* Freelist of stack records within stack_pools. */
+static LIST_HEAD(free_stacks);
/*
* Stack depot tries to keep an extra pool allocated even before it runs out
* of space in the currently used pool. This flag marks whether this extra pool
@@ -123,6 +124,15 @@ void __init stack_depot_request_early_init(void)
__stack_depot_early_init_requested = true;
}

+/* Initialize list_head's within the hash table. */
+static void init_stack_table(unsigned long entries)
+{
+ unsigned long i;
+
+ for (i = 0; i < entries; i++)
+ INIT_LIST_HEAD(&stack_table[i]);
+}
+
/* Allocates a hash table via memblock. Can only be used during early boot. */
int __init stack_depot_early_init(void)
{
@@ -146,16 +156,16 @@ int __init stack_depot_early_init(void)

/*
* If stack_bucket_number_order is not set, leave entries as 0 to rely
- * on the automatic calculations performed by alloc_large_system_hash.
+ * on the automatic calculations performed by alloc_large_system_hash().
*/
if (stack_bucket_number_order)
entries = 1UL << stack_bucket_number_order;
pr_info("allocating hash table via alloc_large_system_hash\n");
stack_table = alloc_large_system_hash("stackdepot",
- sizeof(struct stack_record *),
+ sizeof(struct list_head),
entries,
STACK_HASH_TABLE_SCALE,
- HASH_EARLY | HASH_ZERO,
+ HASH_EARLY,
NULL,
&stack_hash_mask,
1UL << STACK_BUCKET_NUMBER_ORDER_MIN,
@@ -165,6 +175,14 @@ int __init stack_depot_early_init(void)
stack_depot_disabled = true;
return -ENOMEM;
}
+ if (!entries) {
+ /*
+ * Obtain the number of entries that was calculated by
+ * alloc_large_system_hash().
+ */
+ entries = stack_hash_mask + 1;
+ }
+ init_stack_table(entries);

return 0;
}
@@ -205,7 +223,7 @@ int stack_depot_init(void)
entries = 1UL << STACK_BUCKET_NUMBER_ORDER_MAX;

pr_info("allocating hash table of %lu entries via kvcalloc\n", entries);
- stack_table = kvcalloc(entries, sizeof(struct stack_record *), GFP_KERNEL);
+ stack_table = kvcalloc(entries, sizeof(struct list_head), GFP_KERNEL);
if (!stack_table) {
pr_err("hash table allocation failed, disabling\n");
stack_depot_disabled = true;
@@ -213,6 +231,7 @@ int stack_depot_init(void)
goto out_unlock;
}
stack_hash_mask = entries - 1;
+ init_stack_table(entries);

out_unlock:
mutex_unlock(&stack_depot_init_mutex);
@@ -224,31 +243,24 @@ EXPORT_SYMBOL_GPL(stack_depot_init);
/* Initializes a stack depot pool. */
static void depot_init_pool(void *pool)
{
- const int records_in_pool = DEPOT_POOL_SIZE / DEPOT_STACK_RECORD_SIZE;
- int i, offset;
+ int offset;

lockdep_assert_held_write(&pool_rwlock);

- /* Initialize handles and link stack records to each other. */
- for (i = 0, offset = 0;
- offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
- i++, offset += DEPOT_STACK_RECORD_SIZE) {
+ WARN_ON(!list_empty(&free_stacks));
+
+ /* Initialize handles and link stack records into the freelist. */
+ for (offset = 0; offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
+ offset += DEPOT_STACK_RECORD_SIZE) {
struct stack_record *stack = pool + offset;

stack->handle.pool_index = pools_num;
stack->handle.offset = offset >> DEPOT_STACK_ALIGN;
stack->handle.extra = 0;

- if (i < records_in_pool - 1)
- stack->next = (void *)stack + DEPOT_STACK_RECORD_SIZE;
- else
- stack->next = NULL;
+ list_add(&stack->list, &free_stacks);
}

- /* Link stack records into the freelist. */
- WARN_ON(next_stack);
- next_stack = pool;
-
/* Save reference to the pool to be used by depot_fetch_stack(). */
stack_pools[pools_num] = pool;
pools_num++;
@@ -289,7 +301,7 @@ static bool depot_update_pools(void **prealloc)
lockdep_assert_held_write(&pool_rwlock);

/* Check if we still have objects in the freelist. */
- if (next_stack)
+ if (!list_empty(&free_stacks))
goto out_keep_prealloc;

/* Check if we have a new pool saved and use it. */
@@ -340,19 +352,18 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
return NULL;

/* Check if we have a stack record to save the stack trace. */
- stack = next_stack;
- if (!stack)
+ if (list_empty(&free_stacks))
return NULL;

- /* Advance the freelist. */
- next_stack = stack->next;
+ /* Get and unlink the first entry from the freelist. */
+ stack = list_first_entry(&free_stacks, struct stack_record, list);
+ list_del(&stack->list);

/* Limit number of saved frames to CONFIG_STACKDEPOT_MAX_FRAMES. */
if (size > CONFIG_STACKDEPOT_MAX_FRAMES)
size = CONFIG_STACKDEPOT_MAX_FRAMES;

/* Save the stack trace. */
- stack->next = NULL;
stack->hash = hash;
stack->size = size;
/* stack->handle is already filled in by depot_init_pool(). */
@@ -414,15 +425,17 @@ int stackdepot_memcmp(const unsigned long *u1, const unsigned long *u2,
}

/* Finds a stack in a bucket of the hash table. */
-static inline struct stack_record *find_stack(struct stack_record *bucket,
+static inline struct stack_record *find_stack(struct list_head *bucket,
unsigned long *entries, int size,
u32 hash)
{
+ struct list_head *pos;
struct stack_record *found;

lockdep_assert_held(&pool_rwlock);

- for (found = bucket; found; found = found->next) {
+ list_for_each(pos, bucket) {
+ found = list_entry(pos, struct stack_record, list);
if (found->hash == hash &&
found->size == size &&
!stackdepot_memcmp(entries, found->entries, size))
@@ -435,7 +448,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
unsigned int nr_entries,
gfp_t alloc_flags, bool can_alloc)
{
- struct stack_record *found = NULL, **bucket;
+ struct list_head *bucket;
+ struct stack_record *found = NULL;
depot_stack_handle_t handle = 0;
struct page *page = NULL;
void *prealloc = NULL;
@@ -462,7 +476,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
read_lock_irqsave(&pool_rwlock, flags);

/* Fast path: look the stack trace up without full locking. */
- found = find_stack(*bucket, entries, nr_entries, hash);
+ found = find_stack(bucket, entries, nr_entries, hash);
if (found) {
read_unlock_irqrestore(&pool_rwlock, flags);
goto exit;
@@ -494,14 +508,13 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,

write_lock_irqsave(&pool_rwlock, flags);

- found = find_stack(*bucket, entries, nr_entries, hash);
+ found = find_stack(bucket, entries, nr_entries, hash);
if (!found) {
struct stack_record *new =
depot_alloc_stack(entries, nr_entries, hash, &prealloc);

if (new) {
- new->next = *bucket;
- *bucket = new;
+ list_add(&new->list, bucket);
found = new;
}
} else if (prealloc) {
--
2.25.1

2023-10-23 16:26:14

by andrey.konovalov

Subject: [PATCH v3 17/19] kasan: remove atomic accesses to stack ring entries

From: Andrey Konovalov <[email protected]>

Remove the atomic accesses to entry fields in save_stack_info and
kasan_complete_mode_report_info for tag-based KASAN modes.

These atomics are not required, as the read/write lock prevents the
entries from being read (in kasan_complete_mode_report_info) while being
written (in save_stack_info), and the try_cmpxchg prevents the same entry
from being rewritten (in save_stack_info) in the unlikely case of the ring
wrapping during writing.
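
In outline, the write side now looks as follows (a simplified sketch of
save_stack_info(); the reporting side takes the write lock, so holding the
read lock here already excludes it):

  read_lock_irqsave(&stack_ring.lock, flags);
  old_ptr = READ_ONCE(entry->ptr);
  /* Claim the slot; a parallel writer loses the cmpxchg and moves on. */
  if (try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR)) {
          entry->size = cache->object_size; /* plain writes, slot claimed */
          entry->ptr = object;              /* publish the entry */
  }
  read_unlock_irqrestore(&stack_ring.lock, flags);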

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- This is a new patch.
---
mm/kasan/report_tags.c | 25 +++++++------------------
mm/kasan/tags.c | 13 +++++--------
2 files changed, 12 insertions(+), 26 deletions(-)

diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 8b8bfdb3cfdb..78abdcde5da9 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -31,10 +31,6 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
unsigned long flags;
u64 pos;
struct kasan_stack_ring_entry *entry;
- void *ptr;
- u32 pid;
- depot_stack_handle_t stack;
- bool is_free;
bool alloc_found = false, free_found = false;

if ((!info->cache || !info->object) && !info->bug_type) {
@@ -61,18 +57,11 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)

entry = &stack_ring.entries[i % stack_ring.size];

- /* Paired with smp_store_release() in save_stack_info(). */
- ptr = (void *)smp_load_acquire(&entry->ptr);
-
- if (kasan_reset_tag(ptr) != info->object ||
- get_tag(ptr) != get_tag(info->access_addr))
+ if (kasan_reset_tag(entry->ptr) != info->object ||
+ get_tag(entry->ptr) != get_tag(info->access_addr))
continue;

- pid = READ_ONCE(entry->pid);
- stack = READ_ONCE(entry->stack);
- is_free = READ_ONCE(entry->is_free);
-
- if (is_free) {
+ if (entry->is_free) {
/*
* Second free of the same object.
* Give up on trying to find the alloc entry.
@@ -80,8 +69,8 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
if (free_found)
break;

- info->free_track.pid = pid;
- info->free_track.stack = stack;
+ info->free_track.pid = entry->pid;
+ info->free_track.stack = entry->stack;
free_found = true;

/*
@@ -95,8 +84,8 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
if (alloc_found)
break;

- info->alloc_track.pid = pid;
- info->alloc_track.stack = stack;
+ info->alloc_track.pid = entry->pid;
+ info->alloc_track.stack = entry->stack;
alloc_found = true;

/*
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 4fd32121b0fd..b6c017e670d8 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -121,15 +121,12 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
goto next; /* Busy slot. */

- WRITE_ONCE(entry->size, cache->object_size);
- WRITE_ONCE(entry->pid, current->pid);
- WRITE_ONCE(entry->stack, stack);
- WRITE_ONCE(entry->is_free, is_free);
+ entry->size = cache->object_size;
+ entry->pid = current->pid;
+ entry->stack = stack;
+ entry->is_free = is_free;

- /*
- * Paired with smp_load_acquire() in kasan_complete_mode_report_info().
- */
- smp_store_release(&entry->ptr, (s64)object);
+ entry->ptr = object;

read_unlock_irqrestore(&stack_ring.lock, flags);
}
--
2.25.1

2023-10-23 16:26:14

by andrey.konovalov

Subject: [PATCH v3 14/19] lib/stackdepot, kasan: add flags to __stack_depot_save and rename

From: Andrey Konovalov <[email protected]>

Change the bool can_alloc argument of __stack_depot_save to a
u32 argument that accepts a set of flags.

The following patch will add another flag to stack_depot_save_flags
besides the existing STACK_DEPOT_FLAG_CAN_ALLOC.

Also rename the function to stack_depot_save_flags, as __stack_depot_save
is a cryptic name.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- WARN_ON invalid flags in stack_depot_save_flags.

Changes v1->v2:
- This is a new patch.
---
include/linux/stackdepot.h | 36 +++++++++++++++++++++++++-----------
lib/stackdepot.c | 16 +++++++++++-----
mm/kasan/common.c | 7 ++++---
mm/kasan/generic.c | 9 +++++----
mm/kasan/kasan.h | 2 +-
mm/kasan/tags.c | 3 ++-
6 files changed, 48 insertions(+), 25 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index e58306783d8e..0b262e14144e 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -32,6 +32,17 @@ typedef u32 depot_stack_handle_t;
*/
#define STACK_DEPOT_EXTRA_BITS 5

+typedef u32 depot_flags_t;
+
+/*
+ * Flags that can be passed to stack_depot_save_flags(); see the comment next
+ * to its declaration for more details.
+ */
+#define STACK_DEPOT_FLAG_CAN_ALLOC ((depot_flags_t)0x0001)
+
+#define STACK_DEPOT_FLAGS_NUM 1
+#define STACK_DEPOT_FLAGS_MASK ((depot_flags_t)((1 << STACK_DEPOT_FLAGS_NUM) - 1))
+
/*
* Using stack depot requires its initialization, which can be done in 3 ways:
*
@@ -69,31 +80,34 @@ static inline int stack_depot_early_init(void) { return 0; }
#endif

/**
- * __stack_depot_save - Save a stack trace to stack depot
+ * stack_depot_save_flags - Save a stack trace to stack depot
*
* @entries: Pointer to the stack trace
* @nr_entries: Number of frames in the stack
* @alloc_flags: Allocation GFP flags
- * @can_alloc: Allocate stack pools (increased chance of failure if false)
+ * @depot_flags: Stack depot flags
+ *
+ * Saves a stack trace from @entries array of size @nr_entries.
*
- * Saves a stack trace from @entries array of size @nr_entries. If @can_alloc is
- * %true, stack depot can replenish the stack pools in case no space is left
- * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
- * any allocations and fails if no space is left to store the stack trace.
+ * If STACK_DEPOT_FLAG_CAN_ALLOC is set in @depot_flags, stack depot can
+ * replenish the stack pools in case no space is left (allocates using GFP
+ * flags of @alloc_flags). Otherwise, stack depot avoids any allocations and
+ * fails if no space is left to store the stack trace.
*
* If the provided stack trace comes from the interrupt context, only the part
* up to the interrupt entry is saved.
*
+ * Context: Any context, but avoiding STACK_DEPOT_FLAG_CAN_ALLOC is required if
+ * Context: Any context, but setting STACK_DEPOT_FLAG_CAN_ALLOC is required if
* alloc_pages() cannot be used from the current context. Currently
* this is the case for contexts where neither %GFP_ATOMIC nor
* %GFP_NOWAIT can be used (NMI, raw_spin_lock).
*
* Return: Handle of the stack struct stored in depot, 0 on failure
*/
-depot_stack_handle_t __stack_depot_save(unsigned long *entries,
- unsigned int nr_entries,
- gfp_t gfp_flags, bool can_alloc);
+depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ unsigned int nr_entries,
+ gfp_t gfp_flags,
+ depot_flags_t depot_flags);

/**
* stack_depot_save - Save a stack trace to stack depot
@@ -103,7 +117,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
* @alloc_flags: Allocation GFP flags
*
* Context: Contexts where allocations via alloc_pages() are allowed.
- * See __stack_depot_save() for more details.
+ * See stack_depot_save_flags() for more details.
*
* Return: Handle of the stack trace stored in depot, 0 on failure
*/
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 85fd40c63817..902d69d3ee30 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -444,19 +444,24 @@ static inline struct stack_record *find_stack(struct list_head *bucket,
return NULL;
}

-depot_stack_handle_t __stack_depot_save(unsigned long *entries,
- unsigned int nr_entries,
- gfp_t alloc_flags, bool can_alloc)
+depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ unsigned int nr_entries,
+ gfp_t alloc_flags,
+ depot_flags_t depot_flags)
{
struct list_head *bucket;
struct stack_record *found = NULL;
depot_stack_handle_t handle = 0;
struct page *page = NULL;
void *prealloc = NULL;
+ bool can_alloc = depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC;
bool need_alloc = false;
unsigned long flags;
u32 hash;

+ if (WARN_ON(depot_flags & ~STACK_DEPOT_FLAGS_MASK))
+ return 0;
+
/*
* If this stack trace is from an interrupt, including anything before
* interrupt entry usually leads to unbounded stack depot growth.
@@ -535,13 +540,14 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
handle = found->handle.handle;
return handle;
}
-EXPORT_SYMBOL_GPL(__stack_depot_save);
+EXPORT_SYMBOL_GPL(stack_depot_save_flags);

depot_stack_handle_t stack_depot_save(unsigned long *entries,
unsigned int nr_entries,
gfp_t alloc_flags)
{
- return __stack_depot_save(entries, nr_entries, alloc_flags, true);
+ return stack_depot_save_flags(entries, nr_entries, alloc_flags,
+ STACK_DEPOT_FLAG_CAN_ALLOC);
}
EXPORT_SYMBOL_GPL(stack_depot_save);

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 256930da578a..825a0240ec02 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -22,6 +22,7 @@
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/slab.h>
+#include <linux/stackdepot.h>
#include <linux/stacktrace.h>
#include <linux/string.h>
#include <linux/types.h>
@@ -37,19 +38,19 @@ struct slab *kasan_addr_to_slab(const void *addr)
return NULL;
}

-depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
+depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags)
{
unsigned long entries[KASAN_STACK_DEPTH];
unsigned int nr_entries;

nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
- return __stack_depot_save(entries, nr_entries, flags, can_alloc);
+ return stack_depot_save_flags(entries, nr_entries, flags, depot_flags);
}

void kasan_set_track(struct kasan_track *track, gfp_t flags)
{
track->pid = current->pid;
- track->stack = kasan_save_stack(flags, true);
+ track->stack = kasan_save_stack(flags, STACK_DEPOT_FLAG_CAN_ALLOC);
}

#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 4d837ab83f08..5d168c9afb32 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -25,6 +25,7 @@
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/slab.h>
+#include <linux/stackdepot.h>
#include <linux/stacktrace.h>
#include <linux/string.h>
#include <linux/types.h>
@@ -472,7 +473,7 @@ size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object)
sizeof(struct kasan_free_meta) : 0);
}

-static void __kasan_record_aux_stack(void *addr, bool can_alloc)
+static void __kasan_record_aux_stack(void *addr, depot_flags_t depot_flags)
{
struct slab *slab = kasan_addr_to_slab(addr);
struct kmem_cache *cache;
@@ -489,17 +490,17 @@ static void __kasan_record_aux_stack(void *addr, bool can_alloc)
return;

alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
- alloc_meta->aux_stack[0] = kasan_save_stack(0, can_alloc);
+ alloc_meta->aux_stack[0] = kasan_save_stack(0, depot_flags);
}

void kasan_record_aux_stack(void *addr)
{
- return __kasan_record_aux_stack(addr, true);
+ return __kasan_record_aux_stack(addr, STACK_DEPOT_FLAG_CAN_ALLOC);
}

void kasan_record_aux_stack_noalloc(void *addr)
{
- return __kasan_record_aux_stack(addr, false);
+ return __kasan_record_aux_stack(addr, 0);
}

void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d37831b8511c..3787266d9794 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -368,7 +368,7 @@ static inline void kasan_init_cache_meta(struct kmem_cache *cache, unsigned int
static inline void kasan_init_object_meta(struct kmem_cache *cache, const void *object) { }
#endif

-depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc);
+depot_stack_handle_t kasan_save_stack(gfp_t flags, depot_flags_t depot_flags);
void kasan_set_track(struct kasan_track *track, gfp_t flags);
void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags);
void kasan_save_free_info(struct kmem_cache *cache, void *object);
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 7dcfe341d48e..4fd32121b0fd 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -13,6 +13,7 @@
#include <linux/memblock.h>
#include <linux/memory.h>
#include <linux/mm.h>
+#include <linux/stackdepot.h>
#include <linux/static_key.h>
#include <linux/string.h>
#include <linux/types.h>
@@ -101,7 +102,7 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
struct kasan_stack_ring_entry *entry;
void *old_ptr;

- stack = kasan_save_stack(gfp_flags, true);
+ stack = kasan_save_stack(gfp_flags, STACK_DEPOT_FLAG_CAN_ALLOC);

/*
* Prevent save_stack_info() from modifying stack ring
--
2.25.1

2023-10-23 16:26:18

by andrey.konovalov

Subject: [PATCH v3 16/19] lib/stackdepot: allow users to evict stack traces

From: Andrey Konovalov <[email protected]>

Add stack_depot_put, a function that decrements the reference counter
on a stack record and removes it from the stack depot once the counter
reaches 0.

Internally, when removing a stack record, the function unlinks it from
the hash table bucket and returns it to the freelist.

With this change, the users of stack depot can call stack_depot_put
when keeping a stack trace in the stack depot is not needed anymore.
This avoids polluting the stack depot with irrelevant stack traces and
thus leaves more space to store the relevant ones before the stack depot
reaches its capacity.
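
The resulting lifecycle, sketched as a hypothetical call sequence (each
stack_depot_put() pairs with one STACK_DEPOT_FLAG_GET save):

  h = stack_depot_save_flags(e, n, gfp, STACK_DEPOT_FLAG_GET); /* count 1 */
  h = stack_depot_save_flags(e, n, gfp, STACK_DEPOT_FLAG_GET); /* count 2 */
  stack_depot_put(h);                                          /* count 1 */
  stack_depot_put(h); /* count 0: unlinked from the hash, back on freelist */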

Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Comments fixes as suggested by Marco.
- Add lockdep_assert annotation.
- Adapt to using list_head's.
- Rename stack_depot_evict to stack_depot_put.
---
include/linux/stackdepot.h | 14 ++++++++++++++
lib/stackdepot.c | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 49 insertions(+)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 611716702d73..a6796f178913 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -97,6 +97,8 @@ static inline int stack_depot_early_init(void) { return 0; }
*
* If STACK_DEPOT_FLAG_GET is set in @depot_flags, stack depot will increment
* the refcount on the saved stack trace if it already exists in stack depot.
+ * Users of this flag must also call stack_depot_put() when keeping the stack
+ * trace is no longer required to avoid overflowing the refcount.
*
* If the provided stack trace comes from the interrupt context, only the part
* up to the interrupt entry is saved.
@@ -162,6 +164,18 @@ void stack_depot_print(depot_stack_handle_t stack);
int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
int spaces);

+/**
+ * stack_depot_put - Drop a reference to a stack trace from stack depot
+ *
+ * @handle: Stack depot handle returned from stack_depot_save()
+ *
+ * The stack trace is evicted from stack depot once all references to it have
+ * been dropped (once the number of stack_depot_put() calls matches the
+ * number of stack_depot_save_flags() calls with STACK_DEPOT_FLAG_GET set for
+ * this stack trace).
+ */
+void stack_depot_put(depot_stack_handle_t handle);
+
/**
* stack_depot_set_extra_bits - Set extra bits in a stack depot handle
*
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 278ed646e418..3a8f045696fd 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -404,6 +404,14 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
return stack;
}

+/* Links stack into the freelist. */
+static void depot_free_stack(struct stack_record *stack)
+{
+ lockdep_assert_held_write(&pool_rwlock);
+
+ list_add(&stack->list, &free_stacks);
+}
+
/* Calculates the hash for a stack. */
static inline u32 hash_stack(unsigned long *entries, unsigned int size)
{
@@ -586,6 +594,33 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
}
EXPORT_SYMBOL_GPL(stack_depot_fetch);

+void stack_depot_put(depot_stack_handle_t handle)
+{
+ struct stack_record *stack;
+ unsigned long flags;
+
+ if (!handle || stack_depot_disabled)
+ return;
+
+ write_lock_irqsave(&pool_rwlock, flags);
+
+ stack = depot_fetch_stack(handle);
+ if (WARN_ON(!stack))
+ goto out;
+
+ if (refcount_dec_and_test(&stack->count)) {
+ /* Unlink stack from the hash table. */
+ list_del(&stack->list);
+
+ /* Free stack. */
+ depot_free_stack(stack);
+ }
+
+out:
+ write_unlock_irqrestore(&pool_rwlock, flags);
+}
+EXPORT_SYMBOL_GPL(stack_depot_put);
+
void stack_depot_print(depot_stack_handle_t stack)
{
unsigned long *entries;
--
2.25.1

2023-10-23 16:26:45

by andrey.konovalov

Subject: [PATCH v3 18/19] kasan: check object_size in kasan_complete_mode_report_info

From: Andrey Konovalov <[email protected]>

Check the object size when looking up entries in the stack ring.

If the size of the object for which a report is being printed does not
match the size of the object for which a stack trace has been saved in
the stack ring, the saved stack trace is irrelevant.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Added missing "../slab.h" include for accessing a kmem_cache field.

Changes v1->v2:
- This is a new patch.
---
mm/kasan/report_tags.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/report_tags.c b/mm/kasan/report_tags.c
index 78abdcde5da9..55154743f915 100644
--- a/mm/kasan/report_tags.c
+++ b/mm/kasan/report_tags.c
@@ -7,6 +7,7 @@
#include <linux/atomic.h>

#include "kasan.h"
+#include "../slab.h"

extern struct kasan_stack_ring stack_ring;

@@ -58,7 +59,8 @@ void kasan_complete_mode_report_info(struct kasan_report_info *info)
entry = &stack_ring.entries[i % stack_ring.size];

if (kasan_reset_tag(entry->ptr) != info->object ||
- get_tag(entry->ptr) != get_tag(info->access_addr))
+ get_tag(entry->ptr) != get_tag(info->access_addr) ||
+ info->cache->object_size != entry->size)
continue;

if (entry->is_free) {
--
2.25.1

2023-10-23 16:27:05

by andrey.konovalov

Subject: [PATCH v3 19/19] kasan: use stack_depot_put for tag-based modes

From: Andrey Konovalov <[email protected]>

Make tag-based KASAN modes evict stack traces from the stack depot once
they are evicted from the stack ring.

Internally, pass STACK_DEPOT_FLAG_GET to stack_depot_save_flags (via
kasan_save_stack) to increment the refcount when saving a new entry
to the stack ring, and call stack_depot_put when removing an entry from
the stack ring.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v1->v2:
- Adapt to the stack depot API change.
- Drop READ_ONCE when reading entry->stack.
---
mm/kasan/tags.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index b6c017e670d8..739ae997463d 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -97,12 +97,13 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
gfp_t gfp_flags, bool is_free)
{
unsigned long flags;
- depot_stack_handle_t stack;
+ depot_stack_handle_t stack, old_stack;
u64 pos;
struct kasan_stack_ring_entry *entry;
void *old_ptr;

- stack = kasan_save_stack(gfp_flags, STACK_DEPOT_FLAG_CAN_ALLOC);
+ stack = kasan_save_stack(gfp_flags,
+ STACK_DEPOT_FLAG_CAN_ALLOC | STACK_DEPOT_FLAG_GET);

/*
* Prevent save_stack_info() from modifying stack ring
@@ -121,6 +122,8 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
if (!try_cmpxchg(&entry->ptr, &old_ptr, STACK_RING_BUSY_PTR))
goto next; /* Busy slot. */

+ old_stack = entry->stack;
+
entry->size = cache->object_size;
entry->pid = current->pid;
entry->stack = stack;
@@ -129,6 +132,9 @@ static void save_stack_info(struct kmem_cache *cache, void *object,
entry->ptr = object;

read_unlock_irqrestore(&stack_ring.lock, flags);
+
+ if (old_stack)
+ stack_depot_put(old_stack);
}

void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
--
2.25.1

2023-10-23 16:30:01

by andrey.konovalov

Subject: [PATCH v3 10/19] lib/stackdepot: store free stack records in a freelist

From: Andrey Konovalov <[email protected]>

Instead of using the global pool_offset variable to find a free slot
when storing a new stack record, maintain a freelist of free slots
within the allocated stack pools.

A global next_stack variable is used as the head of the freelist, and
the next field in the stack_record struct is reused as the freelist link
(when the record is not in the freelist, this field is used as a link
in the hash table).

This is a preparatory patch for implementing the eviction of stack records
from the stack depot.
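
In outline, allocating a record now becomes a freelist pop, and the
eviction added later becomes the matching push (a sketch of the idea, not
the exact code from the diff below):

  /* Pop: take the head of the freelist. */
  stack = next_stack;
  next_stack = stack->next;

  /* Push (what eviction will need in a later patch): return a record. */
  stack->next = next_stack;
  next_stack = stack;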

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Add parentheses when referring to function calls in comments.

Changes v1->v2:
- Fix out-of-bounds when initializing a pool.
---
lib/stackdepot.c | 131 +++++++++++++++++++++++++++++------------------
1 file changed, 82 insertions(+), 49 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5315952f26ec..1067c072a0f8 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -54,8 +54,8 @@ union handle_parts {
};

struct stack_record {
- struct stack_record *next; /* Link in the hash table */
- u32 hash; /* Hash in the hash table */
+ struct stack_record *next; /* Link in hash table or freelist */
+ u32 hash; /* Hash in hash table */
u32 size; /* Number of stored frames */
union handle_parts handle;
unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */
@@ -87,10 +87,10 @@ static unsigned int stack_hash_mask;
static void *stack_pools[DEPOT_MAX_POOLS];
/* Newly allocated pool that is not yet added to stack_pools. */
static void *new_pool;
-/* Currently used pool in stack_pools. */
-static int pool_index;
-/* Offset to the unused space in the currently used pool. */
-static size_t pool_offset;
+/* Number of pools in stack_pools. */
+static int pools_num;
+/* Next stack in the freelist of stack records within stack_pools. */
+static struct stack_record *next_stack;
/* Lock that protects the variables above. */
static DEFINE_RAW_SPINLOCK(pool_lock);
/*
@@ -220,6 +220,42 @@ int stack_depot_init(void)
}
EXPORT_SYMBOL_GPL(stack_depot_init);

+/* Initializes a stack depot pool. */
+static void depot_init_pool(void *pool)
+{
+ const int records_in_pool = DEPOT_POOL_SIZE / DEPOT_STACK_RECORD_SIZE;
+ int i, offset;
+
+ /* Initialize handles and link stack records to each other. */
+ for (i = 0, offset = 0;
+ offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
+ i++, offset += DEPOT_STACK_RECORD_SIZE) {
+ struct stack_record *stack = pool + offset;
+
+ stack->handle.pool_index = pools_num;
+ stack->handle.offset = offset >> DEPOT_STACK_ALIGN;
+ stack->handle.extra = 0;
+
+ if (i < records_in_pool - 1)
+ stack->next = (void *)stack + DEPOT_STACK_RECORD_SIZE;
+ else
+ stack->next = NULL;
+ }
+
+ /* Link stack records into the freelist. */
+ WARN_ON(next_stack);
+ next_stack = pool;
+
+ /* Save reference to the pool to be used by depot_fetch_stack(). */
+ stack_pools[pools_num] = pool;
+
+ /*
+ * WRITE_ONCE() pairs with potential concurrent read in
+ * depot_fetch_stack().
+ */
+ WRITE_ONCE(pools_num, pools_num + 1);
+}
+
/* Keeps the preallocated memory to be used for a new stack depot pool. */
static void depot_keep_new_pool(void **prealloc)
{
@@ -236,7 +272,7 @@ static void depot_keep_new_pool(void **prealloc)
* Use the preallocated memory for the new pool
* as long as we do not exceed the maximum number of pools.
*/
- if (pool_index + 1 < DEPOT_MAX_POOLS) {
+ if (pools_num < DEPOT_MAX_POOLS) {
new_pool = *prealloc;
*prealloc = NULL;
}
@@ -252,45 +288,42 @@ static void depot_keep_new_pool(void **prealloc)
}

/* Updates references to the current and the next stack depot pools. */
-static bool depot_update_pools(size_t required_size, void **prealloc)
+static bool depot_update_pools(void **prealloc)
{
- /* Check if there is not enough space in the current pool. */
- if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) {
- /* Bail out if we reached the pool limit. */
- if (unlikely(pool_index + 1 >= DEPOT_MAX_POOLS)) {
- WARN_ONCE(1, "Stack depot reached limit capacity");
- return false;
- }
+ /* Check if we still have objects in the freelist. */
+ if (next_stack)
+ goto out_keep_prealloc;

- /*
- * Move on to the new pool.
- * WRITE_ONCE() pairs with potential concurrent read in
- * stack_depot_fetch().
- */
- WRITE_ONCE(pool_index, pool_index + 1);
- stack_pools[pool_index] = new_pool;
+ /* Check if we have a new pool saved and use it. */
+ if (new_pool) {
+ depot_init_pool(new_pool);
new_pool = NULL;
- pool_offset = 0;

- /*
- * If the maximum number of pools is not reached, take note
- * that yet another new pool needs to be allocated.
- * smp_store_release() pairs with smp_load_acquire() in
- * stack_depot_save().
- */
- if (pool_index + 1 < DEPOT_MAX_POOLS)
+ /* Take note that we might need a new new_pool. */
+ if (pools_num < DEPOT_MAX_POOLS)
smp_store_release(&new_pool_required, 1);
+
+ /* Try keeping the preallocated memory for new_pool. */
+ goto out_keep_prealloc;
+ }
+
+ /* Bail out if we reached the pool limit. */
+ if (unlikely(pools_num >= DEPOT_MAX_POOLS)) {
+ WARN_ONCE(1, "Stack depot reached limit capacity");
+ return false;
}

- /* Check if the current pool is not yet allocated. */
- if (*prealloc && stack_pools[pool_index] == NULL) {
- /* Use the preallocated memory for the current pool. */
- stack_pools[pool_index] = *prealloc;
+ /* Check if we have preallocated memory and use it. */
+ if (*prealloc) {
+ depot_init_pool(*prealloc);
*prealloc = NULL;
return true;
}

- /* Otherwise, try using the preallocated memory for a new pool. */
+ return false;
+
+out_keep_prealloc:
+ /* Keep the preallocated memory for a new pool if required. */
if (*prealloc)
depot_keep_new_pool(prealloc);
return true;
@@ -301,35 +334,35 @@ static struct stack_record *
depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
{
struct stack_record *stack;
- size_t required_size = DEPOT_STACK_RECORD_SIZE;

/* Update current and new pools if required and possible. */
- if (!depot_update_pools(required_size, prealloc))
+ if (!depot_update_pools(prealloc))
return NULL;

- /* Check if we have a pool to save the stack trace. */
- if (stack_pools[pool_index] == NULL)
+ /* Check if we have a stack record to save the stack trace. */
+ stack = next_stack;
+ if (!stack)
return NULL;

+ /* Advance the freelist. */
+ next_stack = stack->next;
+
/* Limit number of saved frames to CONFIG_STACKDEPOT_MAX_FRAMES. */
if (size > CONFIG_STACKDEPOT_MAX_FRAMES)
size = CONFIG_STACKDEPOT_MAX_FRAMES;

/* Save the stack trace. */
- stack = stack_pools[pool_index] + pool_offset;
+ stack->next = NULL;
stack->hash = hash;
stack->size = size;
- stack->handle.pool_index = pool_index;
- stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
- stack->handle.extra = 0;
+ /* stack->handle is already filled in by depot_init_pool(). */
memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
- pool_offset += required_size;

/*
* Let KMSAN know the stored stack record is initialized. This shall
* prevent false positive reports if instrumented code accesses it.
*/
- kmsan_unpoison_memory(stack, required_size);
+ kmsan_unpoison_memory(stack, DEPOT_STACK_RECORD_SIZE);

return stack;
}
@@ -339,16 +372,16 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
union handle_parts parts = { .handle = handle };
/*
* READ_ONCE() pairs with potential concurrent write in
- * depot_update_pools().
+ * depot_init_pool().
*/
- int pool_index_cached = READ_ONCE(pool_index);
+ int pools_num_cached = READ_ONCE(pools_num);
void *pool;
size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;

- if (parts.pool_index > pool_index_cached) {
+ if (parts.pool_index > pools_num_cached) {
WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pool_index_cached, handle);
+ parts.pool_index, pools_num_cached, handle);
return NULL;
}

--
2.25.1

2023-10-23 16:30:08

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH v3 11/19] lib/stackdepot: use read/write lock

From: Andrey Konovalov <[email protected]>

Currently, stack depot uses the following locking scheme:

1. Lock-free accesses when looking up a stack record, which allows
multiple users to look up records in parallel;
2. Spinlock for protecting the stack depot pools and the hash table
when adding a new record.

To implement the eviction of stack traces from the stack depot, the
lock-free approach will no longer work, as we also need to be able
to remove records from the hash table.

Convert the spinlock into a read/write lock, and drop the atomic accesses,
as they are no longer required.

Looking up stack traces is now protected by the read lock, and adding
new records by the write lock. One of the following patches will add a
new function for evicting stack records, which will be protected by the
write lock as well.

With this change, multiple users can still look up records in parallel.
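
Schematically, the save path now looks like this (a simplified sketch;
the actual code below also handles pool preallocation and the linking
of new records into the hash bucket):

	/* Lookup: readers do not block each other. */
	read_lock_irqsave(&pool_rwlock, flags);
	found = find_stack(*bucket, entries, nr_entries, hash);
	read_unlock_irqrestore(&pool_rwlock, flags);

	/* Insertion (and, later, eviction): writers are exclusive. */
	write_lock_irqsave(&pool_rwlock, flags);
	found = find_stack(*bucket, entries, nr_entries, hash);
	if (!found)
		found = depot_alloc_stack(entries, nr_entries, hash, &prealloc);
	write_unlock_irqrestore(&pool_rwlock, flags);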

This is a preparatory patch for implementing the eviction of stack
records from the stack depot.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Use lockdep_assert_held_read annotation in depot_fetch_stack.

Changes v1->v2:
- Add lockdep_assert annotations.
---
lib/stackdepot.c | 87 +++++++++++++++++++++++++-----------------------
1 file changed, 46 insertions(+), 41 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 1067c072a0f8..b1ceade0acc9 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -23,6 +23,7 @@
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/slab.h>
+#include <linux/spinlock.h>
#include <linux/stacktrace.h>
#include <linux/stackdepot.h>
#include <linux/string.h>
@@ -91,15 +92,15 @@ static void *new_pool;
static int pools_num;
/* Next stack in the freelist of stack records within stack_pools. */
static struct stack_record *next_stack;
-/* Lock that protects the variables above. */
-static DEFINE_RAW_SPINLOCK(pool_lock);
/*
* Stack depot tries to keep an extra pool allocated even before it runs out
* of space in the currently used pool. This flag marks whether this extra pool
* needs to be allocated. It has the value 0 when either an extra pool is not
* yet allocated or if the limit on the number of pools is reached.
*/
-static int new_pool_required = 1;
+static bool new_pool_required = true;
+/* Lock that protects the variables above. */
+static DEFINE_RWLOCK(pool_rwlock);

static int __init disable_stack_depot(char *str)
{
@@ -226,6 +227,8 @@ static void depot_init_pool(void *pool)
const int records_in_pool = DEPOT_POOL_SIZE / DEPOT_STACK_RECORD_SIZE;
int i, offset;

+ lockdep_assert_held_write(&pool_rwlock);
+
/* Initialize handles and link stack records to each other. */
for (i = 0, offset = 0;
offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
@@ -248,22 +251,17 @@ static void depot_init_pool(void *pool)

/* Save reference to the pool to be used by depot_fetch_stack(). */
stack_pools[pools_num] = pool;
-
- /*
- * WRITE_ONCE() pairs with potential concurrent read in
- * depot_fetch_stack().
- */
- WRITE_ONCE(pools_num, pools_num + 1);
+ pools_num++;
}

/* Keeps the preallocated memory to be used for a new stack depot pool. */
static void depot_keep_new_pool(void **prealloc)
{
+ lockdep_assert_held_write(&pool_rwlock);
+
/*
* If a new pool is already saved or the maximum number of
* pools is reached, do not use the preallocated memory.
- * Access new_pool_required non-atomically, as there are no concurrent
- * write accesses to this variable.
*/
if (!new_pool_required)
return;
@@ -281,15 +279,15 @@ static void depot_keep_new_pool(void **prealloc)
* At this point, either a new pool is kept or the maximum
* number of pools is reached. In either case, take note that
* keeping another pool is not required.
- * smp_store_release() pairs with smp_load_acquire() in
- * stack_depot_save().
*/
- smp_store_release(&new_pool_required, 0);
+ new_pool_required = false;
}

/* Updates references to the current and the next stack depot pools. */
static bool depot_update_pools(void **prealloc)
{
+ lockdep_assert_held_write(&pool_rwlock);
+
/* Check if we still have objects in the freelist. */
if (next_stack)
goto out_keep_prealloc;
@@ -301,7 +299,7 @@ static bool depot_update_pools(void **prealloc)

/* Take note that we might need a new new_pool. */
if (pools_num < DEPOT_MAX_POOLS)
- smp_store_release(&new_pool_required, 1);
+ new_pool_required = true;

/* Try keeping the preallocated memory for new_pool. */
goto out_keep_prealloc;
@@ -335,6 +333,8 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
{
struct stack_record *stack;

+ lockdep_assert_held_write(&pool_rwlock);
+
/* Update current and new pools if required and possible. */
if (!depot_update_pools(prealloc))
return NULL;
@@ -370,18 +370,15 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
{
union handle_parts parts = { .handle = handle };
- /*
- * READ_ONCE() pairs with potential concurrent write in
- * depot_init_pool().
- */
- int pools_num_cached = READ_ONCE(pools_num);
void *pool;
size_t offset = parts.offset << DEPOT_STACK_ALIGN;
struct stack_record *stack;

- if (parts.pool_index > pools_num_cached) {
+ lockdep_assert_held_read(&pool_rwlock);
+
+ if (parts.pool_index > pools_num) {
WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
- parts.pool_index, pools_num_cached, handle);
+ parts.pool_index, pools_num, handle);
return NULL;
}

@@ -423,6 +420,8 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
{
struct stack_record *found;

+ lockdep_assert_held(&pool_rwlock);
+
for (found = bucket; found; found = found->next) {
if (found->hash == hash &&
found->size == size &&
@@ -440,6 +439,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
depot_stack_handle_t handle = 0;
struct page *page = NULL;
void *prealloc = NULL;
+ bool need_alloc = false;
unsigned long flags;
u32 hash;

@@ -459,22 +459,26 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
hash = hash_stack(entries, nr_entries);
bucket = &stack_table[hash & stack_hash_mask];

- /*
- * Fast path: look the stack trace up without locking.
- * smp_load_acquire() pairs with smp_store_release() to |bucket| below.
- */
- found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
- if (found)
+ read_lock_irqsave(&pool_rwlock, flags);
+
+ /* Fast path: look the stack trace up without full locking. */
+ found = find_stack(*bucket, entries, nr_entries, hash);
+ if (found) {
+ read_unlock_irqrestore(&pool_rwlock, flags);
goto exit;
+ }
+
+ /* Take note if another stack pool needs to be allocated. */
+ if (new_pool_required)
+ need_alloc = true;
+
+ read_unlock_irqrestore(&pool_rwlock, flags);

/*
- * Check if another stack pool needs to be allocated. If so, allocate
- * the memory now: we won't be able to do that under the lock.
- *
- * smp_load_acquire() pairs with smp_store_release() in
- * depot_update_pools() and depot_keep_new_pool().
+ * Allocate memory for a new pool if required now:
+ * we won't be able to do that under the lock.
*/
- if (unlikely(can_alloc && smp_load_acquire(&new_pool_required))) {
+ if (unlikely(can_alloc && need_alloc)) {
/*
* Zero out zone modifiers, as we don't have specific zone
* requirements. Keep the flags related to allocation in atomic
@@ -488,7 +492,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
prealloc = page_address(page);
}

- raw_spin_lock_irqsave(&pool_lock, flags);
+ write_lock_irqsave(&pool_rwlock, flags);

found = find_stack(*bucket, entries, nr_entries, hash);
if (!found) {
@@ -497,11 +501,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,

if (new) {
new->next = *bucket;
- /*
- * smp_store_release() pairs with smp_load_acquire()
- * from |bucket| above.
- */
- smp_store_release(bucket, new);
+ *bucket = new;
found = new;
}
} else if (prealloc) {
@@ -512,7 +512,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
depot_keep_new_pool(&prealloc);
}

- raw_spin_unlock_irqrestore(&pool_lock, flags);
+ write_unlock_irqrestore(&pool_rwlock, flags);
exit:
if (prealloc) {
/* Stack depot didn't use this memory, free it. */
@@ -536,6 +536,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
unsigned long **entries)
{
struct stack_record *stack;
+ unsigned long flags;

*entries = NULL;
/*
@@ -547,8 +548,12 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
if (!handle || stack_depot_disabled)
return 0;

+ read_lock_irqsave(&pool_rwlock, flags);
+
stack = depot_fetch_stack(handle);

+ read_unlock_irqrestore(&pool_rwlock, flags);
+
*entries = stack->entries;
return stack->size;
}
--
2.25.1

2023-10-23 16:30:33

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH v3 07/19] lib/stackdepot: rework helpers for depot_alloc_stack

From: Andrey Konovalov <[email protected]>

Split code in depot_alloc_stack and depot_init_pool into 3 functions:

1. depot_keep_next_pool that keeps preallocated memory for the next pool
if required.

2. depot_update_pools that moves on to the next pool if there's no space
left in the current pool, uses preallocated memory for the new current
pool if required, and calls depot_keep_next_pool otherwise.

3. depot_alloc_stack that calls depot_update_pools and then allocates
a stack record as before.

This makes it somewhat easier to follow the logic of depot_alloc_stack
and also serves as a preparation for implementing the eviction of stack
records from the stack depot.
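
Schematically, the resulting call structure is (a condensed outline;
see the diff below for the exact logic):

	depot_alloc_stack()
	    -> depot_update_pools()        /* switch pools if the current one is full */
	           -> depot_keep_next_pool()  /* stash *prealloc for the next pool */
	    /* then carve a stack record out of the current pool */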

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

Changes v2->v3:
- Add parentheses when referring to function calls in comments.
---
lib/stackdepot.c | 86 +++++++++++++++++++++++++++---------------------
1 file changed, 49 insertions(+), 37 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 60aea549429a..3f921aaae44a 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -219,11 +219,11 @@ int stack_depot_init(void)
}
EXPORT_SYMBOL_GPL(stack_depot_init);

-/* Uses preallocated memory to initialize a new stack depot pool. */
-static void depot_init_pool(void **prealloc)
+/* Keeps the preallocated memory to be used for the next stack depot pool. */
+static void depot_keep_next_pool(void **prealloc)
{
/*
- * If the next pool is already initialized or the maximum number of
+ * If the next pool is already saved or the maximum number of
* pools is reached, do not use the preallocated memory.
* Access next_pool_required non-atomically, as there are no concurrent
* write accesses to this variable.
@@ -231,44 +231,34 @@ static void depot_init_pool(void **prealloc)
if (!next_pool_required)
return;

- /* Check if the current pool is not yet allocated. */
- if (stack_pools[pool_index] == NULL) {
- /* Use the preallocated memory for the current pool. */
- stack_pools[pool_index] = *prealloc;
+ /*
+ * Use the preallocated memory for the next pool
+ * as long as we do not exceed the maximum number of pools.
+ */
+ if (pool_index + 1 < DEPOT_MAX_POOLS) {
+ stack_pools[pool_index + 1] = *prealloc;
*prealloc = NULL;
- } else {
- /*
- * Otherwise, use the preallocated memory for the next pool
- * as long as we do not exceed the maximum number of pools.
- */
- if (pool_index + 1 < DEPOT_MAX_POOLS) {
- stack_pools[pool_index + 1] = *prealloc;
- *prealloc = NULL;
- }
- /*
- * At this point, either the next pool is initialized or the
- * maximum number of pools is reached. In either case, take
- * note that initializing another pool is not required.
- * smp_store_release() pairs with smp_load_acquire() in
- * stack_depot_save().
- */
- smp_store_release(&next_pool_required, 0);
}
+
+ /*
+ * At this point, either the next pool is kept or the maximum
+ * number of pools is reached. In either case, take note that
+ * keeping another pool is not required.
+ * smp_store_release() pairs with smp_load_acquire() in
+ * stack_depot_save().
+ */
+ smp_store_release(&next_pool_required, 0);
}

-/* Allocates a new stack in a stack depot pool. */
-static struct stack_record *
-depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
+/* Updates references to the current and the next stack depot pools. */
+static bool depot_update_pools(size_t required_size, void **prealloc)
{
- struct stack_record *stack;
- size_t required_size = DEPOT_STACK_RECORD_SIZE;
-
/* Check if there is not enough space in the current pool. */
if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) {
/* Bail out if we reached the pool limit. */
if (unlikely(pool_index + 1 >= DEPOT_MAX_POOLS)) {
WARN_ONCE(1, "Stack depot reached limit capacity");
- return NULL;
+ return false;
}

/*
@@ -278,9 +268,10 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
*/
WRITE_ONCE(pool_index, pool_index + 1);
pool_offset = 0;
+
/*
* If the maximum number of pools is not reached, take note
- * that the next pool needs to initialized.
+ * that the next pool needs to be initialized.
* smp_store_release() pairs with smp_load_acquire() in
* stack_depot_save().
*/
@@ -288,9 +279,30 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
smp_store_release(&next_pool_required, 1);
}

- /* Assign the preallocated memory to a pool if required. */
+ /* Check if the current pool is not yet allocated. */
+ if (*prealloc && stack_pools[pool_index] == NULL) {
+ /* Use the preallocated memory for the current pool. */
+ stack_pools[pool_index] = *prealloc;
+ *prealloc = NULL;
+ return true;
+ }
+
+ /* Otherwise, try using the preallocated memory for the next pool. */
if (*prealloc)
- depot_init_pool(prealloc);
+ depot_keep_next_pool(prealloc);
+ return true;
+}
+
+/* Allocates a new stack in a stack depot pool. */
+static struct stack_record *
+depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
+{
+ struct stack_record *stack;
+ size_t required_size = DEPOT_STACK_RECORD_SIZE;
+
+ /* Update current and next pools if required and possible. */
+ if (!depot_update_pools(required_size, prealloc))
+ return NULL;

/* Check if we have a pool to save the stack trace. */
if (stack_pools[pool_index] == NULL)
@@ -324,7 +336,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
union handle_parts parts = { .handle = handle };
/*
* READ_ONCE() pairs with potential concurrent write in
- * depot_alloc_stack().
+ * depot_update_pools().
*/
int pool_index_cached = READ_ONCE(pool_index);
void *pool;
@@ -424,7 +436,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
* the memory now - we won't be able to do that under the lock.
*
* smp_load_acquire() pairs with smp_store_release() in
- * depot_alloc_stack() and depot_init_pool().
+ * depot_update_pools() and depot_keep_next_pool().
*/
if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
/*
@@ -461,7 +473,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
* Stack depot already contains this stack trace, but let's
* keep the preallocated memory for the future.
*/
- depot_init_pool(&prealloc);
+ depot_keep_next_pool(&prealloc);
}

raw_spin_unlock_irqrestore(&pool_lock, flags);
--
2.25.1

2023-10-23 16:31:04

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH v3 08/19] lib/stackdepot: rename next_pool_required to new_pool_required

From: Andrey Konovalov <[email protected]>

Rename next_pool_required to new_pool_required.

This is a purely code readability change: the following patch will change
stack depot to store the pointer to the new pool in a separate variable,
and "new" seems like a more logical name.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/stackdepot.c | 49 ++++++++++++++++++++++++------------------------
1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 3f921aaae44a..7579e20114b1 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -93,12 +93,11 @@ static size_t pool_offset;
static DEFINE_RAW_SPINLOCK(pool_lock);
/*
* Stack depot tries to keep an extra pool allocated even before it runs out
- * of space in the currently used pool.
- * This flag marks that this next extra pool needs to be allocated and
- * initialized. It has the value 0 when either the next pool is not yet
- * initialized or the limit on the number of pools is reached.
+ * of space in the currently used pool. This flag marks whether this extra pool
+ * needs to be allocated. It has the value 0 when either an extra pool is not
+ * yet allocated or if the limit on the number of pools is reached.
*/
-static int next_pool_required = 1;
+static int new_pool_required = 1;

static int __init disable_stack_depot(char *str)
{
@@ -219,20 +218,20 @@ int stack_depot_init(void)
}
EXPORT_SYMBOL_GPL(stack_depot_init);

-/* Keeps the preallocated memory to be used for the next stack depot pool. */
-static void depot_keep_next_pool(void **prealloc)
+/* Keeps the preallocated memory to be used for a new stack depot pool. */
+static void depot_keep_new_pool(void **prealloc)
{
/*
- * If the next pool is already saved or the maximum number of
+ * If a new pool is already saved or the maximum number of
* pools is reached, do not use the preallocated memory.
- * Access next_pool_required non-atomically, as there are no concurrent
+ * Access new_pool_required non-atomically, as there are no concurrent
* write accesses to this variable.
*/
- if (!next_pool_required)
+ if (!new_pool_required)
return;

/*
- * Use the preallocated memory for the next pool
+ * Use the preallocated memory for the new pool
* as long as we do not exceed the maximum number of pools.
*/
if (pool_index + 1 < DEPOT_MAX_POOLS) {
@@ -241,13 +240,13 @@ static void depot_keep_next_pool(void **prealloc)
}

/*
- * At this point, either the next pool is kept or the maximum
+ * At this point, either a new pool is kept or the maximum
* number of pools is reached. In either case, take note that
* keeping another pool is not required.
* smp_store_release() pairs with smp_load_acquire() in
* stack_depot_save().
*/
- smp_store_release(&next_pool_required, 0);
+ smp_store_release(&new_pool_required, 0);
}

/* Updates references to the current and the next stack depot pools. */
@@ -262,7 +261,7 @@ static bool depot_update_pools(size_t required_size, void **prealloc)
}

/*
- * Move on to the next pool.
+ * Move on to the new pool.
* WRITE_ONCE() pairs with potential concurrent read in
* stack_depot_fetch().
*/
@@ -271,12 +270,12 @@ static bool depot_update_pools(size_t required_size, void **prealloc)

/*
* If the maximum number of pools is not reached, take note
- * that the next pool needs to be initialized.
+ * that yet another new pool needs to be allocated.
* smp_store_release() pairs with smp_load_acquire() in
* stack_depot_save().
*/
if (pool_index + 1 < DEPOT_MAX_POOLS)
- smp_store_release(&next_pool_required, 1);
+ smp_store_release(&new_pool_required, 1);
}

/* Check if the current pool is not yet allocated. */
@@ -287,9 +286,9 @@ static bool depot_update_pools(size_t required_size, void **prealloc)
return true;
}

- /* Otherwise, try using the preallocated memory for the next pool. */
+ /* Otherwise, try using the preallocated memory for a new pool. */
if (*prealloc)
- depot_keep_next_pool(prealloc);
+ depot_keep_new_pool(prealloc);
return true;
}

@@ -300,7 +299,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
struct stack_record *stack;
size_t required_size = DEPOT_STACK_RECORD_SIZE;

- /* Update current and next pools if required and possible. */
+ /* Update current and new pools if required and possible. */
if (!depot_update_pools(required_size, prealloc))
return NULL;

@@ -432,13 +431,13 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
goto exit;

/*
- * Check if another stack pool needs to be initialized. If so, allocate
- * the memory now - we won't be able to do that under the lock.
+ * Check if another stack pool needs to be allocated. If so, allocate
+ * the memory now: we won't be able to do that under the lock.
*
* smp_load_acquire() pairs with smp_store_release() in
- * depot_update_pools() and depot_keep_next_pool().
+ * depot_update_pools() and depot_keep_new_pool().
*/
- if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
+ if (unlikely(can_alloc && smp_load_acquire(&new_pool_required))) {
/*
* Zero out zone modifiers, as we don't have specific zone
* requirements. Keep the flags related to allocation in atomic
@@ -471,9 +470,9 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
} else if (prealloc) {
/*
* Stack depot already contains this stack trace, but let's
- * keep the preallocated memory for the future.
+ * keep the preallocated memory for future.
*/
- depot_keep_next_pool(&prealloc);
+ depot_keep_new_pool(&prealloc);
}

raw_spin_unlock_irqrestore(&pool_lock, flags);
--
2.25.1

2023-10-23 16:31:31

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH v3 09/19] lib/stackdepot: store next pool pointer in new_pool

From: Andrey Konovalov <[email protected]>

Instead of using the last pointer in stack_pools for storing the pointer
to a new pool (which does not yet store any stack records), use a new
new_pool variable.

This is a purely code readability change: it seems more logical to store
the pointer to a pool with a special meaning in a dedicated variable.

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
---
lib/stackdepot.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 7579e20114b1..5315952f26ec 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -85,6 +85,8 @@ static unsigned int stack_hash_mask;

/* Array of memory regions that store stack traces. */
static void *stack_pools[DEPOT_MAX_POOLS];
+/* Newly allocated pool that is not yet added to stack_pools. */
+static void *new_pool;
/* Currently used pool in stack_pools. */
static int pool_index;
/* Offset to the unused space in the currently used pool. */
@@ -235,7 +237,7 @@ static void depot_keep_new_pool(void **prealloc)
* as long as we do not exceed the maximum number of pools.
*/
if (pool_index + 1 < DEPOT_MAX_POOLS) {
- stack_pools[pool_index + 1] = *prealloc;
+ new_pool = *prealloc;
*prealloc = NULL;
}

@@ -266,6 +268,8 @@ static bool depot_update_pools(size_t required_size, void **prealloc)
* stack_depot_fetch().
*/
WRITE_ONCE(pool_index, pool_index + 1);
+ stack_pools[pool_index] = new_pool;
+ new_pool = NULL;
pool_offset = 0;

/*
--
2.25.1

2023-10-23 16:31:32

by andrey.konovalov

[permalink] [raw]
Subject: [PATCH v3 06/19] lib/stackdepot: fix and clean-up atomic annotations

From: Andrey Konovalov <[email protected]>

Drop smp_load_acquire from next_pool_required in depot_init_pool, as
both depot_init_pool and all the smp_store_release's to this variable
are executed under the stack depot lock.

Also simplify and clean up comments accompanying the use of atomic
accesses in the stack depot code.
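
In short, the resulting pattern is (a sketch; see the diff for the
exact code):

	/* depot_init_pool() only runs under pool_lock, so a plain
	   read of the flag is enough: */
	if (!next_pool_required)
		return;
	...
	/* stack_depot_save() still checks the flag locklessly, so the
	   stores (all done under pool_lock) keep release semantics: */
	smp_store_release(&next_pool_required, 0);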

Reviewed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>

---

This patch is not strictly required, as the atomic accesses are fully
removed in one of the later patches. However, I decided to keep the
patch just in case we end up needing these atomics in the following
iterations of this series.

Changes v2->v3:
- Keep parentheses when referring to functions in comments.
- Add comment that explains why depot_init_pool reads next_pool_required
non-atomically.

Changes v1->v2:
- Minor comment fix as suggested by Marco.
- Drop READ_ONCE marking for next_pool_required.
---
lib/stackdepot.c | 29 ++++++++++++++---------------
1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 128ece21afe9..60aea549429a 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -225,10 +225,10 @@ static void depot_init_pool(void **prealloc)
/*
* If the next pool is already initialized or the maximum number of
* pools is reached, do not use the preallocated memory.
- * smp_load_acquire() here pairs with smp_store_release() below and
- * in depot_alloc_stack().
+ * Access next_pool_required non-atomically, as there are no concurrent
+ * write accesses to this variable.
*/
- if (!smp_load_acquire(&next_pool_required))
+ if (!next_pool_required)
return;

/* Check if the current pool is not yet allocated. */
@@ -249,8 +249,8 @@ static void depot_init_pool(void **prealloc)
* At this point, either the next pool is initialized or the
* maximum number of pools is reached. In either case, take
* note that initializing another pool is not required.
- * This smp_store_release pairs with smp_load_acquire() above
- * and in stack_depot_save().
+ * smp_store_release() pairs with smp_load_acquire() in
+ * stack_depot_save().
*/
smp_store_release(&next_pool_required, 0);
}
@@ -273,7 +273,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)

/*
* Move on to the next pool.
- * WRITE_ONCE pairs with potential concurrent read in
+ * WRITE_ONCE() pairs with potential concurrent read in
* stack_depot_fetch().
*/
WRITE_ONCE(pool_index, pool_index + 1);
@@ -281,8 +281,8 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
/*
* If the maximum number of pools is not reached, take note
* that the next pool needs to initialized.
- * smp_store_release() here pairs with smp_load_acquire() in
- * stack_depot_save() and depot_init_pool().
+ * smp_store_release() pairs with smp_load_acquire() in
+ * stack_depot_save().
*/
if (pool_index + 1 < DEPOT_MAX_POOLS)
smp_store_release(&next_pool_required, 1);
@@ -323,7 +323,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
{
union handle_parts parts = { .handle = handle };
/*
- * READ_ONCE pairs with potential concurrent write in
+ * READ_ONCE() pairs with potential concurrent write in
* depot_alloc_stack().
*/
int pool_index_cached = READ_ONCE(pool_index);
@@ -413,8 +413,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,

/*
* Fast path: look the stack trace up without locking.
- * The smp_load_acquire() here pairs with smp_store_release() to
- * |bucket| below.
+ * smp_load_acquire() pairs with smp_store_release() to |bucket| below.
*/
found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
if (found)
@@ -424,8 +423,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
* Check if another stack pool needs to be initialized. If so, allocate
* the memory now - we won't be able to do that under the lock.
*
- * The smp_load_acquire() here pairs with smp_store_release() to
- * |next_pool_inited| in depot_alloc_stack() and depot_init_pool().
+ * smp_load_acquire() pairs with smp_store_release() in
+ * depot_alloc_stack() and depot_init_pool().
*/
if (unlikely(can_alloc && smp_load_acquire(&next_pool_required))) {
/*
@@ -451,8 +450,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
if (new) {
new->next = *bucket;
/*
- * This smp_store_release() pairs with
- * smp_load_acquire() from |bucket| above.
+ * smp_store_release() pairs with smp_load_acquire()
+ * from |bucket| above.
*/
smp_store_release(bucket, new);
found = new;
--
2.25.1

2023-10-24 09:36:29

by Anders Roxell

[permalink] [raw]
Subject: Re: [PATCH v3 00/19] stackdepot: allow evicting stack traces

On 2023-10-23 18:22, [email protected] wrote:
> From: Andrey Konovalov <[email protected]>
>
> Currently, the stack depot grows indefinitely until it reaches its
> capacity. Once that happens, the stack depot stops saving new stack
> traces.
>
> This creates a problem for using the stack depot for in-field testing
> and in production.
>
> For such uses, an ideal stack trace storage should:
>
> 1. Allow saving fresh stack traces on systems with a large uptime while
> limiting the amount of memory used to store the traces;
> 2. Have a low performance impact.
>
> Implementing #1 in the stack depot is impossible with the current
> keep-forever approach. This series targets to address that. Issue #2 is
> left to be addressed in a future series.
>
> This series changes the stack depot implementation to allow evicting
> unneeded stack traces from the stack depot. The users of the stack depot
> can do that via new stack_depot_save_flags(STACK_DEPOT_FLAG_GET) and
> stack_depot_put APIs.
>
> Internal changes to the stack depot code include:
>
> 1. Storing stack traces in fixed-frame-sized slots; the slot size is
> controlled via CONFIG_STACKDEPOT_MAX_FRAMES (vs precisely-sized
> slots in the current implementation);
> 2. Keeping available slots in a freelist (vs keeping an offset to the next
> free slot);
> 3. Using a read/write lock for synchronization (vs a lock-free approach
> combined with a spinlock).
>
> This series also integrates the eviction functionality in the tag-based
> KASAN modes.
>
> Despite wasting some space on rounding up the size of each stack record,
> with CONFIG_STACKDEPOT_MAX_FRAMES=32, the tag-based KASAN modes end up
> consuming ~5% less memory in stack depot during boot (with the default
> stack ring size of 32k entries). The reason for this is the eviction of
> irrelevant stack traces from the stack depot, which frees up space for
> other stack traces.
>
> For other tools that heavily rely on the stack depot, like Generic KASAN
> and KMSAN, this change leads to the stack depot capacity being reached
> sooner than before. However, as these tools are mainly used in fuzzing
> scenarios where the kernel is frequently rebooted, this outcome should
> be acceptable.
>
> There is no measurable boot time performance impact of these changes for
> KASAN on x86-64. I haven't done any tests for arm64 modes (the stack
> depot without performance optimizations is not suitable for intended use
> of those anyway), but I expect a similar result. Obtaining and copying
> stack trace frames when saving them into stack depot is what takes the
> most time.
>
> This series does not yet provide a way to configure the maximum size of
> the stack depot externally (e.g. via a command-line parameter). This will
> be added in a separate series, possibly together with the performance
> improvement changes.
>
> ---
>
> Changes v2->v3:
> - Fix null-ptr-deref by using the proper number of entries for
> initializing the stack table when alloc_large_system_hash()
> auto-calculates the number (see patch #12).
> - Keep STACKDEPOT/STACKDEPOT_ALWAYS_INIT Kconfig options not configurable
> by users.
> - Use lockdep_assert_held_read annotation in depot_fetch_stack.
> - WARN_ON invalid flags in stack_depot_save_flags.
> - Moved "../slab.h" include in mm/kasan/report_tags.c in the right patch.
> - Various comment fixes.
>
> Changes v1->v2:
> - Rework API to stack_depot_save_flags(STACK_DEPOT_FLAG_GET) +
> stack_depot_put.
> - Add CONFIG_STACKDEPOT_MAX_FRAMES Kconfig option.
> - Switch stack depot to using list_head's.
> - Assorted minor changes, see the commit message for each path.
>
> Andrey Konovalov (19):
> lib/stackdepot: check disabled flag when fetching
> lib/stackdepot: simplify __stack_depot_save
> lib/stackdepot: drop valid bit from handles
> lib/stackdepot: add depot_fetch_stack helper
> lib/stackdepot: use fixed-sized slots for stack records
> lib/stackdepot: fix and clean-up atomic annotations
> lib/stackdepot: rework helpers for depot_alloc_stack
> lib/stackdepot: rename next_pool_required to new_pool_required
> lib/stackdepot: store next pool pointer in new_pool
> lib/stackdepot: store free stack records in a freelist
> lib/stackdepot: use read/write lock
> lib/stackdepot: use list_head for stack record links
> kmsan: use stack_depot_save instead of __stack_depot_save
> lib/stackdepot, kasan: add flags to __stack_depot_save and rename
> lib/stackdepot: add refcount for records
> lib/stackdepot: allow users to evict stack traces
> kasan: remove atomic accesses to stack ring entries
> kasan: check object_size in kasan_complete_mode_report_info
> kasan: use stack_depot_put for tag-based modes

Tested-by: Anders Roxell <[email protected]>

Applied this patchset to linux-next tag next-20231023, built an arm64
kernel, and it booted fine in QEMU.

Cheers,
Anders

2023-10-24 13:14:45

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH v3 00/19] stackdepot: allow evicting stack traces

On Mon, 23 Oct 2023 at 18:22, <[email protected]> wrote:
[...]
> ---
>
> Changes v2->v3:
> - Fix null-ptr-deref by using the proper number of entries for
> initializing the stack table when alloc_large_system_hash()
> auto-calculates the number (see patch #12).
> - Keep STACKDEPOT/STACKDEPOT_ALWAYS_INIT Kconfig options not configurable
> by users.
> - Use lockdep_assert_held_read annotation in depot_fetch_stack.
> - WARN_ON invalid flags in stack_depot_save_flags.
> - Moved "../slab.h" include in mm/kasan/report_tags.c in the right patch.
> - Various comment fixes.
>
> Changes v1->v2:
> - Rework API to stack_depot_save_flags(STACK_DEPOT_FLAG_GET) +
> stack_depot_put.
> - Add CONFIG_STACKDEPOT_MAX_FRAMES Kconfig option.
> - Switch stack depot to using list_head's.
> - Assorted minor changes, see the commit message for each path.
>
> Andrey Konovalov (19):
> lib/stackdepot: check disabled flag when fetching
> lib/stackdepot: simplify __stack_depot_save
> lib/stackdepot: drop valid bit from handles
> lib/stackdepot: add depot_fetch_stack helper
> lib/stackdepot: use fixed-sized slots for stack records

1. I know fixed-sized slots are needed for eviction to work, but have
you evaluated if this causes some excessive memory waste now? Or is it
negligible?

If it turns out to be a problem, one way out would be to partition the
freelist into stack size classes; e.g. one for each of stack traces of
size 8, 16, 32, 64.
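
For illustration, something like this could work (a hypothetical
sketch, not part of this series; the names are made up):

	#define DEPOT_NR_SIZE_CLASSES 4

	/* Class i holds free records with up to (8 << i) frames. */
	static struct stack_record *free_stacks[DEPOT_NR_SIZE_CLASSES];

	static int depot_size_class(int nr_frames)
	{
		int c;

		for (c = 0; c < DEPOT_NR_SIZE_CLASSES - 1; c++)
			if (nr_frames <= (8 << c))
				break;
		return c;
	}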

> lib/stackdepot: fix and clean-up atomic annotations
> lib/stackdepot: rework helpers for depot_alloc_stack
> lib/stackdepot: rename next_pool_required to new_pool_required
> lib/stackdepot: store next pool pointer in new_pool
> lib/stackdepot: store free stack records in a freelist
> lib/stackdepot: use read/write lock

2. I still think switching to the percpu_rwsem right away is the right
thing, and not actually a downside. I mentioned this before, but you
promised a follow-up patch, so I trust that this will happen. ;-)
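
The conversion itself should be mostly mechanical (a sketch; note that
percpu_rwsem is a sleeping lock, so the paths that can run in atomic
context would need a closer look):

	static DEFINE_STATIC_PERCPU_RWSEM(pool_rwsem);

	/* Lookup path: */
	percpu_down_read(&pool_rwsem);
	found = find_stack(*bucket, entries, nr_entries, hash);
	percpu_up_read(&pool_rwsem);

	/* Insertion/eviction path: */
	percpu_down_write(&pool_rwsem);
	/* ... add or evict records ... */
	percpu_up_write(&pool_rwsem);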

> lib/stackdepot: use list_head for stack record links
> kmsan: use stack_depot_save instead of __stack_depot_save
> lib/stackdepot, kasan: add flags to __stack_depot_save and rename
> lib/stackdepot: add refcount for records
> lib/stackdepot: allow users to evict stack traces
> kasan: remove atomic accesses to stack ring entries
> kasan: check object_size in kasan_complete_mode_report_info
> kasan: use stack_depot_put for tag-based modes
>
> include/linux/stackdepot.h | 59 ++++--
> lib/Kconfig | 10 +
> lib/stackdepot.c | 418 ++++++++++++++++++++++++-------------
> mm/kasan/common.c | 7 +-
> mm/kasan/generic.c | 9 +-
> mm/kasan/kasan.h | 2 +-
> mm/kasan/report_tags.c | 27 +--
> mm/kasan/tags.c | 24 ++-
> mm/kmsan/core.c | 7 +-
> 9 files changed, 365 insertions(+), 198 deletions(-)

Acked-by: Marco Elver <[email protected]>

The series looks good in its current state. However, see my 2
higher-level comments above.

2023-11-03 21:38:20

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH v3 00/19] stackdepot: allow evicting stack traces

On Tue, Oct 24, 2023 at 3:14 PM Marco Elver <[email protected]> wrote:
>
> 1. I know fixed-sized slots are needed for eviction to work, but have
> you evaluated if this causes some excessive memory waste now? Or is it
> negligible?

With the current default stack depot slot size of 64 frames, a single
stack trace takes up ~3-4x on average compared to precisely sized
slots (KMSAN is closer to ~4x due to its 3-frame-sized linking
records). In other words, an average stack trace appears to be only
around 16-20 frames, so most of a 64-frame slot goes unused.

However, as the tag-based KASAN modes evict old stack traces, the
average total amount of memory used for stack traces is ~0.5 MB (with
the default stack ring size of 32k entries).

I have also just mailed an eviction implementation for Generic KASAN.
With it, the stack traces take up ~1 MB per 1 GB of RAM while running
syzkaller (stack traces are evicted when they are flushed from
quarantine, and quarantine's size depends on the amount of RAM.)

The only problem is KMSAN. Based on a discussion with Alexander, it
might not be possible to implement the eviction for it. So I suspect,
with this change, syzbot might run into the capacity WARNING from time
to time.

The simplest solution would be to bump the maximum size of stack depot
storage to x4 if KMSAN is enabled (to 512 MB from the current 128 MB).
KMSAN requires a significant amount of RAM for shadow anyway.

Would that be acceptable?
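
(Concretely, assuming the current cap is 8192 pools of 16 KB each,
which is what gives the 128 MB, this could be as simple as a
hypothetical:

	#ifdef CONFIG_KMSAN
	#define DEPOT_MAX_POOLS (4 * 8192)
	#else
	#define DEPOT_MAX_POOLS 8192
	#endif

where the exact numbers would follow whatever DEPOT_POOL_SIZE and
DEPOT_MAX_POOLS are at that point.)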

> If it turns out to be a problem, one way out would be to partition the
> freelist into stack size classes; e.g. one for each of stack traces of
> size 8, 16, 32, 64.

This shouldn't be hard to implement.

However, as one of the perf improvements, I'm thinking of saving a
stack trace directly into a stack depot slot (to avoid copying it).
With this, we won't know the stack trace size before it is saved. So
this won't work together with the size classes.

> 2. I still think switching to the percpu_rwsem right away is the right
> thing, and not actually a downside. I mentioned this before, but you
> promised a follow-up patch, so I trust that this will happen. ;-)

First thing on my TODO list wrt perf improvements :)

> Acked-by: Marco Elver <[email protected]>
>
> The series looks good in its current state. However, see my 2
> higher-level comments above.

Thank you!

2023-11-06 17:41:56

by Andrey Konovalov

[permalink] [raw]
Subject: Re: [PATCH v3 00/19] stackdepot: allow evicting stack traces

On Fri, Nov 3, 2023 at 10:37 PM Andrey Konovalov <[email protected]> wrote:
>
> On Tue, Oct 24, 2023 at 3:14 PM Marco Elver <[email protected]> wrote:
> >
> > 1. I know fixed-sized slots are needed for eviction to work, but have
> > you evaluated if this causes some excessive memory waste now? Or is it
> > negligible?
>
> With the current default stack depot slot size of 64 frames, a single
> stack trace takes up ~3-4x on average compared to precisely sized
> slots (KMSAN is closer to ~4x due to its 3-frame-sized linking
> records).
>
> However, as the tag-based KASAN modes evict old stack traces, the
> average total amount of memory used for stack traces is ~0.5 MB (with
> the default stack ring size of 32k entries).
>
> I also have just mailed an eviction implementation for Generic KASAN.
> With it, the stack traces take up ~1 MB per 1 GB of RAM while running
> syzkaller (stack traces are evicted when they are flushed from
> quarantine, and quarantine's size depends on the amount of RAM.)
>
> The only problem is KMSAN. Based on a discussion with Alexander, it
> might not be possible to implement the eviction for it. So I suspect,
> with this change, syzbot might run into the capacity WARNING from time
> to time.
>
> The simplest solution would be to bump the maximum size of stack depot
> storage to x4 if KMSAN is enabled (to 512 MB from the current 128 MB).
> KMSAN requires a significant amount of RAM for shadow anyway.
>
> Would that be acceptable?
>
> > If it turns out to be a problem, one way out would be to partition the
> > freelist into stack size classes; e.g. one for each of stack traces of
> > size 8, 16, 32, 64.
>
> This shouldn't be hard to implement.
>
> However, as one of the perf improvements, I'm thinking of saving a
> stack trace directly into a stack depot slot (to avoid copying it).
> With this, we won't know the stack trace size before it is saved. So
> this won't work together with the size classes.

On second thought, saving stack traces directly into a stack depot
slot would require either taking the write lock, which would badly
affect performance, or using some other elaborate locking scheme,
which might be overkill.

> > 2. I still think switching to the percpu_rwsem right away is the right
> > thing, and not actually a downside. I mentioned this before, but you
> > promised a follow-up patch, so I trust that this will happen. ;-)
>
> First thing on my TODO list wrt perf improvements :)
>
> > Acked-by: Marco Elver <[email protected]>
> >
> > The series looks good in its current state. However, see my 2
> > higher-level comments above.
>
> Thank you!