This patch adds the alloc and free stacks of the affected page to KASAN's reports,
to help developers see memory corruption caused by page-allocator allocations.

By default, KASAN does not record the alloc or free stack for the page allocator,
which makes page use-after-free and double-free issues difficult to debug.

We make the following changes:
1) KASAN enables PAGE_OWNER by default to get the alloc stack of the page.
2) A new feature option records the free stack of the page.

The new option, KASAN_DUMP_PAGE, relies on DEBUG_PAGEALLOC and records the free
stack of the page, which is very helpful for solving page use-after-free and
double-free issues.

When KASAN_DUMP_PAGE is enabled, KASAN's report shows the last alloc and free
stacks of the page, for example:
BUG: KASAN: use-after-free in kmalloc_pagealloc_uaf+0x70/0x80
Write of size 1 at addr ffffffc0d60e4000 by task cat/115
...
prep_new_page+0x1c8/0x218
get_page_from_freelist+0x1ba0/0x28d0
__alloc_pages_nodemask+0x1d4/0x1978
kmalloc_order+0x28/0x58
kmalloc_order_trace+0x28/0xe0
kmalloc_pagealloc_uaf+0x2c/0x80
page last free stack trace:
__free_pages_ok+0x116c/0x1630
__free_pages+0x50/0x78
kfree+0x1c4/0x250
kmalloc_pagealloc_uaf+0x38/0x80
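For reference, the kind of bug that produces a report like the one above is a
page-allocator use-after-free along the lines of the following sketch (modeled on
kmalloc_pagealloc_uaf() in lib/test_kasan.c; the function name and constants here
are illustrative, not part of this patch):

	#include <linux/slab.h>

	static noinline void pagealloc_uaf_example(void)
	{
		/* larger than any kmem cache, so kmalloc() falls back to the
		 * page allocator via kmalloc_order() */
		size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
		char *ptr = kmalloc(size, GFP_KERNEL);

		if (!ptr)
			return;

		kfree(ptr);	/* the page is returned via __free_pages() */
		ptr[0] = 0;	/* use-after-free: KASAN reports it as above */
	}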
Changes since v1:
- slim page_owner and move it into kasan
- enable the feature by default
Changes since v2:
- enable PAGE_OWNER by default
- use DEBUG_PAGEALLOC to get page information
cc: Andrey Ryabinin <[email protected]>
cc: Vlastimil Babka <[email protected]>
cc: Andrey Konovalov <[email protected]>
Signed-off-by: Walter Wu <[email protected]>
---
lib/Kconfig.kasan | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 4fafba1a923b..4d59458c0c5a 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -41,6 +41,7 @@ config KASAN_GENERIC
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
select STACKDEPOT
+ select PAGE_OWNER
help
Enables generic KASAN mode.
Supported in both GCC and Clang. With GCC it requires version 4.9.2
@@ -63,6 +64,7 @@ config KASAN_SW_TAGS
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
select STACKDEPOT
+ select PAGE_OWNER
help
Enables software tag-based KASAN mode.
This mode requires Top Byte Ignore support by the CPU and therefore
@@ -135,6 +137,19 @@ config KASAN_S390_4_LEVEL_PAGING
to 3TB of RAM with KASan enabled). This options allows to force
4-level paging instead.
+config KASAN_DUMP_PAGE
+ bool "Dump the last allocation and freeing stack of the page"
+ depends on KASAN
+ select DEBUG_PAGEALLOC
+ help
+ By default, KASAN enables PAGE_OWNER only to record the alloc
+ stack of each page, which makes page use-after-free and
+ double-free issues difficult to debug.
+ This option selects DEBUG_PAGEALLOC and additionally records
+ the free stack of each page, which is very helpful for solving
+ page use-after-free and double-free issues.
+ This option has a small memory overhead.
+
config TEST_KASAN
tristate "Module for testing KASAN for bug detection"
depends on m && KASAN
--
2.18.0
> On Sep 11, 2019, at 4:39 AM, Walter Wu <[email protected]> wrote:
> [...]
The new config looks redundant and confusing. To me this looks more like a documentation update
in Documentation/dev-tools/kasan.txt to educate developers to select PAGE_OWNER and
DEBUG_PAGEALLOC if needed.
On Thu, 2019-09-12 at 15:53 +0200, Vlastimil Babka wrote:
> On 9/11/19 5:19 PM, Qian Cai wrote:
> >
> > The new config looks redundant and confusing. It looks to me more of a document update
> > in Documentation/dev-tools/kasan.txt to educate developers to select PAGE_OWNER and
> > DEBUG_PAGEALLOC if needed.
>
>
> Agreed. But if you want it fully automatic, how about something
> like this (on top of mmotm/next)? If you agree I'll add changelog
> and send properly.
>
> ----8<----
>
> From a528d14c71d7fdf5872ca8ab3bd1b5bad26670c9 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <[email protected]>
> Date: Thu, 12 Sep 2019 15:51:23 +0200
> Subject: [PATCH] make KASAN enable page_owner with free stack capture
>
> ---
> include/linux/page_owner.h | 1 +
> lib/Kconfig.kasan | 4 ++++
> mm/Kconfig.debug | 5 +++++
> mm/page_alloc.c | 6 +++++-
> mm/page_owner.c | 37 ++++++++++++++++++++++++-------------
> 5 files changed, 39 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
> index 8679ccd722e8..6ffe8b81ba85 100644
> --- a/include/linux/page_owner.h
> +++ b/include/linux/page_owner.h
> @@ -6,6 +6,7 @@
>
> #ifdef CONFIG_PAGE_OWNER
> extern struct static_key_false page_owner_inited;
> +extern bool page_owner_free_stack_disabled;
> extern struct page_ext_operations page_owner_ops;
>
> extern void __reset_page_owner(struct page *page, unsigned int order);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 6c9682ce0254..dc560c7562e8 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -41,6 +41,8 @@ config KASAN_GENERIC
> select SLUB_DEBUG if SLUB
> select CONSTRUCTORS
> select STACKDEPOT
> + select PAGE_OWNER
> + select PAGE_OWNER_FREE_STACK
> help
> Enables generic KASAN mode.
> Supported in both GCC and Clang. With GCC it requires version 4.9.2
> @@ -63,6 +65,8 @@ config KASAN_SW_TAGS
> select SLUB_DEBUG if SLUB
> select CONSTRUCTORS
> select STACKDEPOT
> + select PAGE_OWNER
> + select PAGE_OWNER_FREE_STACK
> help
> Enables software tag-based KASAN mode.
> This mode requires Top Byte Ignore support by the CPU and therefore
I don't know how the KASAN people will feel about this. In particular, KASAN_SW_TAGS
was designed for people who found the memory footprint of KASAN_GENERIC too high,
as far as I can tell.
I guess it is up to them to test the new memory footprint of KASAN and see if they
are happy with it.
> [...]
On 9/12/19 4:08 PM, Walter Wu wrote:
>
>> extern void __reset_page_owner(struct page *page, unsigned int order);
>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>> index 6c9682ce0254..dc560c7562e8 100644
>> --- a/lib/Kconfig.kasan
>> +++ b/lib/Kconfig.kasan
>> @@ -41,6 +41,8 @@ config KASAN_GENERIC
>> select SLUB_DEBUG if SLUB
>> select CONSTRUCTORS
>> select STACKDEPOT
>> + select PAGE_OWNER
>> + select PAGE_OWNER_FREE_STACK
>> help
>> Enables generic KASAN mode.
>> Supported in both GCC and Clang. With GCC it requires version 4.9.2
>> @@ -63,6 +65,8 @@ config KASAN_SW_TAGS
>> select SLUB_DEBUG if SLUB
>> select CONSTRUCTORS
>> select STACKDEPOT
>> + select PAGE_OWNER
>> + select PAGE_OWNER_FREE_STACK
>> help
>
> What is the difference between PAGE_OWNER+PAGE_OWNER_FREE_STACK and
> DEBUG_PAGEALLOC?
Same memory usage, but debug_pagealloc also means extra checks and restricting
access to freed pages, to catch use-after-free.
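To make "extra checks and restricting access" concrete, here is a simplified sketch
(not the exact mainline code) of what debug_pagealloc adds on the free path, beyond
the stack that page_owner records:

	/* Simplified illustration: when debug_pagealloc is enabled, freed
	 * pages are unmapped from the kernel linear mapping, so a later
	 * use-after-free faults immediately instead of silently touching
	 * stale memory. */
	static void debug_pagealloc_on_free(struct page *page, unsigned int order)
	{
		if (!debug_pagealloc_enabled())
			return;
		kernel_map_pages(page, 1 << order, 0);	/* 0 == unmap */
	}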
> If you directly enable PAGE_OWNER+PAGE_OWNER_FREE_STACK, don't you
> think low-memory devices will be reluctant to use KASAN?
OK, so it should be optional? But I think it's enough to distinguish between no
PAGE_OWNER at all and PAGE_OWNER+PAGE_OWNER_FREE_STACK together - I don't see
much point in PAGE_OWNER alone for this kind of debugging.
So how about this? KASAN wouldn't select PAGE_OWNER* but it would be recommended
in the help+docs. When PAGE_OWNER and KASAN are both selected by the user,
PAGE_OWNER_FREE_STACK also gets selected, and both will also be runtime-enabled
without an explicit page_owner=on.
I mostly want to avoid another boot-time option for enabling
PAGE_OWNER_FREE_STACK.
Would that be enough flexibility for low-memory devices vs full-fledged
debugging?
On 9/11/19 5:19 PM, Qian Cai wrote:
>
> The new config looks redundant and confusing. It looks to me more of a document update
> in Documentation/dev-tools/kasan.txt to educate developers to select PAGE_OWNER and
> DEBUG_PAGEALLOC if needed.
Agreed. But if you want it fully automatic, how about something
like this (on top of mmotm/next)? If you agree I'll add changelog
and send properly.
----8<----
From a528d14c71d7fdf5872ca8ab3bd1b5bad26670c9 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <[email protected]>
Date: Thu, 12 Sep 2019 15:51:23 +0200
Subject: [PATCH] make KASAN enable page_owner with free stack capture
---
include/linux/page_owner.h | 1 +
lib/Kconfig.kasan | 4 ++++
mm/Kconfig.debug | 5 +++++
mm/page_alloc.c | 6 +++++-
mm/page_owner.c | 37 ++++++++++++++++++++++++-------------
5 files changed, 39 insertions(+), 14 deletions(-)
diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 8679ccd722e8..6ffe8b81ba85 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -6,6 +6,7 @@
#ifdef CONFIG_PAGE_OWNER
extern struct static_key_false page_owner_inited;
+extern bool page_owner_free_stack_disabled;
extern struct page_ext_operations page_owner_ops;
extern void __reset_page_owner(struct page *page, unsigned int order);
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 6c9682ce0254..dc560c7562e8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -41,6 +41,8 @@ config KASAN_GENERIC
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
select STACKDEPOT
+ select PAGE_OWNER
+ select PAGE_OWNER_FREE_STACK
help
Enables generic KASAN mode.
Supported in both GCC and Clang. With GCC it requires version 4.9.2
@@ -63,6 +65,8 @@ config KASAN_SW_TAGS
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
select STACKDEPOT
+ select PAGE_OWNER
+ select PAGE_OWNER_FREE_STACK
help
Enables software tag-based KASAN mode.
This mode requires Top Byte Ignore support by the CPU and therefore
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..a71d52636687 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -13,6 +13,7 @@ config DEBUG_PAGEALLOC
depends on DEBUG_KERNEL
depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
+ select PAGE_OWNER_FREE_STACK if PAGE_OWNER
---help---
Unmap pages from the kernel linear mapping after free_pages().
Depending on runtime enablement, this results in a small or large
@@ -62,6 +63,10 @@ config PAGE_OWNER
If unsure, say N.
+config PAGE_OWNER_FREE_STACK
+ def_bool n
+ depends on PAGE_OWNER
+
config PAGE_POISONING
bool "Poison pages after freeing"
select PAGE_POISONING_NO_SANITY if HIBERNATION
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5d62f1c2851..d9e44671af3f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -710,8 +710,12 @@ static int __init early_debug_pagealloc(char *buf)
if (kstrtobool(buf, &enable))
return -EINVAL;
- if (enable)
+ if (enable) {
static_branch_enable(&_debug_pagealloc_enabled);
+#ifdef CONFIG_PAGE_OWNER
+ page_owner_free_stack_disabled = false;
+#endif
+ }
return 0;
}
diff --git a/mm/page_owner.c b/mm/page_owner.c
index dee931184788..d4551d7012d0 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -24,13 +24,15 @@ struct page_owner {
short last_migrate_reason;
gfp_t gfp_mask;
depot_stack_handle_t handle;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t free_handle;
#endif
};
static bool page_owner_disabled = true;
+bool page_owner_free_stack_disabled = true;
DEFINE_STATIC_KEY_FALSE(page_owner_inited);
+static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
static depot_stack_handle_t dummy_handle;
static depot_stack_handle_t failure_handle;
@@ -46,6 +48,11 @@ static int __init early_page_owner_param(char *buf)
if (strcmp(buf, "on") == 0)
page_owner_disabled = false;
+ if (IS_ENABLED(CONFIG_KASAN)) {
+ page_owner_disabled = false;
+ page_owner_free_stack_disabled = false;
+ }
+
return 0;
}
early_param("page_owner", early_page_owner_param);
@@ -91,6 +98,8 @@ static void init_page_owner(void)
register_failure_stack();
register_early_stack();
static_branch_enable(&page_owner_inited);
+ if (!page_owner_free_stack_disabled)
+ static_branch_enable(&page_owner_free_stack);
init_early_allocated_pages();
}
@@ -148,11 +157,11 @@ void __reset_page_owner(struct page *page, unsigned int order)
{
int i;
struct page_ext *page_ext;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t handle = 0;
struct page_owner *page_owner;
- if (debug_pagealloc_enabled())
+ if (static_branch_unlikely(&page_owner_free_stack))
handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
#endif
@@ -161,8 +170,8 @@ void __reset_page_owner(struct page *page, unsigned int order)
if (unlikely(!page_ext))
continue;
__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
-#ifdef CONFIG_DEBUG_PAGEALLOC
- if (debug_pagealloc_enabled()) {
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
page_owner = get_page_owner(page_ext);
page_owner->free_handle = handle;
}
@@ -451,14 +460,16 @@ void __dump_page_owner(struct page *page)
stack_trace_print(entries, nr_entries, 0);
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
- handle = READ_ONCE(page_owner->free_handle);
- if (!handle) {
- pr_alert("page_owner free stack trace missing\n");
- } else {
- nr_entries = stack_depot_fetch(handle, &entries);
- pr_alert("page last free stack trace:\n");
- stack_trace_print(entries, nr_entries, 0);
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
+ handle = READ_ONCE(page_owner->free_handle);
+ if (!handle) {
+ pr_alert("page_owner free stack trace missing\n");
+ } else {
+ nr_entries = stack_depot_fetch(handle, &entries);
+ pr_alert("page last free stack trace:\n");
+ stack_trace_print(entries, nr_entries, 0);
+ }
}
#endif
--
2.23.0
> extern void __reset_page_owner(struct page *page, unsigned int order);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 6c9682ce0254..dc560c7562e8 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -41,6 +41,8 @@ config KASAN_GENERIC
> select SLUB_DEBUG if SLUB
> select CONSTRUCTORS
> select STACKDEPOT
> + select PAGE_OWNER
> + select PAGE_OWNER_FREE_STACK
> help
> Enables generic KASAN mode.
> Supported in both GCC and Clang. With GCC it requires version 4.9.2
> @@ -63,6 +65,8 @@ config KASAN_SW_TAGS
> select SLUB_DEBUG if SLUB
> select CONSTRUCTORS
> select STACKDEPOT
> + select PAGE_OWNER
> + select PAGE_OWNER_FREE_STACK
> help
What is the difference between PAGE_OWNER+PAGE_OWNER_FREE_STACK and
DEBUG_PAGEALLOC?
If you directly enable PAGE_OWNER+PAGE_OWNER_FREE_STACK, don't you think
low-memory devices will be reluctant to use KASAN?
On 9/12/19 4:53 PM, Vlastimil Babka wrote:
> On 9/11/19 5:19 PM, Qian Cai wrote:
>>
>> The new config looks redundant and confusing. It looks to me more of a document update
>> in Documentation/dev-tools/kasan.txt to educate developers to select PAGE_OWNER and
>> DEBUG_PAGEALLOC if needed.
>
> Agreed. But if you want it fully automatic, how about something
> like this (on top of mmotm/next)? If you agree I'll add changelog
> and send properly.
>
> ----8<----
>
> From a528d14c71d7fdf5872ca8ab3bd1b5bad26670c9 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <[email protected]>
> Date: Thu, 12 Sep 2019 15:51:23 +0200
> Subject: [PATCH] make KASAN enable page_owner with free stack capture
>
> ---
> include/linux/page_owner.h | 1 +
> lib/Kconfig.kasan | 4 ++++
> mm/Kconfig.debug | 5 +++++
> mm/page_alloc.c | 6 +++++-
> mm/page_owner.c | 37 ++++++++++++++++++++++++-------------
> 5 files changed, 39 insertions(+), 14 deletions(-)
>
Looks OK to me. This is certainly better than a full dependency on DEBUG_PAGEALLOC, which we don't need.
On Thu, 2019-09-12 at 16:31 +0200, Vlastimil Babka wrote:
> On 9/12/19 4:08 PM, Walter Wu wrote:
> >
> >> extern void __reset_page_owner(struct page *page, unsigned int order);
> >> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> >> index 6c9682ce0254..dc560c7562e8 100644
> >> --- a/lib/Kconfig.kasan
> >> +++ b/lib/Kconfig.kasan
> >> @@ -41,6 +41,8 @@ config KASAN_GENERIC
> >> select SLUB_DEBUG if SLUB
> >> select CONSTRUCTORS
> >> select STACKDEPOT
> >> + select PAGE_OWNER
> >> + select PAGE_OWNER_FREE_STACK
> >> help
> >> Enables generic KASAN mode.
> >> Supported in both GCC and Clang. With GCC it requires version 4.9.2
> >> @@ -63,6 +65,8 @@ config KASAN_SW_TAGS
> >> select SLUB_DEBUG if SLUB
> >> select CONSTRUCTORS
> >> select STACKDEPOT
> >> + select PAGE_OWNER
> >> + select PAGE_OWNER_FREE_STACK
> >> help
> >
> > What is the difference between PAGE_OWNER+PAGE_OWNER_FREE_STACK and
> > DEBUG_PAGEALLOC?
>
> Same memory usage, but debug_pagealloc means also extra checks and
> restricting memory access to freed pages to catch UAF.
>
> > If you directly enable PAGE_OWNER+PAGE_OWNER_FREE_STACK, don't you
> > think low-memory devices will be reluctant to use KASAN?
>
> OK, so it should be optional? But I think it's enough to distinguish no
> PAGE_OWNER at all, and PAGE_OWNER+PAGE_OWNER_FREE_STACK together - I
> don't see much point in PAGE_OWNER only for this kind of debugging.
>
If possible, it should be optional.
In my experience, PAGE_OWNER is usually used to debug memory leaks.
> So how about this? KASAN wouldn't select PAGE_OWNER* but it would be
> recommended in the help+docs. When PAGE_OWNER and KASAN are selected by
> user, PAGE_OWNER_FREE_STACK gets also selected, and both will be also
> runtime enabled without explicit page_owner=on.
> I mostly want to avoid another boot-time option for enabling
> PAGE_OWNER_FREE_STACK.
> Would that be enough flexibility for low-memory devices vs full-fledged
> debugging?
We usually look at the config options to decide whether a feature fits the
platform. The boot-time option is not a problem for us, because whoever enables
the feature should know what extra steps are needed.
On 9/12/19 5:31 PM, Vlastimil Babka wrote:
> On 9/12/19 4:08 PM, Walter Wu wrote:
>>
>>> extern void __reset_page_owner(struct page *page, unsigned int order);
>>> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
>>> index 6c9682ce0254..dc560c7562e8 100644
>>> --- a/lib/Kconfig.kasan
>>> +++ b/lib/Kconfig.kasan
>>> @@ -41,6 +41,8 @@ config KASAN_GENERIC
>>> select SLUB_DEBUG if SLUB
>>> select CONSTRUCTORS
>>> select STACKDEPOT
>>> + select PAGE_OWNER
>>> + select PAGE_OWNER_FREE_STACK
>>> help
>>> Enables generic KASAN mode.
>>> Supported in both GCC and Clang. With GCC it requires version 4.9.2
>>> @@ -63,6 +65,8 @@ config KASAN_SW_TAGS
>>> select SLUB_DEBUG if SLUB
>>> select CONSTRUCTORS
>>> select STACKDEPOT
>>> + select PAGE_OWNER
>>> + select PAGE_OWNER_FREE_STACK
>>> help
>>
>> What is the difference between PAGE_OWNER+PAGE_OWNER_FREE_STACK and
>> DEBUG_PAGEALLOC?
>
> Same memory usage, but debug_pagealloc means also extra checks and restricting memory access to freed pages to catch UAF.
>
>> If you directly enable PAGE_OWNER+PAGE_OWNER_FREE_STACK, don't you
>> think low-memory devices will be reluctant to use KASAN?
>
> OK, so it should be optional? But I think it's enough to distinguish no PAGE_OWNER at all, and PAGE_OWNER+PAGE_OWNER_FREE_STACK together - I don't see much point in PAGE_OWNER only for this kind of debugging.
>
> So how about this? KASAN wouldn't select PAGE_OWNER* but it would be recommended in the help+docs. When PAGE_OWNER and KASAN are selected by user, PAGE_OWNER_FREE_STACK gets also selected, and both will be also runtime enabled without explicit page_owner=on.
> I mostly want to avoid another boot-time option for enabling PAGE_OWNER_FREE_STACK.
> Would that be enough flexibility for low-memory devices vs full-fledged debugging?
Originally I thought that with your patch users could still disable page_owner via the "page_owner=off" boot param,
but now I realize that this won't work. I think it should work; we should allow users to disable it.
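(For reference, the parameter handler from the draft above, lightly condensed as an
illustration: with CONFIG_KASAN=y it force-enables page_owner regardless of the value
passed, so "page_owner=off" is silently ignored.)

	static int __init early_page_owner_param(char *buf)
	{
		if (!buf)
			return -EINVAL;

		if (strcmp(buf, "on") == 0)
			page_owner_disabled = false;

		/* unconditional when KASAN is built in, so "off" is never honoured */
		if (IS_ENABLED(CONFIG_KASAN)) {
			page_owner_disabled = false;
			page_owner_free_stack_disabled = false;
		}

		return 0;
	}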
Another alternative (and actually an easier one to implement): leave PAGE_OWNER as is (no "select"s in Kconfigs)
and make PAGE_OWNER_FREE_STACK like this:
+config PAGE_OWNER_FREE_STACK
+ def_bool KASAN || DEBUG_PAGEALLOC
+ depends on PAGE_OWNER
+
So, users that want the alloc/free stacks will have to enable CONFIG_PAGE_OWNER=y and add page_owner=on to the boot cmdline.
Basically the difference between these alternatives is whether we enable page_owner by default or not, but there is always a possibility to disable it.
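Concretely, a KASAN user who wants the page stacks under this alternative would end
up with something like the following (illustrative, not text from the patch):

	CONFIG_KASAN=y
	CONFIG_PAGE_OWNER=y
	# CONFIG_PAGE_OWNER_FREE_STACK then defaults to y via def_bool KASAN || DEBUG_PAGEALLOC

plus page_owner=on on the kernel command line.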
On 9/12/19 7:05 PM, Andrey Ryabinin wrote:
>
> Or another alternative option (and actually easier one to implement), leave PAGE_OWNER as is (no "select"s in Kconfigs)
> Make PAGE_OWNER_FREE_STACK like this:
>
> +config PAGE_OWNER_FREE_STACK
> + def_bool KASAN || DEBUG_PAGEALLOC
> + depends on PAGE_OWNER
> +
>
> So, users that want alloc/free stack will have to enable CONFIG_PAGE_OWNER=y and add page_owner=on to boot cmdline.
>
>
> Basically the difference between these alternative is whether we enable page_owner by default or not. But there is always a possibility to disable it.
OK, how about this?
BTW, the bugzilla [1] also mentions that on overflow we might be dumping
the wrong page (including stacks). I'll leave that to somebody familiar
with KASAN internals though.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=203967
----8<----
From 887e3c092c073d996098ac2b101b0feaef110b54 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <[email protected]>
Date: Mon, 16 Sep 2019 11:28:19 +0200
Subject: [PATCH] mm, debug, kasan: save and dump freeing stack trace for kasan
The commit "mm, page_owner, debug_pagealloc: save and dump freeing stack trace"
enhanced page_owner to also store freeing stack trace, when debug_pagealloc is
also enabled. KASAN would also like to do this [1] to improve error reports to
debug e.g. UAF issues. This patch therefore introduces a helper config option
PAGE_OWNER_FREE_STACK, which is enabled when PAGE_OWNER and either of
DEBUG_PAGEALLOC or KASAN is enabled. Boot-time, the free stack saving is
enabled when booting a KASAN kernel with page_owner=on, or non-KASAN kernel
with debug_pagealloc=on and page_owner=on.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=203967
Suggested-by: Dmitry Vyukov <[email protected]>
Suggested-by: Walter Wu <[email protected]>
Suggested-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
---
Documentation/dev-tools/kasan.rst | 4 ++++
include/linux/page_owner.h | 1 +
mm/Kconfig.debug | 4 ++++
mm/page_alloc.c | 6 +++++-
mm/page_owner.c | 35 +++++++++++++++++++------------
5 files changed, 36 insertions(+), 14 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index b72d07d70239..434e605030e9 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -41,6 +41,10 @@ smaller binary while the latter is 1.1 - 2 times faster.
Both KASAN modes work with both SLUB and SLAB memory allocators.
For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
+To augment reports with the last allocation and freeing stack of the physical
+page, it is recommended to also configure the kernel with CONFIG_PAGE_OWNER=y
+and boot with page_owner=on.
+
To disable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:
diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 8679ccd722e8..6ffe8b81ba85 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -6,6 +6,7 @@
#ifdef CONFIG_PAGE_OWNER
extern struct static_key_false page_owner_inited;
+extern bool page_owner_free_stack_disabled;
extern struct page_ext_operations page_owner_ops;
extern void __reset_page_owner(struct page *page, unsigned int order);
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..1ea247da3322 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -62,6 +62,10 @@ config PAGE_OWNER
If unsure, say N.
+config PAGE_OWNER_FREE_STACK
+ def_bool KASAN || DEBUG_PAGEALLOC
+ depends on PAGE_OWNER
+
config PAGE_POISONING
bool "Poison pages after freeing"
select PAGE_POISONING_NO_SANITY if HIBERNATION
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5d62f1c2851..d9e44671af3f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -710,8 +710,12 @@ static int __init early_debug_pagealloc(char *buf)
if (kstrtobool(buf, &enable))
return -EINVAL;
- if (enable)
+ if (enable) {
static_branch_enable(&_debug_pagealloc_enabled);
+#ifdef CONFIG_PAGE_OWNER
+ page_owner_free_stack_disabled = false;
+#endif
+ }
return 0;
}
diff --git a/mm/page_owner.c b/mm/page_owner.c
index dee931184788..b589bfbc4795 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -24,13 +24,15 @@ struct page_owner {
short last_migrate_reason;
gfp_t gfp_mask;
depot_stack_handle_t handle;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t free_handle;
#endif
};
static bool page_owner_disabled = true;
+bool page_owner_free_stack_disabled = true;
DEFINE_STATIC_KEY_FALSE(page_owner_inited);
+static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
static depot_stack_handle_t dummy_handle;
static depot_stack_handle_t failure_handle;
@@ -46,6 +48,9 @@ static int __init early_page_owner_param(char *buf)
if (strcmp(buf, "on") == 0)
page_owner_disabled = false;
+ if (!page_owner_disabled && IS_ENABLED(CONFIG_KASAN))
+ page_owner_free_stack_disabled = false;
+
return 0;
}
early_param("page_owner", early_page_owner_param);
@@ -91,6 +96,8 @@ static void init_page_owner(void)
register_failure_stack();
register_early_stack();
static_branch_enable(&page_owner_inited);
+ if (!page_owner_free_stack_disabled)
+ static_branch_enable(&page_owner_free_stack);
init_early_allocated_pages();
}
@@ -148,11 +155,11 @@ void __reset_page_owner(struct page *page, unsigned int order)
{
int i;
struct page_ext *page_ext;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t handle = 0;
struct page_owner *page_owner;
- if (debug_pagealloc_enabled())
+ if (static_branch_unlikely(&page_owner_free_stack))
handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
#endif
@@ -161,8 +168,8 @@ void __reset_page_owner(struct page *page, unsigned int order)
if (unlikely(!page_ext))
continue;
__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
-#ifdef CONFIG_DEBUG_PAGEALLOC
- if (debug_pagealloc_enabled()) {
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
page_owner = get_page_owner(page_ext);
page_owner->free_handle = handle;
}
@@ -451,14 +458,16 @@ void __dump_page_owner(struct page *page)
stack_trace_print(entries, nr_entries, 0);
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
- handle = READ_ONCE(page_owner->free_handle);
- if (!handle) {
- pr_alert("page_owner free stack trace missing\n");
- } else {
- nr_entries = stack_depot_fetch(handle, &entries);
- pr_alert("page last free stack trace:\n");
- stack_trace_print(entries, nr_entries, 0);
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
+ handle = READ_ONCE(page_owner->free_handle);
+ if (!handle) {
+ pr_alert("page_owner free stack trace missing\n");
+ } else {
+ nr_entries = stack_depot_fetch(handle, &entries);
+ pr_alert("page last free stack trace:\n");
+ stack_trace_print(entries, nr_entries, 0);
+ }
}
#endif
--
2.23.0
On 9/16/19 12:42 PM, Vlastimil Babka wrote:
> On 9/12/19 7:05 PM, Andrey Ryabinin wrote:
>>
>> Or another alternative option (and actually easier one to implement), leave PAGE_OWNER as is (no "select"s in Kconfigs)
>> Make PAGE_OWNER_FREE_STACK like this:
>>
>> +config PAGE_OWNER_FREE_STACK
>> + def_bool KASAN || DEBUG_PAGEALLOC
>> + depends on PAGE_OWNER
>> +
>>
>> So, users that want alloc/free stack will have to enable CONFIG_PAGE_OWNER=y and add page_owner=on to boot cmdline.
>>
>>
>> Basically the difference between these alternative is whether we enable page_owner by default or not. But there is always a possibility to disable it.
>
> OK, how about this?
>
...
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c5d62f1c2851..d9e44671af3f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -710,8 +710,12 @@ static int __init early_debug_pagealloc(char *buf)
> if (kstrtobool(buf, &enable))
> return -EINVAL;
>
> - if (enable)
> + if (enable) {
> static_branch_enable(&_debug_pagealloc_enabled);
> +#ifdef CONFIG_PAGE_OWNER
> + page_owner_free_stack_disabled = false;
I think this won't work with CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y (debug_pagealloc is then enabled without the boot parameter, so this handler never runs and page_owner_free_stack_disabled stays true).
> +#endif
> + }
>
> return 0;
> }
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index dee931184788..b589bfbc4795 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -24,13 +24,15 @@ struct page_owner {
> short last_migrate_reason;
> gfp_t gfp_mask;
> depot_stack_handle_t handle;
> -#ifdef CONFIG_DEBUG_PAGEALLOC
> +#ifdef CONFIG_PAGE_OWNER_FREE_STACK
> depot_stack_handle_t free_handle;
> #endif
> };
>
> static bool page_owner_disabled = true;
> +bool page_owner_free_stack_disabled = true;
> DEFINE_STATIC_KEY_FALSE(page_owner_inited);
> +static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
>
> static depot_stack_handle_t dummy_handle;
> static depot_stack_handle_t failure_handle;
> @@ -46,6 +48,9 @@ static int __init early_page_owner_param(char *buf)
> if (strcmp(buf, "on") == 0)
> page_owner_disabled = false;
>
> + if (!page_owner_disabled && IS_ENABLED(CONFIG_KASAN))
I'd rather keep all the logic in one place, i.e. "if (!page_owner_disabled && (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled()))"
With this no changes in early_debug_pagealloc() required and CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y should also work correctly.
> + page_owner_free_stack_disabled = false;
> +
> return 0;
> }
> early_param("page_owner", early_page_owner_param);
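A sketch of where that combined check would sit (illustrative only; as Vlastimil
notes below, early-param ordering means the debug_pagealloc part may need to move
into init_page_owner() instead):

	static int __init early_page_owner_param(char *buf)
	{
		if (!buf)
			return -EINVAL;

		if (strcmp(buf, "on") == 0)
			page_owner_disabled = false;

		/* decide free-stack saving in one place, per the suggestion above */
		if (!page_owner_disabled &&
		    (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled()))
			page_owner_free_stack_disabled = false;

		return 0;
	}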
On 9/16/19 5:57 PM, Andrey Ryabinin wrote:
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -710,8 +710,12 @@ static int __init early_debug_pagealloc(char *buf)
>> if (kstrtobool(buf, &enable))
>> return -EINVAL;
>>
>> - if (enable)
>> + if (enable) {
>> static_branch_enable(&_debug_pagealloc_enabled);
>> +#ifdef CONFIG_PAGE_OWNER
>> + page_owner_free_stack_disabled = false;
>
> I think this won't work with CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y
Good point, thanks.
>> +#endif
>> + }
>>
>> return 0;
>> }
>> diff --git a/mm/page_owner.c b/mm/page_owner.c
>> index dee931184788..b589bfbc4795 100644
>> --- a/mm/page_owner.c
>> +++ b/mm/page_owner.c
>> @@ -24,13 +24,15 @@ struct page_owner {
>> short last_migrate_reason;
>> gfp_t gfp_mask;
>> depot_stack_handle_t handle;
>> -#ifdef CONFIG_DEBUG_PAGEALLOC
>> +#ifdef CONFIG_PAGE_OWNER_FREE_STACK
>> depot_stack_handle_t free_handle;
>> #endif
>> };
>>
>> static bool page_owner_disabled = true;
>> +bool page_owner_free_stack_disabled = true;
>> DEFINE_STATIC_KEY_FALSE(page_owner_inited);
>> +static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
>>
>> static depot_stack_handle_t dummy_handle;
>> static depot_stack_handle_t failure_handle;
>> @@ -46,6 +48,9 @@ static int __init early_page_owner_param(char *buf)
>> if (strcmp(buf, "on") == 0)
>> page_owner_disabled = false;
>>
>> + if (!page_owner_disabled && IS_ENABLED(CONFIG_KASAN))
>
> I'd rather keep all logic in one place, i.e. "if (!page_owner_disabled && (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())"
> With this no changes in early_debug_pagealloc() required and CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y should also work correctly.
In this function it would not work if the debug_pagealloc param gets
processed later than page_owner, but it should be doable in
init_page_owner(). I'll try, thanks.
>
>> + page_owner_free_stack_disabled = false;
>> +
>> return 0;
>> }
>> early_param("page_owner", early_page_owner_param);
>
>
On 9/16/19 5:57 PM, Andrey Ryabinin wrote:
> I'd rather keep all logic in one place, i.e. "if (!page_owner_disabled && (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())"
> With this no changes in early_debug_pagealloc() required and CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y should also work correctly.
OK.
----8<----
From 7437c43f02682fdde5680fa83e87029f7529e222 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <[email protected]>
Date: Mon, 16 Sep 2019 11:28:19 +0200
Subject: [PATCH] mm, debug, kasan: save and dump freeing stack trace for kasan
The commit "mm, page_owner, debug_pagealloc: save and dump freeing stack trace"
enhanced page_owner to also store freeing stack trace, when debug_pagealloc is
also enabled. KASAN would also like to do this [1] to improve error reports to
debug e.g. UAF issues. This patch therefore introduces a helper config option
PAGE_OWNER_FREE_STACK, which is enabled when PAGE_OWNER and either of
DEBUG_PAGEALLOC or KASAN is enabled. Boot-time, the free stack saving is
enabled when booting a KASAN kernel with page_owner=on, or non-KASAN kernel
with debug_pagealloc=on and page_owner=on.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=203967
Suggested-by: Dmitry Vyukov <[email protected]>
Suggested-by: Walter Wu <[email protected]>
Suggested-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
---
Documentation/dev-tools/kasan.rst | 4 ++++
mm/Kconfig.debug | 4 ++++
mm/page_owner.c | 31 ++++++++++++++++++-------------
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index b72d07d70239..434e605030e9 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -41,6 +41,10 @@ smaller binary while the latter is 1.1 - 2 times faster.
Both KASAN modes work with both SLUB and SLAB memory allocators.
For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
+To augment reports with the last allocation and freeing stack of the physical
+page, it is recommended to also configure the kernel with CONFIG_PAGE_OWNER=y
+and boot with page_owner=on.
+
To disable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..1ea247da3322 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -62,6 +62,10 @@ config PAGE_OWNER
If unsure, say N.
+config PAGE_OWNER_FREE_STACK
+ def_bool KASAN || DEBUG_PAGEALLOC
+ depends on PAGE_OWNER
+
config PAGE_POISONING
bool "Poison pages after freeing"
select PAGE_POISONING_NO_SANITY if HIBERNATION
diff --git a/mm/page_owner.c b/mm/page_owner.c
index dee931184788..8b6b05676158 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -24,13 +24,14 @@ struct page_owner {
short last_migrate_reason;
gfp_t gfp_mask;
depot_stack_handle_t handle;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t free_handle;
#endif
};
static bool page_owner_disabled = true;
DEFINE_STATIC_KEY_FALSE(page_owner_inited);
+static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
static depot_stack_handle_t dummy_handle;
static depot_stack_handle_t failure_handle;
@@ -91,6 +92,8 @@ static void init_page_owner(void)
register_failure_stack();
register_early_stack();
static_branch_enable(&page_owner_inited);
+ if (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())
+ static_branch_enable(&page_owner_free_stack);
init_early_allocated_pages();
}
@@ -148,11 +151,11 @@ void __reset_page_owner(struct page *page, unsigned int order)
{
int i;
struct page_ext *page_ext;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
depot_stack_handle_t handle = 0;
struct page_owner *page_owner;
- if (debug_pagealloc_enabled())
+ if (static_branch_unlikely(&page_owner_free_stack))
handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
#endif
@@ -161,8 +164,8 @@ void __reset_page_owner(struct page *page, unsigned int order)
if (unlikely(!page_ext))
continue;
__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
-#ifdef CONFIG_DEBUG_PAGEALLOC
- if (debug_pagealloc_enabled()) {
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
page_owner = get_page_owner(page_ext);
page_owner->free_handle = handle;
}
@@ -451,14 +454,16 @@ void __dump_page_owner(struct page *page)
stack_trace_print(entries, nr_entries, 0);
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
- handle = READ_ONCE(page_owner->free_handle);
- if (!handle) {
- pr_alert("page_owner free stack trace missing\n");
- } else {
- nr_entries = stack_depot_fetch(handle, &entries);
- pr_alert("page last free stack trace:\n");
- stack_trace_print(entries, nr_entries, 0);
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+ if (static_branch_unlikely(&page_owner_free_stack)) {
+ handle = READ_ONCE(page_owner->free_handle);
+ if (!handle) {
+ pr_alert("page_owner free stack trace missing\n");
+ } else {
+ nr_entries = stack_depot_fetch(handle, &entries);
+ pr_alert("page last free stack trace:\n");
+ stack_trace_print(entries, nr_entries, 0);
+ }
}
#endif
--
2.23.0
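To restate the runtime behavior of this final version (the .config lines are
illustrative; the conditions come from the changelog above):

	# KASAN kernel: free stacks are saved once page_owner is enabled
	CONFIG_KASAN=y
	CONFIG_PAGE_OWNER=y	# PAGE_OWNER_FREE_STACK becomes y automatically
	# boot with: page_owner=on

	# Non-KASAN kernel: additionally needs debug_pagealloc at runtime
	CONFIG_DEBUG_PAGEALLOC=y
	CONFIG_PAGE_OWNER=y
	# boot with: debug_pagealloc=on page_owner=on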
On 9/23/19 11:20 AM, Vlastimil Babka wrote:
> On 9/16/19 5:57 PM, Andrey Ryabinin wrote:
>> I'd rather keep all logic in one place, i.e. "if (!page_owner_disabled && (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())"
>> With this no changes in early_debug_pagealloc() required and CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT=y should also work correctly.
>
> OK.
>
> ----8<----
>
> From 7437c43f02682fdde5680fa83e87029f7529e222 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <[email protected]>
> Date: Mon, 16 Sep 2019 11:28:19 +0200
> Subject: [PATCH] mm, debug, kasan: save and dump freeing stack trace for kasan
>
> The commit "mm, page_owner, debug_pagealloc: save and dump freeing stack trace"
> enhanced page_owner to also store freeing stack trace, when debug_pagealloc is
> also enabled. KASAN would also like to do this [1] to improve error reports to
> debug e.g. UAF issues. This patch therefore introduces a helper config option
> PAGE_OWNER_FREE_STACK, which is enabled when PAGE_OWNER and either of
> DEBUG_PAGEALLOC or KASAN is enabled. Boot-time, the free stack saving is
> enabled when booting a KASAN kernel with page_owner=on, or non-KASAN kernel
> with debug_pagealloc=on and page_owner=on.
>
> [1] https://bugzilla.kernel.org/show_bug.cgi?id=203967
>
> Suggested-by: Dmitry Vyukov <[email protected]>
> Suggested-by: Walter Wu <[email protected]>
> Suggested-by: Andrey Ryabinin <[email protected]>
> Signed-off-by: Vlastimil Babka <[email protected]>
> ---
Reviewed-by: Andrey Ryabinin <[email protected]>