2019-01-07 22:40:09

by Qian Cai

Subject: [PATCH] page_poison: plays nicely with KASAN

KASAN does not play well with page poisoning (CONFIG_PAGE_POISONING):
it triggers false positives in the allocation path,

BUG: KASAN: use-after-free in memchr_inv+0x2ea/0x330
Read of size 8 at addr ffff88881f800000 by task swapper/0
CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc1+ #54
Call Trace:
dump_stack+0xe0/0x19a
print_address_description.cold.2+0x9/0x28b
kasan_report.cold.3+0x7a/0xb5
__asan_report_load8_noabort+0x19/0x20
memchr_inv+0x2ea/0x330
kernel_poison_pages+0x103/0x3d5
get_page_from_freelist+0x15e7/0x4d90

because in the allocation path KASAN has not yet unpoisoned the shadow
for the page when kernel_poison_pages() scans it with memchr_inv(), so
the check for the poison pattern left over from the earlier free reads
memory that KASAN still considers freed.
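
For context, a paraphrased sketch of the check that trips this report,
based on v5.0's mm/page_poison.c (simplified, not the exact source):

	/*
	 * Reached via kernel_poison_pages(page, 1 << order, 1) on the
	 * allocation path to verify the poison pattern is still intact.
	 */
	static void unpoison_page(struct page *page)
	{
		void *addr = kmap_atomic(page);

		/*
		 * check_poison_mem() uses memchr_inv() to scan the whole
		 * page.  Before this patch, post_alloc_hook() ran this scan
		 * ahead of kasan_alloc_pages(), so the shadow still marked
		 * the page as freed and KASAN reported a use-after-free.
		 */
		check_poison_mem(addr, PAGE_SIZE);
		kunmap_atomic(addr);
	}

Moving kasan_alloc_pages() ahead of kernel_poison_pages() in
post_alloc_hook(), as the mm/page_alloc.c hunk below does, unpoisons
the shadow before this scan runs.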

It also triggers false positives in the free path,

BUG: KASAN: slab-out-of-bounds in kernel_poison_pages+0x29e/0x3d5
Write of size 4096 at addr ffff8888112cc000 by task swapper/0/1
CPU: 5 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc1+ #55
Call Trace:
dump_stack+0xe0/0x19a
print_address_description.cold.2+0x9/0x28b
kasan_report.cold.3+0x7a/0xb5
check_memory_region+0x22d/0x250
memset+0x28/0x40
kernel_poison_pages+0x29e/0x3d5
__free_pages_ok+0x75f/0x13e0

because KASAN keeps poisoned redzones around slab objects while page
poisoning needs to write the whole page. Fix this by simply unpoisoning
the shadow for the page before running page poisoning's memset().
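
For clarity, the resulting poison_page() with this change applied reads
roughly as follows (reproduced from the mm/page_poison.c hunk below,
with explanatory comments added):

	static void poison_page(struct page *page)
	{
		void *addr = kmap_atomic(page);

		/*
		 * The page may have just backed a slab, and KASAN still
		 * keeps the object redzones in its shadow poisoned.  Mark
		 * the whole page addressable first so the full-page
		 * memset() below is not reported as slab-out-of-bounds.
		 */
		kasan_unpoison_shadow(addr, PAGE_SIZE);
		memset(addr, PAGE_POISON, PAGE_SIZE);
		kunmap_atomic(addr);
	}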

Signed-off-by: Qian Cai <[email protected]>
---
mm/page_alloc.c | 2 +-
mm/page_poison.c | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d295c9bc01a8..906250a9b89c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,

arch_alloc_page(page, order);
kernel_map_pages(page, 1 << order, 1);
- kernel_poison_pages(page, 1 << order, 1);
kasan_alloc_pages(page, order);
+ kernel_poison_pages(page, 1 << order, 1);
set_page_owner(page, order, gfp_flags);
}

diff --git a/mm/page_poison.c b/mm/page_poison.c
index f0c15e9017c0..e546b70e592a 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -6,6 +6,7 @@
#include <linux/page_ext.h>
#include <linux/poison.h>
#include <linux/ratelimit.h>
+#include <linux/kasan.h>

static bool want_page_poisoning __read_mostly;

@@ -40,6 +41,7 @@ static void poison_page(struct page *page)
{
void *addr = kmap_atomic(page);

+ kasan_unpoison_shadow(addr, PAGE_SIZE);
memset(addr, PAGE_POISON, PAGE_SIZE);
kunmap_atomic(addr);
}
--
2.17.2 (Apple Git-113)



2019-01-11 21:00:03

by Andrey Ryabinin

Subject: Re: [PATCH] page_poison: plays nicely with KASAN



On 1/8/19 1:36 AM, Qian Cai wrote:

>
> diff --git a/mm/page_poison.c b/mm/page_poison.c
> index f0c15e9017c0..e546b70e592a 100644
> --- a/mm/page_poison.c
> +++ b/mm/page_poison.c
> @@ -6,6 +6,7 @@
> #include <linux/page_ext.h>
> #include <linux/poison.h>
> #include <linux/ratelimit.h>
> +#include <linux/kasan.h>
>
> static bool want_page_poisoning __read_mostly;
>
> @@ -40,6 +41,7 @@ static void poison_page(struct page *page)
> {
> void *addr = kmap_atomic(page);
>
> + kasan_unpoison_shadow(addr, PAGE_SIZE);
> memset(addr, PAGE_POISON, PAGE_SIZE);

kasan_disable/enable_current() should be slightly more efficient for this case.
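
This is slightly cheaper because kasan_disable_current() and
kasan_enable_current() only update a per-task counter, while
kasan_unpoison_shadow(addr, PAGE_SIZE) has to rewrite the shadow bytes
for the whole page.  A rough sketch of the mechanism (not the exact
kernel source): reports for the current task are suppressed while its
kasan_depth counter is non-zero.

	static inline void kasan_disable_current(void)
	{
		current->kasan_depth++;	/* reports off while non-zero */
	}

	static inline void kasan_enable_current(void)
	{
		current->kasan_depth--;
	}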

> kunmap_atomic(addr);
> }
>

2019-01-14 23:36:05

by Qian Cai

Subject: [PATCH v2] page_poison: play nicely with KASAN

KASAN does not play well with page poisoning (CONFIG_PAGE_POISONING):
it triggers false positives in the allocation path,

BUG: KASAN: use-after-free in memchr_inv+0x2ea/0x330
Read of size 8 at addr ffff88881f800000 by task swapper/0
CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc1+ #54
Call Trace:
dump_stack+0xe0/0x19a
print_address_description.cold.2+0x9/0x28b
kasan_report.cold.3+0x7a/0xb5
__asan_report_load8_noabort+0x19/0x20
memchr_inv+0x2ea/0x330
kernel_poison_pages+0x103/0x3d5
get_page_from_freelist+0x15e7/0x4d90

because in the allocation path KASAN has not yet unpoisoned the shadow
for the page when kernel_poison_pages() scans it with memchr_inv(), so
the check for the poison pattern left over from the earlier free reads
memory that KASAN still considers freed.

It also triggers false positives in the free path,

BUG: KASAN: slab-out-of-bounds in kernel_poison_pages+0x29e/0x3d5
Write of size 4096 at addr ffff8888112cc000 by task swapper/0/1
CPU: 5 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc1+ #55
Call Trace:
dump_stack+0xe0/0x19a
print_address_description.cold.2+0x9/0x28b
kasan_report.cold.3+0x7a/0xb5
check_memory_region+0x22d/0x250
memset+0x28/0x40
kernel_poison_pages+0x29e/0x3d5
__free_pages_ok+0x75f/0x13e0

because KASAN keeps poisoned redzones around slab objects while page
poisoning needs to write the whole page.

Signed-off-by: Qian Cai <[email protected]>
---

v2: use kasan_disable/enable_current() instead.

mm/page_alloc.c | 2 +-
mm/page_poison.c | 4 ++++
2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d295c9bc01a8..906250a9b89c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,

arch_alloc_page(page, order);
kernel_map_pages(page, 1 << order, 1);
- kernel_poison_pages(page, 1 << order, 1);
kasan_alloc_pages(page, order);
+ kernel_poison_pages(page, 1 << order, 1);
set_page_owner(page, order, gfp_flags);
}

diff --git a/mm/page_poison.c b/mm/page_poison.c
index f0c15e9017c0..21d4f97cb49b 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -6,6 +6,7 @@
#include <linux/page_ext.h>
#include <linux/poison.h>
#include <linux/ratelimit.h>
+#include <linux/kasan.h>

static bool want_page_poisoning __read_mostly;

@@ -40,7 +41,10 @@ static void poison_page(struct page *page)
{
void *addr = kmap_atomic(page);

+ /* KASAN still thinks the page is in use, so skip it. */
+ kasan_disable_current();
memset(addr, PAGE_POISON, PAGE_SIZE);
+ kasan_enable_current();
kunmap_atomic(addr);
}

--
2.17.2 (Apple Git-113)


2019-01-16 04:17:49

by Andrey Ryabinin

Subject: Re: [PATCH v2] page_poison: play nicely with KASAN



On 1/15/19 2:34 AM, Qian Cai wrote:
> KASAN does not play well with page poisoning (CONFIG_PAGE_POISONING):
> it triggers false positives in the allocation path,
>
> BUG: KASAN: use-after-free in memchr_inv+0x2ea/0x330
> Read of size 8 at addr ffff88881f800000 by task swapper/0
> CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc1+ #54
> Call Trace:
> dump_stack+0xe0/0x19a
> print_address_description.cold.2+0x9/0x28b
> kasan_report.cold.3+0x7a/0xb5
> __asan_report_load8_noabort+0x19/0x20
> memchr_inv+0x2ea/0x330
> kernel_poison_pages+0x103/0x3d5
> get_page_from_freelist+0x15e7/0x4d90
>
> because in the allocation path KASAN has not yet unpoisoned the shadow
> for the page when kernel_poison_pages() scans it with memchr_inv(), so
> the check for the poison pattern left over from the earlier free reads
> memory that KASAN still considers freed.
>
> It also triggers false positives in the free path,
>
> BUG: KASAN: slab-out-of-bounds in kernel_poison_pages+0x29e/0x3d5
> Write of size 4096 at addr ffff8888112cc000 by task swapper/0/1
> CPU: 5 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc1+ #55
> Call Trace:
> dump_stack+0xe0/0x19a
> print_address_description.cold.2+0x9/0x28b
> kasan_report.cold.3+0x7a/0xb5
> check_memory_region+0x22d/0x250
> memset+0x28/0x40
> kernel_poison_pages+0x29e/0x3d5
> __free_pages_ok+0x75f/0x13e0
>
> because KASAN keeps poisoned redzones around slab objects while page
> poisoning needs to write the whole page.
>
> Signed-off-by: Qian Cai <[email protected]>
> ---
>
Acked-by: Andrey Ryabinin <[email protected]>