From: Andrey Ryabinin <a.ryabinin@samsung.com>
To: akpm@linux-foundation.org
Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany, Dmitry Chernenkov,
	Andrey Konovalov, Yuri Gribov, Konstantin Khlebnikov, Sasha Levin,
	Christoph Lameter, Joonsoo Kim, Dave Hansen, Andi Kleen, Vegard Nossum,
	"H. Peter Anvin", Dave Jones, x86@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v6 04/11] mm: page_alloc: add kasan hooks on alloc and free paths
Date: Wed, 05 Nov 2014 17:53:54 +0300
Message-id: <1415199241-5121-5-git-send-email-a.ryabinin@samsung.com>
X-Mailer: git-send-email 2.1.3
In-reply-to: <1415199241-5121-1-git-send-email-a.ryabinin@samsung.com>
References: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
 <1415199241-5121-1-git-send-email-a.ryabinin@samsung.com>

Add kernel address sanitizer hooks to mark the addresses of allocated pages
as accessible in the corresponding shadow region, and to mark freed pages as
inaccessible.
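As a rough illustration of the effect (the snippet below is hypothetical and
not part of this patch): once these hooks are in place, touching an order-0
page after it has been freed should produce a "use after free" report, because
kasan_free_pages() poisons the page's shadow with KASAN_FREE_PAGE and the
instrumented access then hits that poison.

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Hypothetical demo, for illustration only. */
	static void kasan_page_uaf_demo(void)
	{
		struct page *page = alloc_pages(GFP_KERNEL, 0);
		char *addr;

		if (!page)
			return;

		/* kasan_alloc_pages() unpoisoned this page's shadow. */
		addr = page_address(page);

		/* kasan_free_pages() poisons the shadow with KASAN_FREE_PAGE. */
		__free_pages(page, 0);

		/* Instrumented write hits poisoned shadow -> "use after free". */
		((volatile char *)addr)[0] = 'x';
	}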
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  1 +
 mm/kasan/report.c     |  7 +++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 33 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 01c99fe..9714fba 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -30,6 +30,9 @@ static inline void kasan_disable_local(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -37,6 +40,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_local(void) {}
 static inline void kasan_disable_local(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index e6e7405..aa529ad 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -59,6 +60,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index ea5e464..7d4dcc3 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -247,6 +247,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 void __asan_load1(unsigned long addr)
 {
 	check_memory_region(addr, 1, false);
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 6da1d78..2a6a961 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,7 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE 0xFF /* page was freed */
 #define KASAN_SHADOW_GAP 0xF9 /* address belongs to shadow memory */
 
 struct access_info {
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 7f559b4..bfe3a31 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -57,6 +57,9 @@ static void print_error_description(struct access_info *info)
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case KASAN_SHADOW_GAP:
 		bug_type = "wild memory access";
 		break;
@@ -75,6 +78,10 @@ static void print_address_description(struct access_info *info)
 	page = virt_to_head_page((void *)info->access_addr);
 
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		dump_page(page, "kasan error");
+		dump_stack();
+		break;
 	case KASAN_SHADOW_GAP:
 		pr_err("No metainfo is available for this access.\n");
 		dump_stack();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fa94263..9ae7d0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -59,6 +59,7 @@
 #include
 #include
 #include
+#include <linux/kasan.h>
 #include
 #include
@@ -759,6 +760,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -945,6 +947,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags)
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.1.3