From: Andrey Ryabinin
To: linux-kernel@vger.kernel.org
Cc: Andrey Ryabinin, Dmitry Vyukov, Konstantin Serebryany,
	Dmitry Chernenkov, Andrey Konovalov, Yuri Gribov,
	Konstantin Khlebnikov, Sasha Levin, Christoph Lameter,
	Joonsoo Kim, Andrew Morton, Dave Hansen, Andi Kleen,
	x86@kernel.org, linux-mm@kvack.org
Subject: [PATCH v11 05/19] mm: page_alloc: add kasan hooks on alloc and free paths
Date: Tue, 03 Feb 2015 20:42:58 +0300
Message-id: <1422985392-28652-6-git-send-email-a.ryabinin@samsung.com>
X-Mailer: git-send-email 2.2.2
In-reply-to: <1422985392-28652-1-git-send-email-a.ryabinin@samsung.com>
References: <1404905415-9046-1-git-send-email-a.ryabinin@samsung.com>
 <1422985392-28652-1-git-send-email-a.ryabinin@samsung.com>
Add kernel address sanitizer hooks to mark allocated pages' addresses
as accessible in the corresponding shadow region. Mark freed pages as
inaccessible.

Signed-off-by: Andrey Ryabinin
---
 include/linux/kasan.h |  6 ++++++
 mm/compaction.c       |  2 ++
 mm/kasan/kasan.c      | 14 ++++++++++++++
 mm/kasan/kasan.h      |  2 ++
 mm/kasan/report.c     | 11 +++++++++++
 mm/page_alloc.c       |  3 +++
 6 files changed, 38 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9102fda..f00c15c 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -34,6 +34,9 @@ static inline void kasan_disable_current(void)
 
 void kasan_unpoison_shadow(const void *address, size_t size);
 
+void kasan_alloc_pages(struct page *page, unsigned int order);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN */
 
 static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
@@ -41,6 +44,9 @@ static inline void kasan_unpoison_shadow(const void *address, size_t size) {}
 static inline void kasan_enable_current(void) {}
 static inline void kasan_disable_current(void) {}
 
+static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+
 #endif /* CONFIG_KASAN */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index b68736c..b2d3ef9 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <linux/kasan.h>
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
@@ -72,6 +73,7 @@ static void map_pages(struct list_head *list)
 	list_for_each_entry(page, list, lru) {
 		arch_alloc_page(page, 0);
 		kernel_map_pages(page, 1, 1);
+		kasan_alloc_pages(page, 0);
 	}
 }
 
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index def8110..b516eb8 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -254,6 +254,20 @@ static __always_inline void check_memory_region(unsigned long addr,
 	kasan_report(addr, size, write, _RET_IP_);
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	if (likely(!PageHighMem(page)))
+		kasan_poison_shadow(page_address(page),
+				PAGE_SIZE << order,
+				KASAN_FREE_PAGE);
+}
+
 #define DEFINE_ASAN_LOAD_STORE(size)				\
 	void __asan_load##size(unsigned long addr)		\
 	{							\
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 648b9c0..d3c90d5 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -6,6 +6,8 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_FREE_PAGE         0xFF  /* page was freed */
+
 struct kasan_access_info {
 	const void *access_addr;
 	const void *first_bad_addr;
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 5835d69..fab8e78 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -54,6 +54,9 @@ static void print_error_description(struct kasan_access_info *info)
 	shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr);
 	switch (shadow_val) {
+	case KASAN_FREE_PAGE:
+		bug_type = "use after free";
+		break;
 	case 0 ... KASAN_SHADOW_SCALE_SIZE - 1:
 		bug_type = "out of bounds access";
 		break;
@@ -69,6 +72,14 @@ static void print_error_description(struct kasan_access_info *info)
 
 static void print_address_description(struct kasan_access_info *info)
 {
+	const void *addr = info->access_addr;
+
+	if ((addr >= (void *)PAGE_OFFSET) &&
+		(addr < high_memory)) {
+		struct page *page = virt_to_head_page(addr);
+		dump_page(page, "kasan: bad access detected");
+	}
+
 	dump_stack();
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d52ab1..31bc2e8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/kasan.h>
 #include
 #include
 #include
@@ -787,6 +788,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
+	kasan_free_pages(page, order);
 
 	if (PageAnon(page))
 		page->mapping = NULL;
@@ -970,6 +972,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
 		prep_zero_page(page, order, gfp_flags);
-- 
2.2.2

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/