From: Andrey Ryabinin <a.ryabinin@samsung.com>
To: Catalin Marinas, Will Deacon, linux-arm-kernel@lists.infradead.org
Cc: Arnd Bergmann, Linus Walleij, David Keitel, Alexander Potapenko,
	Andrew Morton, Dmitry Vyukov, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Alexey Klimov, Andrey Ryabinin,
	Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", x86@kernel.org
Subject: [PATCH v4 2/7] mm: kasan: introduce generic kasan_populate_zero_shadow()
Date: Fri, 24 Jul 2015 19:41:54 +0300
Message-id: <1437756119-12817-3-git-send-email-a.ryabinin@samsung.com>
In-reply-to: <1437756119-12817-1-git-send-email-a.ryabinin@samsung.com>
References: <1437756119-12817-1-git-send-email-a.ryabinin@samsung.com>

Introduce a generic kasan_populate_zero_shadow(start, end). This function
maps the single kasan_zero_page over the [start, end] address range.

In follow-on patches it will be used for ARMv8 (and maybe other
architectures) and will replace the x86_64-specific populate_zero_shadow().

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
---
A brief usage sketch for the new interface is appended after the patch.

 arch/x86/mm/kasan_init_64.c |  14 ----
 include/linux/kasan.h       |   8 +++
 mm/kasan/Makefile           |   2 +-
 mm/kasan/kasan_init.c       | 151 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 160 insertions(+), 15 deletions(-)
 create mode 100644 mm/kasan/kasan_init.c

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index e1840f3..812086c 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -12,20 +12,6 @@
 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern struct range pfn_mapped[E820_X_MAX];
 
-static pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
-static pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
-static pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
-
-/*
- * This page used as early shadow. We don't use empty_zero_page
- * at early stages, stack instrumentation could write some garbage
- * to this page.
- * Latter we reuse it as zero shadow for large ranges of memory
- * that allowed to access, but not instrumented by kasan
- * (vmalloc/vmemmap ...).
- */
-static unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
-
 static int __init map_range(struct range *range)
 {
 	unsigned long start;
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 6fb1c7d..d795f53 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -12,8 +12,16 @@ struct vm_struct;
 #define KASAN_SHADOW_SCALE_SHIFT 3
 
 #include <asm/kasan.h>
+#include <asm/pgtable.h>
 #include <linux/sched.h>
 
+extern unsigned char kasan_zero_page[PAGE_SIZE];
+extern pte_t kasan_zero_pte[PTRS_PER_PTE];
+extern pmd_t kasan_zero_pmd[PTRS_PER_PMD];
+extern pud_t kasan_zero_pud[PTRS_PER_PUD];
+
+void kasan_populate_zero_shadow(const void *from, const void *to);
+
 static inline void *kasan_mem_to_shadow(const void *addr)
 {
 	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
index bd837b8..6471014 100644
--- a/mm/kasan/Makefile
+++ b/mm/kasan/Makefile
@@ -5,4 +5,4 @@ CFLAGS_REMOVE_kasan.o = -pg
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
 CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
 
-obj-y := kasan.o report.o
+obj-y := kasan.o report.o kasan_init.o
diff --git a/mm/kasan/kasan_init.c b/mm/kasan/kasan_init.c
new file mode 100644
index 0000000..e276853
--- /dev/null
+++ b/mm/kasan/kasan_init.c
@@ -0,0 +1,151 @@
+/*
+ * This file contains some kasan initialization code.
+ *
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * Author: Andrey Ryabinin <a.ryabinin@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/bootmem.h>
+#include <linux/init.h>
+#include <linux/kasan.h>
+#include <linux/kernel.h>
+#include <linux/memblock.h>
+#include <linux/pfn.h>
+
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+
+/*
+ * This page serves two purposes:
+ *   - It is used as early shadow memory. The entire shadow region is
+ *     populated with this page before we can set up normal shadow memory.
+ *   - Later it is reused as the zero shadow to cover large ranges of memory
+ *     that may be accessed but are not handled by kasan (vmalloc/vmemmap ...).
+ */
+unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss;
+
+#if CONFIG_PGTABLE_LEVELS > 3
+pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss;
+#endif
+#if CONFIG_PGTABLE_LEVELS > 2
+pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss;
+#endif
+pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss;
+
+static __init void *early_alloc(size_t size, int node)
+{
+	return memblock_virt_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+					BOOTMEM_ALLOC_ACCESSIBLE, node);
+}
+
+static void __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
+				unsigned long end)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+	pte_t zero_pte;
+
+	zero_pte = pfn_pte(PFN_DOWN(__pa(kasan_zero_page)), PAGE_KERNEL);
+	zero_pte = pte_wrprotect(zero_pte);
+
+	while (addr + PAGE_SIZE <= end) {
+		set_pte_at(&init_mm, addr, pte, zero_pte);
+		addr += PAGE_SIZE;
+		pte = pte_offset_kernel(pmd, addr);
+	}
+}
+
+static void __init zero_pmd_populate(pud_t *pud, unsigned long addr,
+				unsigned long end)
+{
+	pmd_t *pmd = pmd_offset(pud, addr);
+	unsigned long next;
+
+	do {
+		next = pmd_addr_end(addr, end);
+
+		if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) {
+			pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+			continue;
+		}
+
+		if (pmd_none(*pmd)) {
+			pmd_populate_kernel(&init_mm, pmd,
+					early_alloc(PAGE_SIZE, NUMA_NO_NODE));
+		}
+		zero_pte_populate(pmd, addr, next);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void __init zero_pud_populate(pgd_t *pgd, unsigned long addr,
+				unsigned long end)
+{
+	pud_t *pud = pud_offset(pgd, addr);
+	unsigned long next;
+
+	do {
+		next = pud_addr_end(addr, end);
+		if (IS_ALIGNED(addr, PUD_SIZE) && end - addr >= PUD_SIZE) {
+			pmd_t *pmd;
+
+			pud_populate(&init_mm, pud, kasan_zero_pmd);
+			pmd = pmd_offset(pud, addr);
+			pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+			continue;
+		}
+
+		if (pud_none(*pud)) {
+			pud_populate(&init_mm, pud,
+				early_alloc(PAGE_SIZE, NUMA_NO_NODE));
+		}
+		zero_pmd_populate(pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * kasan_populate_zero_shadow - populate shadow memory region with
+ *                              kasan_zero_page
+ * @from: start of the memory range to populate
+ * @to: end of the memory range to populate
+ */
+void __init kasan_populate_zero_shadow(const void *from, const void *to)
+{
+	unsigned long addr = (unsigned long)from;
+	unsigned long end = (unsigned long)to;
+	pgd_t *pgd = pgd_offset_k(addr);
+	unsigned long next;
+
+	do {
+		next = pgd_addr_end(addr, end);
+
+		if (IS_ALIGNED(addr, PGDIR_SIZE) && end - addr >= PGDIR_SIZE) {
+			pud_t *pud;
+			pmd_t *pmd;
+
+			/*
+			 * kasan_zero_pud should already be populated
+			 * with pmds at this point.
+			 * The [pud,pmd]_populate*() calls below are only
+			 * needed for 3- and 2-level page tables, where
+			 * there are no real puds/pmds, so pgd_populate()
+			 * and pud_populate() are no-ops there.
+			 */
+			pgd_populate(&init_mm, pgd, kasan_zero_pud);
+			pud = pud_offset(pgd, addr);
+			pud_populate(&init_mm, pud, kasan_zero_pmd);
+			pmd = pmd_offset(pud, addr);
+			pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
+			continue;
+		}
+
+		if (pgd_none(*pgd)) {
+			pgd_populate(&init_mm, pgd,
+				early_alloc(PAGE_SIZE, NUMA_NO_NODE));
+		}
+		zero_pud_populate(pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
-- 
2.4.5
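
Usage sketch (not part of this series): a minimal, hypothetical example of how
an architecture's early KASAN setup might call the new helper. The function
name example_kasan_init() and the KASAN_SHADOW_START/KASAN_SHADOW_END bounds
are stand-ins for whatever arch-defined symbols the real caller would use;
only kasan_populate_zero_shadow() itself comes from this patch.

/*
 * Hypothetical arch-side caller, assuming KASAN_SHADOW_START/END are the
 * arch-defined bounds of the shadow region.
 */
static void __init example_kasan_init(void)
{
	/*
	 * Point every shadow page at the read-only kasan_zero_page first,
	 * so any shadow access hits a valid mapping.
	 */
	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
				   (void *)KASAN_SHADOW_END);

	/*
	 * Arch-specific code would then map real, writable shadow on top
	 * of this for the ranges kasan actually tracks (kernel image,
	 * linear mapping, ...).
	 */
}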