From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
    Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
    Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
    Arnd Bergmann,
Shutemov" , Greg Kroah-Hartman , Kate Stewart , Mike Rapoport , kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org Cc: Kostya Serebryany , Evgeniy Stepanov , Lee Smith , Ramana Radhakrishnan , Jacob Bramley , Ruben Ayrapetyan , Jann Horn , Mark Brand , Chintan Pandya , Vishwath Mohan , Andrey Konovalov Subject: [PATCH v6 14/18] khwasan: add hooks implementation Date: Wed, 29 Aug 2018 13:35:18 +0200 Message-Id: <4267d0903e0fdf9c261b91cf8a2bf0f71047a43c.1535462971.git.andreyknvl@google.com> X-Mailer: git-send-email 2.19.0.rc0.228.g281dcd1b4d0-goog In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org This commit adds KHWASAN specific hooks implementation and adjusts common KASAN and KHWASAN ones. 1. When a new slab cache is created, KHWASAN rounds up the size of the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16). 2. On each kmalloc KHWASAN generates a random tag, sets the shadow memory, that corresponds to this object to this tag, and embeds this tag value into the top byte of the returned pointer. 3. On each kfree KHWASAN poisons the shadow memory with a random tag to allow detection of use-after-free bugs. The rest of the logic of the hook implementation is very much similar to the one provided by KASAN. KHWASAN saves allocation and free stack metadata to the slab object the same was KASAN does this. Signed-off-by: Andrey Konovalov --- mm/kasan/common.c | 82 +++++++++++++++++++++++++++++++++++----------- mm/kasan/kasan.h | 8 +++++ mm/kasan/khwasan.c | 40 ++++++++++++++++++++++ 3 files changed, 111 insertions(+), 19 deletions(-) diff --git a/mm/kasan/common.c b/mm/kasan/common.c index bed8e13c6e1d..938229b26f3a 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -140,6 +140,9 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) { void *shadow_start, *shadow_end; + /* Perform shadow offset calculation based on untagged address */ + address = reset_tag(address); + shadow_start = kasan_mem_to_shadow(address); shadow_end = kasan_mem_to_shadow(address + size); @@ -148,11 +151,20 @@ void kasan_poison_shadow(const void *address, size_t size, u8 value) void kasan_unpoison_shadow(const void *address, size_t size) { - kasan_poison_shadow(address, size, 0); + u8 tag = get_tag(address); + + /* Perform shadow offset calculation based on untagged address */ + address = reset_tag(address); + + kasan_poison_shadow(address, size, tag); if (size & KASAN_SHADOW_MASK) { u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); - *shadow = size & KASAN_SHADOW_MASK; + + if (IS_ENABLED(CONFIG_KASAN_HW)) + *shadow = tag; + else + *shadow = size & KASAN_SHADOW_MASK; } } @@ -200,8 +212,9 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark) void kasan_alloc_pages(struct page *page, unsigned int order) { - if (likely(!PageHighMem(page))) - kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); + if (unlikely(PageHighMem(page))) + return; + kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); } void kasan_free_pages(struct page *page, unsigned int order) @@ -235,6 +248,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) { unsigned int orig_size = *size; + unsigned int redzone_size = 0; int redzone_adjust; /* Add alloc meta. 
@@ -242,20 +256,20 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		*size += sizeof(struct kasan_alloc_meta);
 
 	/* Add free meta. */
-	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
+	     cache->object_size < sizeof(struct kasan_free_meta))) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
 	}
-	redzone_adjust = optimal_redzone(cache->object_size) -
-		(*size - cache->object_size);
+
+	redzone_size = optimal_redzone(cache->object_size);
+	redzone_adjust = redzone_size - (*size - cache->object_size);
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
 	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
-			max(*size, cache->object_size +
-					optimal_redzone(cache->object_size)));
+			max(*size, cache->object_size + redzone_size));
 
 	/*
 	 * If the metadata doesn't fit, don't enable KASAN at all.
@@ -268,6 +282,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		return;
 	}
 
+	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
+
 	*flags |= SLAB_KASAN;
 }
 
@@ -328,15 +344,30 @@ void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 	return kasan_kmalloc(cache, object, cache->object_size, flags);
 }
 
+static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
+{
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		return shadow_byte < 0 ||
+			shadow_byte >= KASAN_SHADOW_SCALE_SIZE;
+	else
+		return tag != (u8)shadow_byte;
+}
+
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 			unsigned long ip, bool quarantine)
 {
 	s8 shadow_byte;
+	u8 tag;
+	void *tagged_object;
 	unsigned long rounded_up_size;
 
+	tag = get_tag(object);
+	tagged_object = object;
+	object = reset_tag(object);
+
 	if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
 	    object)) {
-		kasan_report_invalid_free(object, ip);
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
@@ -345,20 +376,22 @@ static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
 		return false;
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
-	if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) {
-		kasan_report_invalid_free(object, ip);
+	if (shadow_invalid(tag, shadow_byte)) {
+		kasan_report_invalid_free(tagged_object, ip);
 		return true;
 	}
 
 	rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
 
-	if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN)))
+	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) ||
+			unlikely(!(cache->flags & SLAB_KASAN)))
 		return false;
 
 	set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT);
 	quarantine_put(get_free_info(cache, object), cache);
-	return true;
+
+	return IS_ENABLED(CONFIG_KASAN_GENERIC);
 }
 
 bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
@@ -371,6 +404,7 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
+	u8 tag;
 
 	if (gfpflags_allow_blocking(flags))
 		quarantine_reduce();
@@ -383,14 +417,24 @@ void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 	redzone_end = round_up((unsigned long)object + cache->object_size,
 				KASAN_SHADOW_SCALE_SIZE);
 
-	kasan_unpoison_shadow(object, size);
+	/*
+	 * Objects with constructors and objects from SLAB_TYPESAFE_BY_RCU slabs
+	 * have tags preassigned and are already tagged.
+	 */
+	if (IS_ENABLED(CONFIG_KASAN_HW) &&
+			(cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU))
+		tag = get_tag(object);
+	else
+		tag = random_tag();
+
+	kasan_unpoison_shadow(set_tag(object, tag), size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_KMALLOC_REDZONE);
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
 
-	return (void *)object;
+	return set_tag(object, tag);
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
@@ -440,7 +484,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (ptr != page_address(page)) {
+		if (reset_tag(ptr) != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -453,7 +497,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (ptr != page_address(virt_to_head_page(ptr)))
+	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d60859d26be7..6f4f2ebf5f57 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -12,10 +12,18 @@
 #define KHWASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
 #define KHWASAN_TAG_MAX		0xFD /* maximum value for random tags */
 
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_FREE_PAGE		0xFF /* page was freed */
 #define KASAN_PAGE_REDZONE	0xFE /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE	0xFC /* redzone inside slub object */
 #define KASAN_KMALLOC_FREE	0xFB /* object was freed (kmem_cache_free/kfree) */
+#else
+#define KASAN_FREE_PAGE		KHWASAN_TAG_INVALID
+#define KASAN_PAGE_REDZONE	KHWASAN_TAG_INVALID
+#define KASAN_KMALLOC_REDZONE	KHWASAN_TAG_INVALID
+#define KASAN_KMALLOC_FREE	KHWASAN_TAG_INVALID
+#endif
+
 #define KASAN_GLOBAL_REDZONE	0xFA /* redzone for global variable */
 
 /*
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index 9d91bf3c8246..6b1309278e39 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -106,15 +106,52 @@ void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx,
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
+	u8 tag;
+	u8 *shadow_first, *shadow_last, *shadow;
+	void *untagged_addr;
+
+	tag = get_tag((const void *)addr);
+
+	/* Ignore accesses for pointers tagged with 0xff (native kernel
+	 * pointer tag) to suppress false positives caused by kmap.
+	 *
+	 * Some kernel code was written to account for archs that don't keep
+	 * high memory mapped all the time, but rather map and unmap particular
+	 * pages when needed. Instead of storing a pointer to the kernel memory,
+	 * this code saves the address of the page structure and offset within
+	 * that page for later use. Those pages are then mapped and unmapped
+	 * with kmap/kunmap when necessary and virt_to_page is used to get the
+	 * virtual address of the page. For arm64 (that keeps the high memory
+	 * mapped all the time), kmap is turned into a page_address call.
+	 *
+	 * The issue is that with use of the page_address + virt_to_page
+	 * sequence the top byte value of the original pointer gets lost (gets
+	 * set to KHWASAN_TAG_KERNEL (0xFF)).
+	 */
+	if (tag == KHWASAN_TAG_KERNEL)
+		return;
+
+	untagged_addr = reset_tag((const void *)addr);
+	shadow_first = kasan_mem_to_shadow(untagged_addr);
+	shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1);
+
+	for (shadow = shadow_first; shadow <= shadow_last; shadow++) {
+		if (*shadow != tag) {
+			kasan_report(addr, size, write, ret_ip);
+			return;
+		}
+	}
 }
 
 #define DEFINE_HWASAN_LOAD_STORE(size)					\
 	void __hwasan_load##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, false, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_load##size##_noabort);			\
 	void __hwasan_store##size##_noabort(unsigned long addr)		\
 	{								\
+		check_memory_region(addr, size, true, _RET_IP_);	\
 	}								\
 	EXPORT_SYMBOL(__hwasan_store##size##_noabort)
 
@@ -126,15 +163,18 @@ DEFINE_HWASAN_LOAD_STORE(16);
 
 void __hwasan_loadN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_loadN_noabort);
 
 void __hwasan_storeN_noabort(unsigned long addr, unsigned long size)
 {
+	check_memory_region(addr, size, true, _RET_IP_);
 }
 EXPORT_SYMBOL(__hwasan_storeN_noabort);
 
 void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size)
 {
+	kasan_poison_shadow((void *)addr, size, tag);
 }
 EXPORT_SYMBOL(__hwasan_tag_memory);
-- 
2.19.0.rc0.228.g281dcd1b4d0-goog
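
P.S. For readers new to pointer tagging: points 1-3 of the commit message
describe tags being embedded in the top byte of returned pointers via the
set_tag/get_tag/reset_tag helpers introduced earlier in this series. Below
is a minimal stand-alone user-space sketch of that scheme, not kernel code:
the helper bodies, TAG_SHIFT and the sample address are illustrative
assumptions modeled on the patch.

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT	56	/* the tag lives in the top byte of the pointer */
#define TAG_KERNEL	0xff	/* native kernel pointer tag (KHWASAN_TAG_KERNEL) */

static inline uint64_t set_tag(uint64_t addr, uint8_t tag)
{
	/* clear the top byte, then embed the tag there */
	return (addr & ~((uint64_t)0xff << TAG_SHIFT)) |
		((uint64_t)tag << TAG_SHIFT);
}

static inline uint8_t get_tag(uint64_t addr)
{
	return addr >> TAG_SHIFT;	/* extract the top byte */
}

static inline uint64_t reset_tag(uint64_t addr)
{
	return set_tag(addr, TAG_KERNEL);	/* back to an untagged address */
}

int main(void)
{
	uint64_t ptr = 0xffff000012345670ULL;	/* an untagged address */
	uint64_t tagged = set_tag(ptr, 0xab);	/* as done on kmalloc */

	printf("tag: 0x%02x\n", get_tag(tagged));		/* 0xab */
	printf("same object: %d\n", reset_tag(tagged) == reset_tag(ptr));
	return 0;
}

Under ARM64's Top Byte Ignore the tagged pointer can still be dereferenced
directly, which is why only the shadow lookups need reset_tag().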
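Likewise, the check_memory_region() loop added above compares the pointer's
tag against the shadow byte of every 16-byte granule the access touches. A
toy model of that logic, again only a sketch: the flat shadow array and the
offsets stand in for the kernel's kasan_mem_to_shadow() mapping.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GRANULE	16	/* KASAN_SHADOW_SCALE_SIZE */

static uint8_t shadow[64];	/* one shadow byte per GRANULE bytes */

static void tag_memory(size_t addr, size_t size, uint8_t tag)
{
	/* analogous to kasan_poison_shadow(): store the tag in the shadow */
	memset(&shadow[addr / GRANULE], tag,
		(addr + size + GRANULE - 1) / GRANULE - addr / GRANULE);
}

static bool check_region(size_t addr, size_t size, uint8_t ptr_tag)
{
	/* walk every shadow byte covered by [addr, addr + size) */
	for (size_t i = addr / GRANULE; i <= (addr + size - 1) / GRANULE; i++)
		if (shadow[i] != ptr_tag)
			return false;	/* the kernel would kasan_report() here */
	return true;
}

int main(void)
{
	tag_memory(0, 48, 0xab);	/* a 48-byte "object" tagged 0xab */

	printf("in bounds, matching tag: %d\n", check_region(0, 48, 0xab));
	printf("stale/wrong tag:         %d\n", check_region(0, 48, 0xcd));
	printf("out of bounds:           %d\n", check_region(32, 32, 0xab));
	return 0;
}

The out-of-bounds probe fails because the granule past the object still
holds a different (red zone or free) tag, which is the whole detection
mechanism: a mismatch between pointer tag and shadow tag triggers a report.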