From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
    kasan-dev@googlegroups.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Namhyung Kim, Wengang Wang,
    Joonsoo Kim
Subject: [PATCH 03/18] vchecker: mark/unmark the shadow of the allocated objects
Date: Tue, 28 Nov 2017 16:48:38 +0900
Message-Id: <1511855333-3570-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1511855333-3570-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1511855333-3570-1-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Mark/unmark the shadow of the objects that are allocated before the
vchecker is enabled/disabled. This is necessary to debug the system
fully. Since there is no synchronization mechanism that prevents a
slab object from being freed, we cannot mark/unmark the shadow of an
allocated object synchronously with respect to its free. Therefore,
with this patch, the KASAN_KMALLOC_FREE shadow value can be
overwritten with KASAN_VCHECKER_GRAYZONE/0, in which case KASAN would
miss the corresponding use-after-free. However, this is acceptable:
it happens rarely, and this feature is meant to be used as a last
resort. The race could be eliminated by introducing a second shadow
memory; that will be considered once the usefulness of the feature is
justified.
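To make the race concrete, here is a minimal userspace sketch of the
shadow-marking scheme described above. It is an illustration only, not
kernel code: 0xFB matches KASAN's freed-object shadow convention, but
0xF9 merely stands in for KASAN_VCHECKER_GRAYZONE (whose real value is
not shown in this patch), and the helper names are invented for the
example.

#include <stdio.h>
#include <stdint.h>

#define SHADOW_SCALE      8     /* KASAN: one shadow byte covers 8 bytes */
#define KMALLOC_FREE      0xFB  /* KASAN's freed-object shadow value */
#define VCHECKER_GRAYZONE 0xF9  /* stand-in value, an assumption here */
#define OBJ_SIZE          32

static uint8_t shadow[OBJ_SIZE / SHADOW_SCALE]; /* shadow for one object */

/* Rough analogue of kasan_poison_shadow() over one whole object. */
static void poison_shadow(uint8_t value)
{
    size_t i;

    for (i = 0; i < sizeof(shadow); i++)
        shadow[i] = value;
}

/*
 * Rough analogue of vchecker_enable_obj(): an object whose shadow is
 * negative (inaccessible) and not already GRAYZONE looks freed and is
 * skipped. 'freed_after_check' simulates a free racing in between the
 * check and the write, which the patch cannot prevent.
 */
static void enable_obj(int freed_after_check)
{
    int8_t val = (int8_t)shadow[0];

    if (val < 0 && (uint8_t)val != VCHECKER_GRAYZONE)
        return;                       /* looks freed: leave marker intact */

    if (freed_after_check)
        poison_shadow(KMALLOC_FREE);  /* the racing free lands here */

    poison_shadow(VCHECKER_GRAYZONE); /* overwrites FREE: UAF check lost */
}

int main(void)
{
    poison_shadow(0);   /* object starts out live and accessible */
    enable_obj(1);      /* a free races with the marking */
    printf("shadow is now 0x%02X; the 0x%02X FREE marker was lost\n",
           shadow[0], KMALLOC_FREE);
    return 0;
}

The point is the ordering: if the object is freed between the shadow
check and the poison write, the FREE marker is overwritten and KASAN
can no longer flag a later use-after-free, which the commit message
accepts as a rare, tolerable miss.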
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/slab.h |  6 ++++++
 mm/kasan/vchecker.c  | 27 +++++++++++++++++++++++++++
 mm/kasan/vchecker.h  |  7 +++++++
 mm/slab.c            | 31 +++++++++++++++++++++++++++++++
 mm/slab.h            |  4 ++--
 mm/slub.c            | 36 ++++++++++++++++++++++++++++++++++--
 6 files changed, 107 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 47e70e6..f6efbbe 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -108,6 +108,12 @@
 #define SLAB_KASAN 0
 #endif
 
+#ifdef CONFIG_VCHECKER
+#define SLAB_VCHECKER 0x10000000UL
+#else
+#define SLAB_VCHECKER 0x00000000UL
+#endif
+
 /* The following flags affect the page allocator grouping pages by mobility */
 /* Objects are reclaimable */
 #define SLAB_RECLAIM_ACCOUNT ((slab_flags_t __force)0x00020000U)
diff --git a/mm/kasan/vchecker.c b/mm/kasan/vchecker.c
index 0ac031c..0b8a1e7 100644
--- a/mm/kasan/vchecker.c
+++ b/mm/kasan/vchecker.c
@@ -109,6 +109,12 @@ static int remove_cbs(struct kmem_cache *s, struct vchecker_type *t)
 	return 0;
 }
 
+void vchecker_cache_create(struct kmem_cache *s,
+			size_t *size, slab_flags_t *flags)
+{
+	*flags |= SLAB_VCHECKER;
+}
+
 void vchecker_kmalloc(struct kmem_cache *s, const void *object, size_t size)
 {
 	struct vchecker *checker;
@@ -130,6 +136,26 @@ void vchecker_kmalloc(struct kmem_cache *s, const void *object, size_t size)
 	rcu_read_unlock();
 }
 
+void vchecker_enable_obj(struct kmem_cache *s, const void *object,
+			size_t size, bool enable)
+{
+	struct vchecker *checker;
+	struct vchecker_cb *cb;
+	s8 shadow_val = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
+	s8 mark = enable ? KASAN_VCHECKER_GRAYZONE : 0;
+
+	/* It would be freed object. We don't need to mark it */
+	if (shadow_val < 0 && (u8)shadow_val != KASAN_VCHECKER_GRAYZONE)
+		return;
+
+	checker = s->vchecker_cache.checker;
+	list_for_each_entry(cb, &checker->cb_list, list) {
+		kasan_poison_shadow(object + cb->begin,
+			round_up(cb->end - cb->begin,
+				KASAN_SHADOW_SCALE_SIZE), mark);
+	}
+}
+
 static void vchecker_report(unsigned long addr, size_t size, bool write,
 			unsigned long ret_ip, struct kmem_cache *s,
 			struct vchecker_cb *cb, void *object)
@@ -380,6 +406,7 @@ static ssize_t enable_write(struct file *filp, const char __user *ubuf,
 	 * left that accesses checker's cb list if vchecker is disabled.
 	 */
 	synchronize_sched();
+	vchecker_enable_cache(s, enable);
 
 	mutex_unlock(&vchecker_meta);
 	return cnt;
diff --git a/mm/kasan/vchecker.h b/mm/kasan/vchecker.h
index 77ba07d..aa22e8d 100644
--- a/mm/kasan/vchecker.h
+++ b/mm/kasan/vchecker.h
@@ -16,6 +16,11 @@ bool vchecker_check(unsigned long addr, size_t size, bool write,
 			unsigned long ret_ip);
 int init_vchecker(struct kmem_cache *s);
 void fini_vchecker(struct kmem_cache *s);
+void vchecker_cache_create(struct kmem_cache *s, size_t *size,
+			slab_flags_t *flags);
+void vchecker_enable_cache(struct kmem_cache *s, bool enable);
+void vchecker_enable_obj(struct kmem_cache *s, const void *object,
+			size_t size, bool enable);
 
 #else
 static inline void vchecker_kmalloc(struct kmem_cache *s,
@@ -24,6 +29,8 @@ static inline bool vchecker_check(unsigned long addr, size_t size,
 			bool write, unsigned long ret_ip) { return false; }
 static inline int init_vchecker(struct kmem_cache *s) { return 0; }
 static inline void fini_vchecker(struct kmem_cache *s) { }
+static inline void vchecker_cache_create(struct kmem_cache *s,
+			size_t *size, slab_flags_t *flags) {}
 
 #endif
diff --git a/mm/slab.c b/mm/slab.c
index 78ea436..ba45c15 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2551,6 +2551,37 @@ static inline bool shuffle_freelist(struct kmem_cache *cachep,
 }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
+#ifdef CONFIG_VCHECKER
+static void __vchecker_enable_cache(struct kmem_cache *s,
+			struct page *page, bool enable)
+{
+	int i;
+	void *p;
+
+	for (i = 0; i < s->num; i++) {
+		p = index_to_obj(s, page, i);
+		vchecker_enable_obj(s, p, s->object_size, enable);
+	}
+}
+
+void vchecker_enable_cache(struct kmem_cache *s, bool enable)
+{
+	int node;
+	struct kmem_cache_node *n;
+	struct page *page;
+	unsigned long flags;
+
+	for_each_kmem_cache_node(s, node, n) {
+		spin_lock_irqsave(&n->list_lock, flags);
+		list_for_each_entry(page, &n->slabs_partial, lru)
+			__vchecker_enable_cache(s, page, enable);
+		list_for_each_entry(page, &n->slabs_full, lru)
+			__vchecker_enable_cache(s, page, enable);
+		spin_unlock_irqrestore(&n->list_lock, flags);
+	}
+}
+#endif
+
 static void cache_init_objs(struct kmem_cache *cachep,
 			struct page *page)
 {
diff --git a/mm/slab.h b/mm/slab.h
index d054da8..c1cf486 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -136,10 +136,10 @@ static inline slab_flags_t kmem_cache_flags(unsigned long object_size,
 			  SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
 
 #if defined(CONFIG_DEBUG_SLAB)
-#define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
+#define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | SLAB_VCHECKER)
 #elif defined(CONFIG_SLUB_DEBUG)
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
-			  SLAB_TRACE | SLAB_CONSISTENCY_CHECKS)
+			  SLAB_TRACE | SLAB_CONSISTENCY_CHECKS | SLAB_VCHECKER)
 #else
 #define SLAB_DEBUG_FLAGS (0)
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index bb8c949..67364cb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1026,7 +1026,7 @@ static void trace(struct kmem_cache *s, struct page *page, void *object,
 static void add_full(struct kmem_cache *s,
 	struct kmem_cache_node *n, struct page *page)
 {
-	if (!(s->flags & SLAB_STORE_USER))
+	if (!(s->flags & (SLAB_STORE_USER | SLAB_VCHECKER)))
 		return;
 
 	lockdep_assert_held(&n->list_lock);
@@ -1035,7 +1035,7 @@ static void add_full(struct kmem_cache *s,
 static void remove_full(struct kmem_cache *s,
 			struct kmem_cache_node *n, struct page *page)
 {
-	if (!(s->flags & SLAB_STORE_USER))
+	if (!(s->flags & (SLAB_STORE_USER | SLAB_VCHECKER)))
 		return;
 
 	lockdep_assert_held(&n->list_lock);
@@ -1555,6 +1555,38 @@ static inline bool shuffle_freelist(struct kmem_cache *s, struct page *page)
 }
 #endif /* CONFIG_SLAB_FREELIST_RANDOM */
 
+#ifdef CONFIG_VCHECKER
+static void __vchecker_enable_cache(struct kmem_cache *s,
+			struct page *page, bool enable)
+{
+	void *p;
+	void *addr = page_address(page);
+
+	if (!page->inuse)
+		return;
+
+	for_each_object(p, s, addr, page->objects)
+		vchecker_enable_obj(s, p, s->object_size, enable);
+}
+
+void vchecker_enable_cache(struct kmem_cache *s, bool enable)
+{
+	int node;
+	struct kmem_cache_node *n;
+	struct page *page;
+	unsigned long flags;
+
+	for_each_kmem_cache_node(s, node, n) {
+		spin_lock_irqsave(&n->list_lock, flags);
+		list_for_each_entry(page, &n->partial, lru)
+			__vchecker_enable_cache(s, page, enable);
+		list_for_each_entry(page, &n->full, lru)
+			__vchecker_enable_cache(s, page, enable);
+		spin_unlock_irqrestore(&n->list_lock, flags);
+	}
+}
+#endif
+
 static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
 	struct page *page;
-- 
2.7.4