Date: Thu, 29 Oct 2020 14:16:44 +0100
In-Reply-To: <20201029131649.182037-1-elver@google.com>
Message-Id: <20201029131649.182037-5-elver@google.com>
Mime-Version: 1.0
References: <20201029131649.182037-1-elver@google.com>
X-Mailer: git-send-email 2.29.1.341.ge80a0c044ae-goog
Subject: [PATCH v6 4/9] mm, kfence: insert KFENCE hooks for SLAB
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: hpa@zytor.com, paulmck@kernel.org, andreyknvl@google.com,
    aryabinin@virtuozzo.com, luto@kernel.org, bp@alien8.de,
    catalin.marinas@arm.com, cl@linux.com, dave.hansen@linux.intel.com,
    rientjes@google.com, dvyukov@google.com, edumazet@google.com,
    gregkh@linuxfoundation.org, hdanton@sina.com, mingo@redhat.com,
    jannh@google.com, Jonathan.Cameron@huawei.com, corbet@lwn.net,
    iamjoonsoo.kim@lge.com, joern@purestorage.com, keescook@chromium.org,
    mark.rutland@arm.com, penberg@kernel.org, peterz@infradead.org,
    sjpark@amazon.com, tglx@linutronix.de, vbabka@suse.cz,
    will@kernel.org, x86@kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"

From: Alexander Potapenko <glider@google.com>

Inserts KFENCE hooks into the SLAB allocator. To pass the originally
requested size to KFENCE, add an argument 'orig_size' to slab_alloc*().

The additional argument is required to preserve the originally requested
size for kmalloc() allocations, which use size classes (e.g. an
allocation of 272 bytes will return an object of size 512). Therefore,
kmem_cache::size does not represent the kmalloc-caller's requested size,
and we must introduce the argument 'orig_size' to propagate the
originally requested size to KFENCE. Without it, we would be unable to
detect out-of-bounds accesses to objects placed at the end of a KFENCE
object page when the requested size is smaller than the kmalloc size
class the allocation was bucketed into.

When KFENCE is disabled, there is no additional overhead, since the
slab_alloc*() functions are __always_inline.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
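To make the size-class problem concrete, here is a minimal userspace
sketch (not part of this patch; size_class() is a simplified stand-in
for kmalloc's bucketing, which in reality also has non-power-of-two
classes such as 96 and 192 bytes). A 272-byte request is served from
the 512-byte class, so an access at offset 300 stays inside the slab
object even though it is out of bounds with respect to the request;
only the originally requested size lets KFENCE flag it:

	/* Illustrative sketch only; not kernel code. */
	#include <stdio.h>
	#include <stddef.h>

	/* Hypothetical stand-in for kmalloc size-class rounding. */
	static size_t size_class(size_t requested)
	{
		size_t bucket = 32; /* assume 32 bytes is the smallest class */

		while (bucket < requested)
			bucket <<= 1;
		return bucket;
	}

	int main(void)
	{
		size_t orig_size = 272;               /* what the caller asked for */
		size_t bucket = size_class(orig_size); /* 512: what the slab provides */
		size_t offset = 300;                  /* past the request, inside the class */

		printf("requested %zu -> size class %zu\n", orig_size, bucket);
		if (offset >= orig_size && offset < bucket)
			printf("offset %zu: OOB by request, invisible without orig_size\n",
			       offset);
		return 0;
	}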
v5:
* New kfence_shutdown_cache(): we need to defer kfence_shutdown_cache()
  to before the cache is actually freed. In case of SLAB_TYPESAFE_BY_RCU,
  the objects may still legally be used until the next RCU grace period.
* Fix objs_per_slab_page for kfence objects.
* Revert and use fixed obj_to_index() in __check_heap_object().

v3:
* Rewrite patch description to clarify need for 'orig_size'
  [reported by Christopher Lameter].
---
 include/linux/slab_def.h |  3 +++
 mm/slab.c                | 37 ++++++++++++++++++++++++++++---------
 mm/slab_common.c         |  5 ++++-
 3 files changed, 35 insertions(+), 10 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 9eb430c163c2..3aa5e1e73ab6 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_SLAB_DEF_H
 #define _LINUX_SLAB_DEF_H
 
+#include <linux/kfence.h>
 #include <linux/reciprocal_div.h>
 
 /*
@@ -114,6 +115,8 @@ static inline unsigned int obj_to_index(const struct kmem_cache *cache,
 static inline int objs_per_slab_page(const struct kmem_cache *cache,
 				     const struct page *page)
 {
+	if (is_kfence_address(page_address(page)))
+		return 1;
 	return cache->num;
 }
 
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..ebff1c333558 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -100,6 +100,7 @@
 #include	<linux/seq_file.h>
 #include	<linux/notifier.h>
 #include	<linux/kallsyms.h>
+#include	<linux/kfence.h>
 #include	<linux/cpu.h>
 #include	<linux/sysctl.h>
 #include	<linux/module.h>
@@ -3208,7 +3209,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 }
 
 static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
@@ -3221,6 +3222,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	if (unlikely(!cachep))
 		return NULL;
 
+	ptr = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(ptr))
+		goto out_hooks;
+
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
@@ -3253,6 +3258,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
 		memset(ptr, 0, cachep->object_size);
 
+out_hooks:
 	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
 	return ptr;
 }
@@ -3290,7 +3296,7 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
+slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3301,6 +3307,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	if (unlikely(!cachep))
 		return NULL;
 
+	objp = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(objp))
+		goto out;
+
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
 	objp = __do_cache_alloc(cachep, flags);
@@ -3311,6 +3321,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
 		memset(objp, 0, cachep->object_size);
 
+out:
 	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
 	return objp;
 }
@@ -3416,6 +3427,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	if (kfence_free(objp)) {
+		kmemleak_free_recursive(objp, cachep->flags);
+		return;
+	}
+
 	/* Put the object into the quarantine, don't touch it for now. */
 	if (kasan_slab_free(cachep, objp, _RET_IP_))
 		return;
 
@@ -3481,7 +3497,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
  */
 void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
-	void *ret = slab_alloc(cachep, flags, _RET_IP_);
+	void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc(_RET_IP_, ret, cachep->object_size,
 			       cachep->size, flags);
@@ -3514,7 +3530,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
 
 		if (unlikely(!objp))
 			goto error;
@@ -3547,7 +3563,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 {
 	void *ret;
 
-	ret = slab_alloc(cachep, flags, _RET_IP_);
+	ret = slab_alloc(cachep, flags, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(_RET_IP_, ret,
@@ -3573,7 +3589,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
@@ -3591,7 +3607,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
@@ -3652,7 +3668,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
-	ret = slab_alloc(cachep, flags, caller);
+	ret = slab_alloc(cachep, flags, size, caller);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(caller, ret,
@@ -4151,7 +4167,10 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	BUG_ON(objnr >= cachep->num);
 
 	/* Find offset within object. */
-	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+	if (is_kfence_address(ptr))
+		offset = ptr - kfence_object_start(ptr);
+	else
+		offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
 
 	/* Allow address range falling entirely within usercopy region. */
 	if (offset >= cachep->useroffset &&

diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9ccd5dc13f3..13125773dae2 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -12,6 +12,7 @@
 #include <linux/memory.h>
 #include <linux/cache.h>
 #include <linux/compiler.h>
+#include <linux/kfence.h>
 #include <linux/module.h>
 #include <linux/cpu.h>
 #include <linux/uaccess.h>
@@ -435,6 +436,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 	rcu_barrier();
 
 	list_for_each_entry_safe(s, s2, &to_destroy, list) {
+		kfence_shutdown_cache(s);
 #ifdef SLAB_SUPPORTS_SYSFS
 		sysfs_slab_release(s);
 #else
@@ -460,6 +462,7 @@ static int shutdown_cache(struct kmem_cache *s)
 		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
 		schedule_work(&slab_caches_to_rcu_destroy_work);
 	} else {
+		kfence_shutdown_cache(s);
 #ifdef SLAB_SUPPORTS_SYSFS
 		sysfs_slab_unlink(s);
 		sysfs_slab_release(s);
@@ -1171,7 +1174,7 @@ size_t ksize(const void *objp)
 	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !__kasan_check_read(objp, 1))
 		return 0;
 
-	size = __ksize(objp);
+	size = kfence_ksize(objp) ?: __ksize(objp);
 	/*
 	 * We assume that ksize callers could use whole allocated area,
 	 * so we need to unpoison this area.
-- 
2.29.1.341.ge80a0c044ae-goog
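
For readers unfamiliar with the hook shape above, the following
standalone sketch mirrors it: try KFENCE first, and fall back to the
regular slab path when it declines. This is plain C with hypothetical
mock_* stand-ins for kfence_alloc() and __do_cache_alloc(), not kernel
code; the ternary fallback corresponds to the GNU C "a ?: b" shorthand
used in kmem_cache_alloc_bulk() and ksize() in the patch:

	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-in for kfence_alloc(): NULL unless this allocation is sampled. */
	static void *mock_kfence_alloc(size_t size, int sampled)
	{
		return sampled ? malloc(size) : NULL;
	}

	/* Stand-in for the normal slab fast path. */
	static void *mock_slab_alloc(size_t size)
	{
		return malloc(size);
	}

	static void *alloc_with_hook(size_t size, int sampled)
	{
		/* Same shape as: ptr = kfence_alloc(...) ?: __do_cache_alloc(...); */
		void *p = mock_kfence_alloc(size, sampled);

		return p ? p : mock_slab_alloc(size);
	}

	int main(void)
	{
		void *a = alloc_with_hook(272, 1); /* served from the "KFENCE pool" */
		void *b = alloc_with_hook(272, 0); /* falls back to the slab path */

		printf("sampled: %p, fallback: %p\n", a, b);
		free(a);
		free(b);
		return 0;
	}

Because kfence_alloc() returns NULL for every non-sampled allocation,
the fallback branch is the common case and the hook stays off the hot
path; with KFENCE disabled it compiles away entirely, as the commit
message notes.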