Date: Tue, 29 Sep 2020 15:38:07 +0200
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: hpa@zytor.com, paulmck@kernel.org, andreyknvl@google.com,
	aryabinin@virtuozzo.com, luto@kernel.org, bp@alien8.de,
	catalin.marinas@arm.com, cl@linux.com, dave.hansen@linux.intel.com,
	rientjes@google.com, dvyukov@google.com, edumazet@google.com,
	gregkh@linuxfoundation.org, hdanton@sina.com, mingo@redhat.com,
	jannh@google.com, Jonathan.Cameron@huawei.com, corbet@lwn.net,
	iamjoonsoo.kim@lge.com, keescook@chromium.org, mark.rutland@arm.com,
	penberg@kernel.org, peterz@infradead.org, sjpark@amazon.com,
	tglx@linutronix.de, vbabka@suse.cz, will@kernel.org, x86@kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org
Subject: [PATCH v4 04/11] mm, kfence: insert KFENCE hooks for SLAB
Message-Id: <20200929133814.2834621-5-elver@google.com>
In-Reply-To: <20200929133814.2834621-1-elver@google.com>
References: <20200929133814.2834621-1-elver@google.com>
X-Mailer: git-send-email 2.28.0.709.gb0816b6eb0-goog

From: Alexander Potapenko <glider@google.com>

Inserts KFENCE hooks into the SLAB allocator. To pass the originally
requested size to KFENCE, add an argument 'orig_size' to slab_alloc*().
The additional argument is required to preserve the originally requested
size for kmalloc() allocations, which use size classes (e.g. an
allocation of 272 bytes will return an object of size 512). Therefore,
kmem_cache::size does not represent the kmalloc-caller's requested size,
and we must introduce 'orig_size' to propagate the originally requested
size to KFENCE.

Without the originally requested size, we would not be able to detect
out-of-bounds accesses to an object placed at the end of a KFENCE object
page whenever the requested size is smaller than the kmalloc size class
it was bucketed into.

When KFENCE is disabled, there is no additional overhead, since the
slab_alloc*() functions are __always_inline.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v3:
* Rewrite patch description to clarify need for 'orig_size'
  [reported by Christopher Lameter].
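
Note (illustrative only, not part of the patch): the size-class rounding
that motivates 'orig_size' can be observed via ksize(). A minimal sketch;
the helper name below is hypothetical:

  #include <linux/printk.h>
  #include <linux/slab.h>

  /*
   * A 272-byte request is served from the kmalloc-512 size class, so
   * ksize() reports 512 even though only 272 bytes were requested.
   * KFENCE needs the original 272 to flag accesses past offset 271.
   */
  static void kmalloc_size_class_demo(void)
  {
  	void *p = kmalloc(272, GFP_KERNEL);

  	if (p) {
  		pr_info("requested 272, usable %zu\n", ksize(p));
  		kfree(p);
  	}
  }

The no-overhead-when-disabled claim relies on the hooks folding to
constants: with !CONFIG_KFENCE, <linux/kfence.h> provides stubs along
the lines of the sketch below, so kfence_alloc() is a compile-time NULL
and the unlikely() branch in the __always_inline slab_alloc*() functions
is eliminated entirely:

  /* Sketch of the !CONFIG_KFENCE stub (see <linux/kfence.h>). */
  static inline void *kfence_alloc(struct kmem_cache *s, size_t size,
  				 gfp_t flags)
  {
  	return NULL;
  }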
---
 mm/slab.c        | 46 ++++++++++++++++++++++++++++++++++------------
 mm/slab_common.c |  6 +++++-
 2 files changed, 39 insertions(+), 13 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 3160dff6fd76..30aba06ae02b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -100,6 +100,7 @@
 #include	<linux/seq_file.h>
 #include	<linux/notifier.h>
 #include	<linux/kallsyms.h>
+#include	<linux/kfence.h>
 #include	<linux/cpu.h>
 #include	<linux/sysctl.h>
 #include	<linux/module.h>
@@ -3206,7 +3207,7 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 }
 
 static __always_inline void *
-slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size,
 		unsigned long caller)
 {
 	unsigned long save_flags;
@@ -3219,6 +3220,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	if (unlikely(!cachep))
 		return NULL;
 
+	ptr = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(ptr))
+		goto out_hooks;
+
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
 
@@ -3251,6 +3256,7 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
 	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && ptr)
 		memset(ptr, 0, cachep->object_size);
 
+out_hooks:
 	slab_post_alloc_hook(cachep, objcg, flags, 1, &ptr);
 	return ptr;
 }
@@ -3288,7 +3294,7 @@ __do_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 #endif /* CONFIG_NUMA */
 
 static __always_inline void *
-slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
+slab_alloc(struct kmem_cache *cachep, gfp_t flags, size_t orig_size, unsigned long caller)
 {
 	unsigned long save_flags;
 	void *objp;
@@ -3299,6 +3305,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	if (unlikely(!cachep))
 		return NULL;
 
+	objp = kfence_alloc(cachep, orig_size, flags);
+	if (unlikely(objp))
+		goto leave;
+
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
 	objp = __do_cache_alloc(cachep, flags);
@@ -3309,6 +3319,7 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
 	if (unlikely(slab_want_init_on_alloc(flags, cachep)) && objp)
 		memset(objp, 0, cachep->object_size);
 
+leave:
 	slab_post_alloc_hook(cachep, objcg, flags, 1, &objp);
 	return objp;
 }
@@ -3414,6 +3425,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	if (kfence_free(objp)) {
+		kmemleak_free_recursive(objp, cachep->flags);
+		return;
+	}
+
 	/* Put the object into the quarantine, don't touch it for now. */
 	if (kasan_slab_free(cachep, objp, _RET_IP_))
 		return;
 
@@ -3479,7 +3495,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
  */
 void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
-	void *ret = slab_alloc(cachep, flags, _RET_IP_);
+	void *ret = slab_alloc(cachep, flags, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc(_RET_IP_, ret,
 			       cachep->object_size, cachep->size, flags);
@@ -3512,7 +3528,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 	local_irq_disable();
 	for (i = 0; i < size; i++) {
-		void *objp = __do_cache_alloc(s, flags);
+		void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
 
 		if (unlikely(!objp))
 			goto error;
@@ -3545,7 +3561,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 {
 	void *ret;
 
-	ret = slab_alloc(cachep, flags, _RET_IP_);
+	ret = slab_alloc(cachep, flags, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(_RET_IP_, ret,
@@ -3571,7 +3587,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
-	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
@@ -3589,7 +3605,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 {
 	void *ret;
 
-	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
+	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
@@ -3650,7 +3666,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 	cachep = kmalloc_slab(size, flags);
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
-	ret = slab_alloc(cachep, flags, caller);
+	ret = slab_alloc(cachep, flags, size, caller);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(caller, ret,
@@ -4138,18 +4154,24 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 			 bool to_user)
 {
 	struct kmem_cache *cachep;
-	unsigned int objnr;
+	unsigned int objnr = 0;
 	unsigned long offset;
+	bool is_kfence = is_kfence_address(ptr);
 
 	ptr = kasan_reset_tag(ptr);
 
 	/* Find and validate object. */
 	cachep = page->slab_cache;
-	objnr = obj_to_index(cachep, page, (void *)ptr);
-	BUG_ON(objnr >= cachep->num);
+	if (!is_kfence) {
+		objnr = obj_to_index(cachep, page, (void *)ptr);
+		BUG_ON(objnr >= cachep->num);
+	}
 
 	/* Find offset within object. */
-	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+	if (is_kfence_address(ptr))
+		offset = ptr - kfence_object_start(ptr);
+	else
+		offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
 
 	/* Allow address range falling entirely within usercopy region. */
 	if (offset >= cachep->useroffset &&
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9ccd5dc13f3..6e35e273681a 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -12,6 +12,7 @@
 #include <linux/memory.h>
 #include <linux/cache.h>
 #include <linux/compiler.h>
+#include <linux/kfence.h>
 #include <linux/module.h>
 #include <linux/cpu.h>
 #include <linux/uaccess.h>
@@ -448,6 +449,9 @@ static int shutdown_cache(struct kmem_cache *s)
 	/* free asan quarantined objects */
 	kasan_cache_shutdown(s);
 
+	if (!kfence_shutdown_cache(s))
+		return -EBUSY;
+
 	if (__kmem_cache_shutdown(s) != 0)
 		return -EBUSY;
 
@@ -1171,7 +1175,7 @@ size_t ksize(const void *objp)
 	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !__kasan_check_read(objp, 1))
 		return 0;
 
-	size = __ksize(objp);
+	size = kfence_ksize(objp) ?: __ksize(objp);
 	/*
 	 * We assume that ksize callers could use whole allocated area,
 	 * so we need to unpoison this area.
-- 
2.28.0.709.gb0816b6eb0-goog