Date: Tue, 29 Sep 2020 15:38:08 +0200
In-Reply-To: <20200929133814.2834621-1-elver@google.com>
Message-Id: <20200929133814.2834621-6-elver@google.com>
References: <20200929133814.2834621-1-elver@google.com>
X-Mailer: git-send-email 2.28.0.709.gb0816b6eb0-goog
Subject: [PATCH v4 05/11] mm, kfence: insert KFENCE hooks for SLUB
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: hpa@zytor.com, paulmck@kernel.org, andreyknvl@google.com,
    aryabinin@virtuozzo.com, luto@kernel.org, bp@alien8.de,
    catalin.marinas@arm.com, cl@linux.com, dave.hansen@linux.intel.com,
    rientjes@google.com, dvyukov@google.com, edumazet@google.com,
    gregkh@linuxfoundation.org, hdanton@sina.com, mingo@redhat.com,
    jannh@google.com, Jonathan.Cameron@huawei.com, corbet@lwn.net,
    iamjoonsoo.kim@lge.com, keescook@chromium.org, mark.rutland@arm.com,
    penberg@kernel.org, peterz@infradead.org, sjpark@amazon.com,
    tglx@linutronix.de, vbabka@suse.cz, will@kernel.org, x86@kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org

From: Alexander Potapenko <glider@google.com>

Inserts KFENCE hooks into the SLUB allocator.

To pass the originally requested size to KFENCE, add an argument
'orig_size' to slab_alloc*(). The additional argument is required to
preserve the requested original size for kmalloc() allocations, which
use size classes (e.g. an allocation of 272 bytes will return an object
of size 512). Therefore, kmem_cache::size does not represent the
kmalloc-caller's requested size, and we must introduce the argument
'orig_size' to propagate the originally requested size to KFENCE.

Without the originally requested size, we would not be able to detect
out-of-bounds accesses for objects placed at the end of a KFENCE object
page if that object is not equal to the kmalloc-size class it was
bucketed into.

When KFENCE is disabled, there is no additional overhead, since
slab_alloc*() functions are __always_inline.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v3:
* Rewrite patch description to clarify need for 'orig_size'
  [reported by Christopher Lameter].
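To make the size-class example in the description concrete, here is a small
usage sketch (illustration only, not part of the diff below; the numbers
mirror the 272/512 example above):

	/*
	 * kmalloc(272) is served from the kmalloc-512 cache, so
	 * kmem_cache::object_size is 512 even though the caller asked for
	 * 272 bytes. For an object that KFENCE places at the end of its
	 * guarded page, the access below can only be reported as
	 * out-of-bounds if KFENCE knows orig_size == 272; with only the
	 * 512-byte size class it would still look in-bounds.
	 */
	char *buf = kmalloc(272, GFP_KERNEL);

	buf[280] = 0;	/* past the requested 272 bytes, but still < 512 */
	kfree(buf);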
---
 mm/slub.c | 72 ++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 53 insertions(+), 19 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d4177aecedf6..5c5a13a7857c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -27,6 +27,7 @@
 #include <linux/ctype.h>
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
+#include <linux/kfence.h>
 #include <linux/memory.h>
 #include <linux/math64.h>
 #include <linux/fault-inject.h>
@@ -1557,6 +1558,11 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *old_tail = *tail ? *tail : *head;
 	int rsize;
 
+	if (is_kfence_address(next)) {
+		slab_free_hook(s, next);
+		return true;
+	}
+
 	/* Head and tail of the reconstructed freelist */
 	*head = NULL;
 	*tail = NULL;
@@ -2660,7 +2666,8 @@ static inline void *get_freelist(struct kmem_cache *s, struct page *page)
  * already disabled (which is the case for bulk allocation).
  */
 static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c,
+			  size_t orig_size)
 {
 	void *freelist;
 	struct page *page;
@@ -2763,7 +2770,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
  * cpu changes by refetching the per cpu area pointer.
  */
 static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c,
+			  size_t orig_size)
 {
 	void *p;
 	unsigned long flags;
@@ -2778,7 +2786,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	c = this_cpu_ptr(s->cpu_slab);
 #endif
 
-	p = ___slab_alloc(s, gfpflags, node, addr, c);
+	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
 	local_irq_restore(flags);
 	return p;
 }
@@ -2805,7 +2813,7 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
  * Otherwise we can simply pick the next object from the lockless free list.
  */
 static __always_inline void *slab_alloc_node(struct kmem_cache *s,
-		gfp_t gfpflags, int node, unsigned long addr)
+		gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
 {
 	void *object;
 	struct kmem_cache_cpu *c;
@@ -2816,6 +2824,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags);
 	if (!s)
 		return NULL;
+
+	object = kfence_alloc(s, orig_size, gfpflags);
+	if (unlikely(object))
+		goto out;
+
 redo:
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
@@ -2853,7 +2866,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	object = c->freelist;
 	page = c->page;
 	if (unlikely(!object || !node_match(page, node))) {
-		object = __slab_alloc(s, gfpflags, node, addr, c);
+		object = __slab_alloc(s, gfpflags, node, addr, c, orig_size);
 		stat(s, ALLOC_SLOWPATH);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
@@ -2889,20 +2902,21 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
 
+out:
 	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
 
 	return object;
 }
 
 static __always_inline void *slab_alloc(struct kmem_cache *s,
-		gfp_t gfpflags, unsigned long addr)
+		gfp_t gfpflags, unsigned long addr, size_t orig_size)
 {
-	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr);
+	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
 {
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
+	void *ret = slab_alloc(s, gfpflags, _RET_IP_, s->object_size);
 
 	trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size,
 				s->size, gfpflags);
@@ -2914,7 +2928,7 @@ EXPORT_SYMBOL(kmem_cache_alloc);
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
+	void *ret = slab_alloc(s, gfpflags, _RET_IP_, size);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
@@ -2925,7 +2939,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
+	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    s->object_size, s->size, gfpflags, node);
@@ -2939,7 +2953,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 				    gfp_t gfpflags,
 				    int node, size_t size)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
+	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, size);
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
@@ -2973,6 +2987,9 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 
 	stat(s, FREE_SLOWPATH);
 
+	if (kfence_free(head))
+		return;
+
 	if (kmem_cache_debug(s) &&
 	    !free_debug_processing(s, page, head, tail, cnt, addr))
 		return;
@@ -3216,6 +3233,13 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 		df->s = cache_from_obj(s, object); /* Support for memcg */
 	}
 
+	if (is_kfence_address(object)) {
+		slab_free_hook(df->s, object);
+		WARN_ON(!kfence_free(object));
+		p[size] = NULL; /* mark object processed */
+		return size;
+	}
+
 	/* Start new detached freelist */
 	df->page = page;
 	set_freepointer(df->s, object, NULL);
@@ -3290,8 +3314,14 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	c = this_cpu_ptr(s->cpu_slab);
 
 	for (i = 0; i < size; i++) {
-		void *object = c->freelist;
+		void *object = kfence_alloc(s, s->object_size, flags);
 
+		if (unlikely(object)) {
+			p[i] = object;
+			continue;
+		}
+
+		object = c->freelist;
 		if (unlikely(!object)) {
 			/*
 			 * We may have removed an object from c->freelist using
@@ -3307,7 +3337,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			 * of re-populating per CPU c->freelist
 			 */
 			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
-					    _RET_IP_, c);
+					    _RET_IP_, c, size);
 			if (unlikely(!p[i]))
 				goto error;
 
@@ -3962,7 +3992,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc(s, flags, _RET_IP_);
+	ret = slab_alloc(s, flags, _RET_IP_, size);
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
@@ -4010,7 +4040,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, flags, node, _RET_IP_);
+	ret = slab_alloc_node(s, flags, node, _RET_IP_, size);
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
@@ -4036,6 +4066,7 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	struct kmem_cache *s;
 	unsigned int offset;
 	size_t object_size;
+	bool is_kfence = is_kfence_address(ptr);
 
 	ptr = kasan_reset_tag(ptr);
 
@@ -4048,10 +4079,13 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 			       to_user, 0, n);
 
 	/* Find offset within object. */
-	offset = (ptr - page_address(page)) % s->size;
+	if (is_kfence)
+		offset = ptr - kfence_object_start(ptr);
+	else
+		offset = (ptr - page_address(page)) % s->size;
 
 	/* Adjust for redzone and reject if within the redzone. */
-	if (kmem_cache_debug_flags(s, SLAB_RED_ZONE)) {
+	if (!is_kfence && kmem_cache_debug_flags(s, SLAB_RED_ZONE)) {
 		if (offset < s->red_left_pad)
 			usercopy_abort("SLUB object in left red zone",
 				       s->name, to_user, offset, n);
@@ -4460,7 +4494,7 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc(s, gfpflags, caller);
+	ret = slab_alloc(s, gfpflags, caller, size);
 
 	/* Honor the call site pointer we received. */
 	trace_kmalloc(caller, ret, size, s->size, gfpflags);
@@ -4491,7 +4525,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, gfpflags, node, caller);
+	ret = slab_alloc_node(s, gfpflags, node, caller, size);
 
 	/* Honor the call site pointer we received. */
 	trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);
-- 
2.28.0.709.gb0816b6eb0-goog
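
Regarding the "no additional overhead when KFENCE is disabled" note in the
description: the hooks used above are expected to compile to constant no-ops
when CONFIG_KFENCE=n, roughly as in the sketch below (simplified; the real
stubs live in <linux/kfence.h>, introduced earlier in this series), so the
added branches in the __always_inline slab_alloc*() fast paths can be
optimized away:

	/* Sketch of the disabled-KFENCE stubs assumed by this patch. */
	static inline bool is_kfence_address(const void *addr) { return false; }
	static inline void *kfence_alloc(struct kmem_cache *s, size_t size,
					 gfp_t flags) { return NULL; }
	static inline bool kfence_free(void *addr) { return false; }
	static inline void *kfence_object_start(const void *addr) { return NULL; }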