Date: Tue, 30 Aug 2022 14:49:05 -0700
In-Reply-To: <20220830214919.53220-1-surenb@google.com>
References: <20220830214919.53220-1-surenb@google.com>
Message-ID: <20220830214919.53220-17-surenb@google.com>
Subject: [RFC PATCH 16/30] mm: enable slab allocation tagging for kmalloc and friends
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz,
	hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	void@manifault.com,
	peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com,
	peterx@redhat.com, david@redhat.com, axboe@kernel.dk,
	mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org,
	changbin.du@intel.com, ytcoode@gmail.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
	bristot@redhat.com, vschneid@redhat.com, cl@linux.com,
	penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com,
	glider@google.com, elver@google.com, dvyukov@google.com,
	shakeelb@google.com, songmuchun@bytedance.com, arnd@arndb.de,
	jbaron@akamai.com, rientjes@google.com, minchan@google.com,
	kaleshsingh@google.com, surenb@google.com, kernel-team@android.com,
	linux-mm@kvack.org, iommu@lists.linux.dev,
	kasan-dev@googlegroups.com, io-uring@vger.kernel.org,
	linux-arch@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, linux-modules@vger.kernel.org,
	linux-kernel@vger.kernel.org

Redefine kmalloc, krealloc, kzalloc, kcalloc, etc. to record allocations
and deallocations done by these functions. Each helper is renamed with a
leading underscore and re-exposed as a macro of the original name that
passes the returned pointer through krealloc_hooks()/kmalloc_hooks(),
which tag the object via slab_tag_add(); the free paths in SLAB, SLOB
and SLUB call slab_tag_dec() to drop the tag.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
---
 include/linux/slab.h | 103 +++++++++++++++++++++++++------------------
 mm/slab.c            |   2 +
 mm/slab_common.c     |  16 +++----
 mm/slob.c            |   2 +
 mm/slub.c            |   2 +
 5 files changed, 75 insertions(+), 50 deletions(-)
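
To illustrate the mechanism (a sketch only -- `sz' and `p' are
placeholder names here, and slab_tag_add()/slab_tag_dec() are the
tagging helpers introduced earlier in this series), a zeroed allocation
now expands roughly as follows:

	void *p = kzalloc(sz, GFP_KERNEL);

	/* kzalloc -> kmalloc -> kmalloc_hooks -> krealloc_hooks(NULL, ...) */
	void *p = ({
		void *_res = _kmalloc(sz, GFP_KERNEL | __GFP_ZERO);
		slab_tag_add(NULL, _res);	/* record the allocation */
		_res;
	});

	kfree(p);	/* the free paths call slab_tag_dec(p) before freeing */

Keeping the real implementation under a `_'-prefixed name and wrapping
it in a macro of the same name means call sites do not change, while
krealloc_hooks() still sees both the old pointer and the new result.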
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 5a198aa02a08..89273be35743 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -191,7 +191,10 @@ int kmem_cache_shrink(struct kmem_cache *s);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
+void * __must_check _krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
+#define krealloc(_p, _size, _flags)			\
+	krealloc_hooks(_p, _krealloc(_p, _size, _flags))
+
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
 size_t __ksize(const void *objp);
@@ -463,6 +466,15 @@ static inline void slab_tag_dec(const void *ptr) {}
 
 #endif
 
+#define krealloc_hooks(_p, _do_alloc)		\
+({						\
+	void *_res = _do_alloc;			\
+	slab_tag_add(_p, _res);			\
+	_res;					\
+})
+
+#define kmalloc_hooks(_do_alloc) krealloc_hooks(NULL, _do_alloc)
+
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
 void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
@@ -541,25 +553,31 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
+extern void *_kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
 									 __alloc_size(1);
+#define kmalloc_order(_size, _flags, _order)	\
+	kmalloc_hooks(_kmalloc_order(_size, _flags, _order))
 #ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
+extern void *_kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 				__assume_page_alignment __alloc_size(1);
 #else
-static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
+static __always_inline __alloc_size(1) void *_kmalloc_order_trace(size_t size, gfp_t flags,
 								 unsigned int order)
 {
-	return kmalloc_order(size, flags, order);
+	return _kmalloc_order(size, flags, order);
 }
 #endif
+#define kmalloc_order_trace(_size, _flags, _order)	\
+	kmalloc_hooks(_kmalloc_order_trace(_size, _flags, _order))
 
-static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
+static __always_inline __alloc_size(1) void *_kmalloc_large(size_t size, gfp_t flags)
 {
 	unsigned int order = get_order(size);
 
-	return kmalloc_order_trace(size, flags, order);
+	return _kmalloc_order_trace(size, flags, order);
 }
+#define kmalloc_large(_size, _flags)	\
+	kmalloc_hooks(_kmalloc_large(_size, _flags))
 
 /**
  * kmalloc - allocate memory
@@ -615,14 +633,14 @@ static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t fl
  *	Try really hard to succeed the allocation but fail
  *	eventually.
  */
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
+static __always_inline __alloc_size(1) void *_kmalloc(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size)) {
 #ifndef CONFIG_SLOB
 		unsigned int index;
 #endif
 		if (size > KMALLOC_MAX_CACHE_SIZE)
-			return kmalloc_large(size, flags);
+			return _kmalloc_large(size, flags);
 
 #ifndef CONFIG_SLOB
 		index = kmalloc_index(size);
@@ -636,8 +654,9 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	}
 	return __kmalloc(size, flags);
 }
+#define kmalloc(_size, _flags) kmalloc_hooks(_kmalloc(_size, _flags))
 
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline __alloc_size(1) void *_kmalloc_node(size_t size, gfp_t flags, int node)
 {
 #ifndef CONFIG_SLOB
 	if (__builtin_constant_p(size) &&
@@ -654,6 +673,8 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
 #endif
 	return __kmalloc_node(size, flags, node);
 }
+#define kmalloc_node(_size, _flags, _node)	\
+	kmalloc_hooks(_kmalloc_node(_size, _flags, _node))
 
 /**
  * kmalloc_array - allocate memory for an array.
@@ -661,16 +682,18 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
  * @size: element size.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *_kmalloc_array(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;
 
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
-		return kmalloc(bytes, flags);
-	return __kmalloc(bytes, flags);
+		return _kmalloc(bytes, flags);
+	return _kmalloc(bytes, flags);
 }
+#define kmalloc_array(_n, _size, _flags)	\
+	kmalloc_hooks(_kmalloc_array(_n, _size, _flags))
 
 /**
  * krealloc_array - reallocate memory for an array.
@@ -679,7 +702,7 @@ static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_
  * @new_size: new size of a single member of the array
  * @flags: the type of memory to allocate (see kmalloc)
  */
-static inline __alloc_size(2, 3) void * __must_check krealloc_array(void *p,
+static inline __alloc_size(2, 3) void * __must_check _krealloc_array(void *p,
 								    size_t new_n,
 								    size_t new_size,
 								    gfp_t flags)
@@ -689,8 +712,10 @@ static inline __alloc_size(2, 3) void * __must_check krealloc_array(void *p,
 	if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
 		return NULL;
 
-	return krealloc(p, bytes, flags);
+	return _krealloc(p, bytes, flags);
 }
+#define krealloc_array(_p, _n, _size, _flags)	\
+	krealloc_hooks(_p, _krealloc_array(_p, _n, _size, _flags))
 
 /**
  * kcalloc - allocate memory for an array. The memory is set to zero.
@@ -698,10 +723,8 @@ static inline __alloc_size(2, 3) void * __must_check krealloc_array(void *p,
  * @size: element size.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flags)
-{
-	return kmalloc_array(n, size, flags | __GFP_ZERO);
-}
+#define kcalloc(_n, _size, _flags)		\
+	kmalloc_array(_n, _size, (_flags)|__GFP_ZERO)
 
 /*
  * kmalloc_track_caller is a special version of kmalloc that records the
@@ -712,10 +735,10 @@ static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flag
  * request comes from.
 */
 extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
-#define kmalloc_track_caller(size, flags) \
-	__kmalloc_track_caller(size, flags, _RET_IP_)
+#define kmalloc_track_caller(size, flags)	\
+	kmalloc_hooks(__kmalloc_track_caller(size, flags, _RET_IP_))
 
-static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
+static inline __alloc_size(1, 2) void *_kmalloc_array_node(size_t n, size_t size, gfp_t flags,
 							  int node)
 {
 	size_t bytes;
@@ -723,26 +746,24 @@ static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size,
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;
 	if (__builtin_constant_p(n) && __builtin_constant_p(size))
-		return kmalloc_node(bytes, flags, node);
+		return _kmalloc_node(bytes, flags, node);
 	return __kmalloc_node(bytes, flags, node);
 }
+#define kmalloc_array_node(_n, _size, _flags, _node)	\
+	kmalloc_hooks(_kmalloc_array_node(_n, _size, _flags, _node))
 
-static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
-{
-	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
-}
-
+#define kcalloc_node(_n, _size, _flags, _node)	\
+	kmalloc_array_node(_n, _size, (_flags)|__GFP_ZERO, _node)
 
 #ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 					 unsigned long caller) __alloc_size(1);
-#define kmalloc_node_track_caller(size, flags, node) \
-	__kmalloc_node_track_caller(size, flags, node, \
-				    _RET_IP_)
+#define kmalloc_node_track_caller(size, flags, node)	\
+	kmalloc_hooks(__kmalloc_node_track_caller(size, flags, node, _RET_IP_))
 
 #else /* CONFIG_NUMA */
 
-#define kmalloc_node_track_caller(size, flags, node) \
+#define kmalloc_node_track_caller(size, flags, node)	\
 	kmalloc_track_caller(size, flags)
 
 #endif /* CONFIG_NUMA */
@@ -750,20 +771,16 @@ extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
 /*
  * Shortcuts
  */
-static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
-{
-	return kmem_cache_alloc(k, flags | __GFP_ZERO);
-}
+#define kmem_cache_zalloc(_k, _flags)	\
+	kmem_cache_alloc(_k, (_flags)|__GFP_ZERO)
 
 /**
  * kzalloc - allocate memory. The memory is set to zero.
  * @size: how many bytes of memory are required.
  * @flags: the type of memory to allocate (see kmalloc).
  */
-static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags)
-{
-	return kmalloc(size, flags | __GFP_ZERO);
-}
+#define kzalloc(_size, _flags)			\
+	kmalloc(_size, (_flags)|__GFP_ZERO)
 
 /**
  * kzalloc_node - allocate zeroed memory from a particular memory node.
@@ -771,10 +788,12 @@ static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags)
  * @flags: the type of memory to allocate (see kmalloc).
  * @node: memory node from which to allocate
  */
-static inline __alloc_size(1) void *kzalloc_node(size_t size, gfp_t flags, int node)
+static inline __alloc_size(1) void *_kzalloc_node(size_t size, gfp_t flags, int node)
 {
-	return kmalloc_node(size, flags | __GFP_ZERO, node);
+	return _kmalloc_node(size, flags | __GFP_ZERO, node);
 }
+#define kzalloc_node(_size, _flags, _node)	\
+	kmalloc_hooks(_kzalloc_node(_size, _flags, _node))
 
 extern void *kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1);
 static inline __alloc_size(1) void *kvmalloc(size_t size, gfp_t flags)
diff --git a/mm/slab.c b/mm/slab.c
index ba97aeef7ec1..db344de3b260 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3402,6 +3402,7 @@ static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 
 	if (is_kfence_address(objp)) {
 		kmemleak_free_recursive(objp, cachep->flags);
+		slab_tag_dec(objp);
 		__kfence_free(objp);
 		return;
 	}
@@ -3433,6 +3434,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 
 	check_irq_off();
 	kmemleak_free_recursive(objp, cachep->flags);
+	slab_tag_dec(objp);
 	objp = cache_free_debugcheck(cachep, objp, caller);
 
 	/*
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 272eda62ecaa..7b6473db5ab4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -938,7 +938,7 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
  * directly to the page allocator. We use __GFP_COMP, because we will need to
  * know the allocation order to free the pages properly in kfree.
  */
-void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
+void *_kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 {
 	void *ret = NULL;
 	struct page *page;
@@ -958,16 +958,16 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	kmemleak_alloc(ret, size, 1, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order);
+EXPORT_SYMBOL(_kmalloc_order);
 
 #ifdef CONFIG_TRACING
-void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
+void *_kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 {
-	void *ret = kmalloc_order(size, flags, order);
+	void *ret = _kmalloc_order(size, flags, order);
 	trace_kmalloc(_RET_IP_, ret, NULL, size, PAGE_SIZE << order, flags);
 	return ret;
 }
-EXPORT_SYMBOL(kmalloc_order_trace);
+EXPORT_SYMBOL(_kmalloc_order_trace);
 #endif
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
@@ -1187,7 +1187,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 		return (void *)p;
 	}
 
-	ret = kmalloc_track_caller(new_size, flags);
+	ret = __kmalloc_track_caller(new_size, flags, _RET_IP_);
 	if (ret && p) {
 		/* Disable KASAN checks as the object's redzone is accessed. */
 		kasan_disable_current();
@@ -1211,7 +1211,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
  *
  * Return: pointer to the allocated memory or %NULL in case of error
  */
-void *krealloc(const void *p, size_t new_size, gfp_t flags)
+void *_krealloc(const void *p, size_t new_size, gfp_t flags)
 {
 	void *ret;
 
@@ -1226,7 +1226,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
 
 	return ret;
 }
-EXPORT_SYMBOL(krealloc);
+EXPORT_SYMBOL(_krealloc);
 
 /**
  * kfree_sensitive - Clear sensitive information in memory before freeing
diff --git a/mm/slob.c b/mm/slob.c
index 2bd4f476c340..23b49f6c9c8f 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -554,6 +554,7 @@ void kfree(const void *block)
 	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return;
 	kmemleak_free(block);
+	slab_tag_dec(block);
 
 	sp = virt_to_folio(block);
 	if (folio_test_slab(sp)) {
@@ -680,6 +681,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
+	slab_tag_dec(b);
 	trace_kmem_cache_free(_RET_IP_, b, c->name);
 	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
diff --git a/mm/slub.c b/mm/slub.c
index 80199d5ac7c9..caf752087ad6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1715,6 +1715,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
+	slab_tag_dec(x);
 	kasan_kfree_large(x);
 }
 
@@ -1722,6 +1723,7 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
 					   void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
+	slab_tag_dec(x);
 	debug_check_no_locks_freed(x, s->object_size);

-- 
2.37.2.672.g94769d06f0-goog