From: Vasily Averin
Date: Sat, 21 May 2022 21:36:54 +0300
Subject: [PATCH v4] tracing: add 'accounted' entry into output of allocation tracepoints
To: Andrew Morton
Cc: kernel@openvz.org, linux-kernel@vger.kernel.org, Steven Rostedt,
 Ingo Molnar, linux-mm@kvack.org, Shakeel Butt, Roman Gushchin,
 Vlastimil Babka, Matthew Wilcox, Joonsoo Kim, David Rientjes,
 Pekka Enberg, Christoph Lameter, Michal Hocko,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Muchun Song
In-Reply-To: <0c73ce5c-3625-6187-820e-1277e168b3bc@openvz.org>

Slab caches marked with SLAB_ACCOUNT force accounting of every
allocation from such a cache, even when the __GFP_ACCOUNT flag is not
passed. Unfortunately, this flag is currently not visible in ftrace
output, which makes it difficult to analyze accounted allocations.

This patch adds a boolean "accounted" entry to the trace output and
sets it to 'true' for calls that pass the __GFP_ACCOUNT flag and for
allocations from caches marked with SLAB_ACCOUNT.

Signed-off-by: Vasily Averin
Acked-by: Shakeel Butt
---
v4:
 1) replaced "allocated" with "accounted" in the patch description
 2) added "Acked-by" from Shakeel
 3) re-addressed to akpm@

v3:
 1) reworked the kmem_cache_alloc* tracepoints once again: added a
    struct kmem_cache argument to the existing templates, thanks to
    Matthew Wilcox
 2) updated the corresponding trace_* calls
 3) added a boolean "allocated" entry to the trace output, thanks to Roman
 4) updated the patch subject and description

v2:
 1) handled kmem_cache_alloc_node(), thanks to Shakeel
 2) reworked the kmem_cache_alloc* tracepoints to use cachep instead of
    the current cachep->*size parameters.
    NB: the kmem_cache_alloc_node tracepoint in SLOB cannot use cachep,
    so it was replaced by kmalloc_node.
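For illustration, an event emitted by the patched kmalloc tracepoint
would look roughly as follows; the call site, pointer, and sizes below
are made-up values, only the field layout comes from the TP_printk
format string in this patch:

  kmalloc: call_site=example_fn+0x4a/0xd0 ptr=00000000a1b2c3d4 bytes_req=256 bytes_alloc=256 gfp_flags=GFP_KERNEL|__GFP_ACCOUNT accounted=true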
---
 include/trace/events/kmem.h | 38 +++++++++++++++++++++++--------------
 mm/slab.c                   | 10 +++++-----
 mm/slab_common.c            |  9 ++++-----
 mm/slob.c                   |  8 ++++----
 mm/slub.c                   | 20 +++++++++----------
 5 files changed, 47 insertions(+), 38 deletions(-)

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 71c141804222..5bfeb6f276f1 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -13,11 +13,12 @@ DECLARE_EVENT_CLASS(kmem_alloc,
 
 	TP_PROTO(unsigned long call_site,
 		 const void *ptr,
+		 struct kmem_cache *s,
 		 size_t bytes_req,
 		 size_t bytes_alloc,
 		 gfp_t gfp_flags),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags),
+	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags),
 
 	TP_STRUCT__entry(
 		__field(	unsigned long,	call_site	)
@@ -25,6 +26,7 @@ DECLARE_EVENT_CLASS(kmem_alloc,
 		__field(	size_t,		bytes_req	)
 		__field(	size_t,		bytes_alloc	)
 		__field(	unsigned long,	gfp_flags	)
+		__field(	bool,		accounted	)
 	),
 
 	TP_fast_assign(
@@ -33,42 +35,46 @@ DECLARE_EVENT_CLASS(kmem_alloc,
 		__entry->bytes_req	= bytes_req;
 		__entry->bytes_alloc	= bytes_alloc;
 		__entry->gfp_flags	= (__force unsigned long)gfp_flags;
+		__entry->accounted	= (gfp_flags & __GFP_ACCOUNT) ||
+					  (s && s->flags & SLAB_ACCOUNT);
 	),
 
-	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s",
+	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s accounted=%s",
 		(void *)__entry->call_site,
 		__entry->ptr,
 		__entry->bytes_req,
 		__entry->bytes_alloc,
-		show_gfp_flags(__entry->gfp_flags))
+		show_gfp_flags(__entry->gfp_flags),
+		__entry->accounted ? "true" : "false")
 );
 
 DEFINE_EVENT(kmem_alloc, kmalloc,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
+	TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s,
 		 size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags)
+	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags)
 );
 
 DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
 
-	TP_PROTO(unsigned long call_site, const void *ptr,
+	TP_PROTO(unsigned long call_site, const void *ptr, struct kmem_cache *s,
 		 size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags)
+	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags)
 );
 
 DECLARE_EVENT_CLASS(kmem_alloc_node,
 
 	TP_PROTO(unsigned long call_site,
 		 const void *ptr,
+		 struct kmem_cache *s,
 		 size_t bytes_req,
 		 size_t bytes_alloc,
 		 gfp_t gfp_flags,
 		 int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node),
+	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node),
 
 	TP_STRUCT__entry(
 		__field(	unsigned long,	call_site	)
@@ -77,6 +83,7 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		__field(	size_t,		bytes_alloc	)
 		__field(	unsigned long,	gfp_flags	)
 		__field(	int,		node		)
+		__field(	bool,		accounted	)
 	),
 
 	TP_fast_assign(
@@ -86,33 +93,36 @@ DECLARE_EVENT_CLASS(kmem_alloc_node,
 		__entry->bytes_alloc	= bytes_alloc;
 		__entry->gfp_flags	= (__force unsigned long)gfp_flags;
 		__entry->node		= node;
+		__entry->accounted	= (gfp_flags & __GFP_ACCOUNT) ||
+					  (s && s->flags & SLAB_ACCOUNT);
 	),
 
-	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d",
+	TP_printk("call_site=%pS ptr=%p bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d accounted=%s",
 		(void *)__entry->call_site,
 		__entry->ptr,
 		__entry->bytes_req,
 		__entry->bytes_alloc,
 		show_gfp_flags(__entry->gfp_flags),
-		__entry->node)
+		__entry->node,
+		__entry->accounted ? "true" : "false")
 );
 
 DEFINE_EVENT(kmem_alloc_node, kmalloc_node,
 
 	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+		 struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 DEFINE_EVENT(kmem_alloc_node, kmem_cache_alloc_node,
 
 	TP_PROTO(unsigned long call_site, const void *ptr,
-		 size_t bytes_req, size_t bytes_alloc,
+		 struct kmem_cache *s, size_t bytes_req, size_t bytes_alloc,
 		 gfp_t gfp_flags, int node),
 
-	TP_ARGS(call_site, ptr, bytes_req, bytes_alloc, gfp_flags, node)
+	TP_ARGS(call_site, ptr, s, bytes_req, bytes_alloc, gfp_flags, node)
 );
 
 TRACE_EVENT(kfree,
diff --git a/mm/slab.c b/mm/slab.c
index 0edb474edef1..e5802445c7d6 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3492,7 +3492,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru,
 {
 	void *ret = slab_alloc(cachep, lru, flags, cachep->object_size, _RET_IP_);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret,
+	trace_kmem_cache_alloc(_RET_IP_, ret, cachep,
 			       cachep->object_size, cachep->size, flags);
 
 	return ret;
@@ -3581,7 +3581,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 	ret = slab_alloc(cachep, NULL, flags, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(_RET_IP_, ret,
+	trace_kmalloc(_RET_IP_, ret, cachep,
 		      size, cachep->size, flags);
 	return ret;
 }
@@ -3606,7 +3606,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *ret = slab_alloc_node(cachep, flags, nodeid, cachep->object_size, _RET_IP_);
 
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
+	trace_kmem_cache_alloc_node(_RET_IP_, ret, cachep,
 				    cachep->object_size, cachep->size,
 				    flags, nodeid);
 
@@ -3625,7 +3625,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 	ret = slab_alloc_node(cachep, flags, nodeid, size, _RET_IP_);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(_RET_IP_, ret, cachep,
 			   size, cachep->size,
 			   flags, nodeid);
 	return ret;
@@ -3708,7 +3708,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 	ret = slab_alloc(cachep, NULL, flags, size, caller);
 
 	ret = kasan_kmalloc(cachep, ret, size, flags);
-	trace_kmalloc(caller, ret,
+	trace_kmalloc(caller, ret, cachep,
 		      size, cachep->size, flags);
 
 	return ret;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2b3206a2c3b5..a345e8600e00 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -25,13 +25,12 @@
 #include <asm/page.h>
 #include <linux/memcontrol.h>
 
-#define CREATE_TRACE_POINTS
-#include <trace/events/kmem.h>
-
 #include "internal.h"
-
 #include "slab.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/events/kmem.h>
+
 enum slab_state slab_state;
 LIST_HEAD(slab_caches);
 DEFINE_MUTEX(slab_mutex);
@@ -967,7 +966,7 @@ EXPORT_SYMBOL(kmalloc_order);
 void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 {
 	void *ret = kmalloc_order(size, flags, order);
-	trace_kmalloc(_RET_IP_, ret, size, PAGE_SIZE << order, flags);
+	trace_kmalloc(_RET_IP_, ret, NULL, size, PAGE_SIZE << order, flags);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order_trace);
diff --git a/mm/slob.c b/mm/slob.c
index 40ea6e2d4ccd..dbefa0da0dfc 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -505,7 +505,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		*m = size;
 		ret = (void *)m + minalign;
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(caller, ret, NULL,
 				   size, size + minalign, gfp, node);
 	} else {
 		unsigned int order = get_order(size);
@@ -514,7 +514,7 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(caller, ret, NULL,
 				   size, PAGE_SIZE << order, gfp, node);
 	}
 
@@ -610,12 +610,12 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 
 	if (c->size < PAGE_SIZE) {
 		b = slob_alloc(c->size, flags, c->align, node, 0);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(_RET_IP_, b, NULL, c->object_size,
 					    SLOB_UNITS(c->size) * SLOB_UNIT,
 					    flags, node);
 	} else {
 		b = slob_new_pages(flags, get_order(c->size), node);
-		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
+		trace_kmem_cache_alloc_node(_RET_IP_, b, NULL, c->object_size,
 					    PAGE_SIZE << get_order(c->size),
 					    flags, node);
 	}
diff --git a/mm/slub.c b/mm/slub.c
index ed5c2c03a47a..9b10591646dd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3231,7 +3231,7 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
 {
 	void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size);
 
-	trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size,
+	trace_kmem_cache_alloc(_RET_IP_, ret, s, s->object_size,
 			       s->size, gfpflags);
 
 	return ret;
@@ -3254,7 +3254,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_lru);
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, NULL, gfpflags, _RET_IP_, size);
-	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
+	trace_kmalloc(_RET_IP_, ret, s, size, s->size, gfpflags);
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
@@ -3266,7 +3266,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size);
 
-	trace_kmem_cache_alloc_node(_RET_IP_, ret,
+	trace_kmem_cache_alloc_node(_RET_IP_, ret, s,
 				    s->object_size, s->size, gfpflags, node);
 
 	return ret;
@@ -3280,7 +3280,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
 	void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(_RET_IP_, ret,
+	trace_kmalloc_node(_RET_IP_, ret, s,
 			   size, s->size, gfpflags, node);
 
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
@@ -4409,7 +4409,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	ret = slab_alloc(s, NULL, flags, _RET_IP_, size);
 
-	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
+	trace_kmalloc(_RET_IP_, ret, s, size, s->size, flags);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
 
@@ -4443,7 +4443,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
 		ret = kmalloc_large_node(size, flags, node);
 
-		trace_kmalloc_node(_RET_IP_, ret,
+		trace_kmalloc_node(_RET_IP_, ret, NULL,
 				   size, PAGE_SIZE << get_order(size),
 				   flags, node);
 
@@ -4457,7 +4457,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	ret = slab_alloc_node(s, NULL, flags, node, _RET_IP_, size);
 
-	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
+	trace_kmalloc_node(_RET_IP_, ret, s, size, s->size, flags, node);
 
 	ret = kasan_kmalloc(s, ret, size, flags);
 
@@ -4916,7 +4916,7 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 	ret = slab_alloc(s, NULL, gfpflags, caller, size);
 
 	/* Honor the call site pointer we received. */
-	trace_kmalloc(caller, ret, size, s->size, gfpflags);
+	trace_kmalloc(caller, ret, s, size, s->size, gfpflags);
 
 	return ret;
 }
@@ -4932,7 +4932,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) {
 		ret = kmalloc_large_node(size, gfpflags, node);
 
-		trace_kmalloc_node(caller, ret,
+		trace_kmalloc_node(caller, ret, NULL,
 				   size, PAGE_SIZE << get_order(size),
 				   gfpflags, node);
 
@@ -4947,7 +4947,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	ret = slab_alloc_node(s, NULL, gfpflags, node, caller, size);
 
 	/* Honor the call site pointer we received. */
-	trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);
+	trace_kmalloc_node(caller, ret, s, size, s->size, gfpflags, node);
 
 	return ret;
 }
-- 
2.31.1
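
The new entry is also usable from the ftrace event filter interface.
A minimal sketch, assuming a kernel with this patch applied and tracefs
mounted at /sys/kernel/tracing (the mount point and the numeric form of
the comparison against the bool field are assumptions, not part of the
patch):

  # keep only accounted allocations in the trace buffer
  cd /sys/kernel/tracing
  echo 'accounted == 1' > events/kmem/kmalloc/filter
  echo 'accounted == 1' > events/kmem/kmem_cache_alloc/filter
  echo 1 > events/kmem/kmalloc/enable
  echo 1 > events/kmem/kmem_cache_alloc/enable
  cat trace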