Date: Thu, 30 Jun 2022 23:38:26 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Feng Tang
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, dave.hansen@intel.com
Subject: Re: [RFC PATCH] mm/slub: enable debugging memory wasting of kmalloc
References: <20220630014715.73330-1-feng.tang@intel.com>
In-Reply-To: <20220630014715.73330-1-feng.tang@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 30, 2022 at 09:47:15AM +0800, Feng Tang wrote:
> kmalloc's API family is critical for mm, with one shortcoming that
> its object size is fixed to be power of 2. When a user requests memory
> for '2^n + 1' bytes, actually 2^(n+1) bytes will be allocated, so
> in the worst case around 50% of the memory space is wasted.
> 
> We've met a kernel boot OOM panic, and from the dumped slab info:
> 
> [   26.062145] kmalloc-2k            814056KB     814056KB
> 
> From debugging we found a huge number of 'struct iova_magazine',
> whose size is 1032 bytes (1024 + 8), so each allocation will waste
> 1016 bytes. Though the issue was solved by providing the right (bigger)
> amount of RAM, it is still better to optimize the size (either use
> a kmalloc-friendly size or create a dedicated slab for it).
> 
> And from the lkml archive, there was another crash-kernel OOM case [1]
> back in 2019, which seems to be related to a similar slab waste
> situation, as the log is similar:
> 
> [    4.332648] iommu: Adding device 0000:20:02.0 to group 16
> [    4.338946] swapper/0 invoked oom-killer: gfp_mask=0x6040c0(GFP_KERNEL|__GFP_COMP), nodemask=(null), order=0, oom_score_adj=0
> ...
> [    4.857565] kmalloc-2048           59164KB      59164KB
> 
> The crash kernel only has 256M of memory, and 59M is pretty big here.
> 
> So add a way to track each kmalloc's memory waste info, and leverage
> the existing SLUB debug framework to show its call stack info, so
> that users can evaluate the waste situation, identify some hot spots
> and optimize accordingly, for a better utilization of memory.
> 
> The waste info is integrated into the existing interface:
> /sys/kernel/debug/slab/kmalloc-xx/alloc_traces, one example of
> 'kmalloc-4k' after boot is:
> 
>  126 ixgbe_alloc_q_vector+0xa5/0x4a0 [ixgbe] waste: 233856/1856 age=1493302/1493830/1494358 pid=1284 cpus=32 nodes=1
>         __slab_alloc.isra.86+0x52/0x80
>         __kmalloc_node+0x143/0x350
>         ixgbe_alloc_q_vector+0xa5/0x4a0 [ixgbe]
>         ixgbe_init_interrupt_scheme+0x1a6/0x730 [ixgbe]
>         ixgbe_probe+0xc8e/0x10d0 [ixgbe]
>         local_pci_probe+0x42/0x80
>         work_for_cpu_fn+0x13/0x20
>         process_one_work+0x1c5/0x390
>         worker_thread+0x1b9/0x360
>         kthread+0xe6/0x110
>         ret_from_fork+0x1f/0x30
> 
> which means that in the 'kmalloc-4k' slab there are 126 requests of
> 2240 bytes which each got a 4KB space (wasting 1856 bytes each
> and 233856 bytes in total). And when the system starts some real
> workload like multiple docker instances, the waste is more
> severe.
> 
> [1]. https://lkml.org/lkml/2019/8/12/266
> 
> Signed-off-by: Feng Tang
> ---
> Note:
> * this is based on linux-next tree with tag next-20220628

So this makes use of the fact that orig_size differs from s->object_size
when allocated from kmalloc, and for non-kmalloc caches it doesn't track
waste because s->object_size == orig_size. Am I following?

And then it has the overhead of a 'waste' field for every non-kmalloc
object, because the track is saved per object. Also the field is not used
at free. (Maybe that would be okay as it's only for debugging; just
noting.)
> 
>  mm/slub.c | 45 ++++++++++++++++++++++++++++++---------------
>  1 file changed, 30 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 26b00951aad1..bc4f9d4fb1e2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -271,6 +271,7 @@ struct track {
>  #endif
>  	int cpu;		/* Was running on cpu */
>  	int pid;		/* Pid context */
> +	unsigned long waste;	/* memory waste for a kmalloc-ed object */
>  	unsigned long when;	/* When did the operation occur */
>  };
> 
> @@ -747,6 +748,7 @@ static inline depot_stack_handle_t set_track_prepare(void)
> 
>  static void set_track_update(struct kmem_cache *s, void *object,
>  			     enum track_item alloc, unsigned long addr,
> +			     unsigned long waste,
>  			     depot_stack_handle_t handle)
>  {
>  	struct track *p = get_track(s, object, alloc);
> @@ -758,14 +760,16 @@ static void set_track_update(struct kmem_cache *s, void *object,
>  	p->cpu = smp_processor_id();
>  	p->pid = current->pid;
>  	p->when = jiffies;
> +	p->waste = waste;
>  }
> 
>  static __always_inline void set_track(struct kmem_cache *s, void *object,
> -				      enum track_item alloc, unsigned long addr)
> +				      enum track_item alloc, unsigned long addr,
> +				      unsigned long waste)
>  {
>  	depot_stack_handle_t handle = set_track_prepare();
> 
> -	set_track_update(s, object, alloc, addr, handle);
> +	set_track_update(s, object, alloc, addr, waste, handle);
>  }
> 
>  static void init_tracking(struct kmem_cache *s, void *object)
> @@ -1325,7 +1329,9 @@ static inline int alloc_consistency_checks(struct kmem_cache *s,
> 
>  static noinline int alloc_debug_processing(struct kmem_cache *s,
>  					struct slab *slab,
> -					void *object, unsigned long addr)
> +					void *object, unsigned long addr,
> +					unsigned long waste
> +					)
>  {
>  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
>  		if (!alloc_consistency_checks(s, slab, object))
> @@ -1334,7 +1340,7 @@ static noinline int alloc_debug_processing(struct kmem_cache *s,
> 
>  	/* Success perform special debug activities for allocs */
>  	if (s->flags & SLAB_STORE_USER)
> -		set_track(s, object, TRACK_ALLOC, addr);
> +		set_track(s, object, TRACK_ALLOC, addr, waste);
>  	trace(s, slab, object, 1);
>  	init_object(s, object, SLUB_RED_ACTIVE);
>  	return 1;
> @@ -1398,6 +1404,7 @@ static noinline int free_debug_processing(
>  	int ret = 0;
>  	depot_stack_handle_t handle = 0;
> 
> +	/* TODO: feng: we can slab->waste -= track?) or in set_track */
>  	if (s->flags & SLAB_STORE_USER)
>  		handle = set_track_prepare();
> 
> @@ -1418,7 +1425,7 @@ static noinline int free_debug_processing(
>  	}
> 
>  	if (s->flags & SLAB_STORE_USER)
> -		set_track_update(s, object, TRACK_FREE, addr, handle);
> +		set_track_update(s, object, TRACK_FREE, addr, 0, handle);
>  	trace(s, slab, object, 0);
>  	/* Freepointer not overwritten by init_object(), SLAB_POISON moved it */
>  	init_object(s, object, SLUB_RED_INACTIVE);
> @@ -2905,7 +2912,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct slab *slab)
>   * already disabled (which is the case for bulk allocation).
>   */
>  static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> -			  unsigned long addr, struct kmem_cache_cpu *c)
> +			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
>  {
>  	void *freelist;
>  	struct slab *slab;
> @@ -3048,7 +3055,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  check_new_slab:
> 
>  	if (kmem_cache_debug(s)) {
> -		if (!alloc_debug_processing(s, slab, freelist, addr)) {
> +		if (!alloc_debug_processing(s, slab, freelist, addr, s->object_size - orig_size)) {
>  			/* Slab failed checks. Next slab needed */
>  			goto new_slab;
>  		} else {
> @@ -3102,7 +3109,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>   * pointer.
>   */
>  static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> -			  unsigned long addr, struct kmem_cache_cpu *c)
> +			  unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size)
>  {
>  	void *p;
> 
> @@ -3115,7 +3122,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  	c = slub_get_cpu_ptr(s->cpu_slab);
>  #endif
> 
> -	p = ___slab_alloc(s, gfpflags, node, addr, c);
> +	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
>  #ifdef CONFIG_PREEMPT_COUNT
>  	slub_put_cpu_ptr(s->cpu_slab);
>  #endif
> @@ -3206,7 +3213,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
>  	 */
>  	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
>  	    unlikely(!object || !slab || !node_match(slab, node))) {
> -		object = __slab_alloc(s, gfpflags, node, addr, c);
> +		object = __slab_alloc(s, gfpflags, node, addr, c, orig_size);
>  	} else {
>  		void *next_object = get_freepointer_safe(s, object);
> 
> @@ -3709,7 +3716,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  			 * of re-populating per CPU c->freelist
>  			 */
>  			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
> -					    _RET_IP_, c);
> +					    _RET_IP_, c, size);

This looks wrong. `size` here is the size of the array. Maybe just
s->object_size instead of size?
>  			if (unlikely(!p[i]))
>  				goto error;
> 
> @@ -5068,6 +5075,7 @@ struct location {
>  	depot_stack_handle_t handle;
>  	unsigned long count;
>  	unsigned long addr;
> +	unsigned long waste;
>  	long long sum_time;
>  	long min_time;
>  	long max_time;
> @@ -5138,11 +5146,12 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  		if (pos == end)
>  			break;
> 
> -		caddr = t->loc[pos].addr;
> -		chandle = t->loc[pos].handle;
> -		if ((track->addr == caddr) && (handle == chandle)) {
> +		l = &t->loc[pos];
> +		caddr = l->addr;
> +		chandle = l->handle;
> +		if ((track->addr == caddr) && (handle == chandle) &&
> +			(track->waste == l->waste)) {
> 
> -			l = &t->loc[pos];
>  			l->count++;
>  			if (track->when) {
>  				l->sum_time += age;
> @@ -5190,6 +5199,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  		l->min_pid = track->pid;
>  		l->max_pid = track->pid;
>  		l->handle = handle;
> +		l->waste = track->waste;

I think this may be fooled when there are different waste values from the
same caller (i.e. when kmalloc_track_caller() is used), because the array
is sorted by caller address, but not sorted by waste.

And writing this I noticed that it already can be fooled now :)
It's also not sorted by handle.

>  		cpumask_clear(to_cpumask(l->cpus));
>  		cpumask_set_cpu(track->cpu, to_cpumask(l->cpus));
>  		nodes_clear(l->nodes);
> @@ -6078,6 +6088,11 @@ static int slab_debugfs_show(struct seq_file *seq, void *v)
>  	else
>  		seq_puts(seq, "");
> 
> +
> +	if (l->waste)
> +		seq_printf(seq, " waste: %lu/%lu",

Maybe waste=%lu/%lu like the others?

> +			   l->count * l->waste, l->waste);
> +
>  	if (l->sum_time != l->min_time) {
>  		seq_printf(seq, " age=%ld/%llu/%ld",
>  			   l->min_time, div_u64(l->sum_time, l->count),
> -- 
> 2.27.0
> 

-- 
Thanks,
Hyeonggon