From: Muchun Song
Date: Wed, 12 Jan 2022 12:48:00 +0800
Subject: Re: [PATCH v5 10/16] mm: list_lru: allocate list_lru_one only when needed
To: Roman Gushchin
Cc: Matthew Wilcox, Andrew Morton, Johannes Weiner, Michal Hocko,
 Vladimir Davydov, Shakeel Butt, Yang Shi, Alex Shi, Wei Yang,
 Dave Chinner, trond.myklebust@hammerspace.com, anna.schumaker@netapp.com,
 jaegeuk@kernel.org, chao@kernel.org, Kari Argillander, linux-fsdevel,
 LKML, Linux Memory Management List, linux-nfs@vger.kernel.org,
 Qi Zheng, Xiongchun duan, Fam Zheng, Muchun Song
References: <20211220085649.8196-1-songmuchun@bytedance.com> <20211220085649.8196-11-songmuchun@bytedance.com>
List-ID: linux-nfs@vger.kernel.org

On Wed, Jan 12, 2022 at 4:00 AM Roman Gushchin wrote:
>
> On Mon, Dec 20, 2021 at 04:56:43PM +0800, Muchun Song wrote:
> > In our server, we found a suspected memory leak problem. The kmalloc-32
> > consumes more than 6GB of memory. Other kmem_caches consume less than
> > 2GB memory.
> >
> > After our in-depth analysis, the memory consumption of the kmalloc-32
> > slab cache is caused by list_lru_one allocation.
> >
> > crash> p memcg_nr_cache_ids
> > memcg_nr_cache_ids = $2 = 24574
> >
> > memcg_nr_cache_ids is very large, and the memory consumption of each
> > list_lru can be calculated with the following formula.
> >
> >   num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)
> >
> > There are 4 numa nodes in our system, so each list_lru consumes ~3MB.
> >
> > crash> list super_blocks | wc -l
> > 952
> >
> > Every mount registers 2 list lrus, one for the inode cache and one for
> > the dentry cache. There are 952 super_blocks, so the total memory is
> > 952 * 2 * 3 MB (~5.6GB). But the number of memory cgroups is less than
> > 500, so I guess more than 12286 containers have been deployed on this
> > machine (I do not know why there are so many containers; it may be a
> > user bug, or the user really wants to do that), and memcg_nr_cache_ids
> > has not been reduced to a suitable value. This can waste a lot of
> > memory.
>
> But on the other side you increase the size of struct list_lru_per_memcg,
> so if the number of cgroups is close to memcg_nr_cache_ids, we can
> actually waste more memory.

The saving comes from the fact that we currently allocate scope for every
memcg to be tracked on every superblock instantiated in the system,
regardless of whether that superblock is even accessible to that memcg.
In theory, increasing struct list_lru_per_memcg is not significant; most
of the savings come from decreasing the number of allocations of struct
list_lru_per_memcg.

> I'm not saying the change is not worth it, but would be
> nice to add some real-world numbers.

OK. I will do a test.

> Or it's all irrelevant and is done as a preparation to the conversion to
> xarray?

Right. It's also a preparation for the conversion to xarray.

> If so, please, make it clear.

Will do.

Thanks.
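For reference, the arithmetic quoted in the patch description above can be
checked with a quick sketch. The constants are the figures reported in the
thread (4 NUMA nodes, memcg_nr_cache_ids = 24574, 952 super_blocks, 2 lrus
per superblock); the 32-byte cost per slot follows from the kmalloc-32
size class named in the commit message:

```python
# Sanity check of the memory figures quoted in the thread.
# Assumes each per-memcg list_lru_one slot costs one kmalloc-32
# object (32 bytes), as stated in the patch description.

NUM_NUMA_NODES = 4          # nodes on the reported machine
MEMCG_NR_CACHE_IDS = 24574  # from "crash> p memcg_nr_cache_ids"
KMALLOC_32 = 32             # bytes per list_lru_one allocation
NR_SUPER_BLOCKS = 952       # from "crash> list super_blocks | wc -l"
LRUS_PER_SB = 2             # one for inodes, one for dentries

# num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)
per_list_lru = NUM_NUMA_NODES * MEMCG_NR_CACHE_IDS * KMALLOC_32
total = NR_SUPER_BLOCKS * LRUS_PER_SB * per_list_lru

print(f"per list_lru: {per_list_lru / 2**20:.1f} MiB")  # ~3.0 MiB
print(f"total:        {total / 2**30:.1f} GiB")         # ~5.6 GiB
```

This reproduces both numbers in the report: ~3MB per list_lru and ~5.6GB
across all 952 superblocks.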