From: Roman Gushchin
To: Andrew Morton
CC: Johannes Weiner, Shakeel Butt, Vladimir Davydov, Waiman Long, Roman Gushchin
Subject: [PATCH v6 00/10] mm: reparent slab memory on cgroup removal
Date: Tue, 4 Jun 2019 19:44:44 -0700
Message-ID: <20190605024454.1393507-1-guro@fb.com>
X-Mailing-List: linux-kernel@vger.kernel.org

# Why do we need this?

We've noticed that the number of dying cgroups has been steadily growing on most of our hosts in production. The investigation revealed an issue in the userspace memory reclaim code [1], in the accounting of kernel stacks [2], and also the main reason: slab objects.

The underlying problem is quite simple: any page charged to a cgroup holds a reference to it, so the cgroup can't be released until all charged pages are gone. If a slab object is actively used by other cgroups, it won't be reclaimed, and will prevent the origin cgroup from being released.
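The pinning described above can be illustrated with a minimal userspace model. This is not kernel code: the struct and function names are invented for illustration, and in reality the per-page references go through css refcounting.

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the problem: every charged page holds a reference
 * to its memcg, so a memcg that is "dying" after rmdir cannot actually
 * be freed until the last charged page is uncharged. */
struct memcg_model {
    int refcount;   /* one reference per charged page */
    bool dying;     /* set on cgroup removal (rmdir) */
    bool freed;
};

static void maybe_free(struct memcg_model *cg)
{
    if (cg->dying && cg->refcount == 0)
        cg->freed = true;       /* only now can the memcg be released */
}

static void charge_page(struct memcg_model *cg)
{
    cg->refcount++;             /* the page now pins the cgroup */
}

static void uncharge_page(struct memcg_model *cg)
{
    cg->refcount--;
    maybe_free(cg);
}

static void remove_cgroup(struct memcg_model *cg)
{
    cg->dying = true;           /* gone from the userspace point of view... */
    maybe_free(cg);             /* ...but freed only if nothing pins it */
}
```

If a slab object charged to the cgroup stays in use by another cgroup, the uncharge never happens and the dying memcg accumulates, which is exactly the leak described above.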
Slab objects, and first of all the VFS cache, are shared between cgroups that use the same underlying filesystem and, even more importantly, between multiple generations of the same workload. So if something runs periodically, each time in a new cgroup (as systemd services do), we accumulate multiple dying cgroups.

Strictly speaking, the page cache is no different here, but there is a key distinction: for dying cgroups we disable protection and apply extra pressure on their LRUs, and those LRUs contain all charged pages. My experiments show that with kernel memory accounting disabled, the number of dying cgroups stabilizes at a relatively small value (~100, depending on memory pressure and cgroup creation rate), while with kernel memory accounting enabled it grows fairly steadily into the thousands.

Memory cgroups are quite complex and large objects (mostly due to percpu stats), so this leads to noticeable memory losses. The memory occupied by dying cgroups is measured in hundreds of megabytes; I've even seen a host with more than 100GB of memory wasted on dying cgroups. This degrades performance over uptime and generally limits the usability of cgroups.

My previous attempt [3] to fix the problem by applying extra pressure on slab shrinker lists caused regressions with xfs and ext4 and has been reverted [4]. The following attempts to find the right balance [5, 6] were not successful. So instead of searching for a balance that may not exist, let's reparent the accounted slabs to the parent cgroup on cgroup removal.

# Implementation approach

There is, however, a significant obstacle to reparenting slab memory: there is no list of charged pages. Some of them are on shrinker lists, but not all. Introducing a new list is really not an option. Fortunately, there is a way forward: every slab page has a stable pointer to the corresponding kmem_cache. So the idea is to reparent kmem_caches instead of slab pages.
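The reparenting idea can be sketched in a small userspace model. The structs here are heavily simplified stand-ins for the kernel's (only `memcg_from_slab_page` is a name actually used by the series; everything else is illustrative), and the real kernel code additionally needs RCU and memory barriers around the pointer update:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model: a kmem_cache holds a single reference to its memcg,
 * and slab pages reach the memcg via page->slab_cache->memcg. */
struct memcg_model {
    struct memcg_model *parent;
    int kmem_refs;                  /* references held by child kmem_caches */
};

struct kmem_cache_model {
    struct memcg_model *memcg;      /* one ref, not one per slab page */
};

struct page_model {
    struct kmem_cache_model *slab_cache;  /* stable pointer, set at alloc */
};

/* No page->mem_cgroup for slab pages: resolve via the cache instead. */
static struct memcg_model *memcg_from_slab_page(struct page_model *page)
{
    return page->slab_cache->memcg;
}

/* On cgroup removal, move the cache's single reference to the parent;
 * every slab page of the cache is reparented by this one pointer swap. */
static void reparent_kmem_cache(struct kmem_cache_model *s)
{
    struct memcg_model *old = s->memcg;

    s->memcg = old->parent;     /* the kernel needs RCU/barriers here */
    old->kmem_refs--;
    s->memcg->kmem_refs++;
}
```

The point of the model: because all charged pages of a cache are reached through one pointer, reparenting is O(number of caches) rather than O(number of charged pages), which is why no list of charged pages is needed.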
This is actually simpler and cheaper, but requires some underlying changes:

1) Make kmem_caches hold a single reference to the memory cgroup, instead of a separate reference per slab page.
2) Stop setting the page->mem_cgroup pointer for memcg slab pages and use the page->kmem_cache->memcg indirection instead. It's used only on slab page release, so it shouldn't be a big issue.
3) Introduce a refcounter for non-root slab caches. It's required to be able to destroy kmem_caches when they become empty and to release the associated memory cgroup.

There is a bonus: currently we release empty kmem_caches only on cgroup removal, while all others wait for the memory cgroup itself to be released. These refactorings allow kmem_caches to be released as soon as they become inactive and free.

Some additional implementation details are provided in the corresponding commit messages.

# Results

Below is the average number of dying cgroups on two groups of our production hosts. They run a web-frontend workload under moderate memory pressure. As we can see, with kernel memory reparenting the number stabilizes in the 60s range, while with the original version it grows almost linearly and shows no sign of plateauing. The difference in slab and percpu usage between the patched and unpatched versions also grows linearly; in 7 days it exceeded 200MB.
day            0    1    2    3    4    5    6    7
original      56  362  628  752 1070 1250 1490 1560
patched       23   46   51   55   60   57   67   69
mem diff(MB)  22   74  123  152  164  182  214  241

# History

v6:
  1) split the biggest patches into parts to make the review easier
  2) changed the synchronization around the dying flag
  3) sysfs entry removal on deactivation is back
  4) got rid of a redundant rcu wait on kmem_cache release
  5) fixed getting the memcg pointer in mem_cgroup_from_kmem()
  6) fixed a missed smp_rmb()
  7) removed a redundant CONFIG_SLOB check
  8) some renames and cosmetic fixes

v5:
  1) fixed a compilation warning around a missing kmemcg_queue_cache_shutdown()
  2) s/rcu_read_lock()/rcu_read_unlock() in memcg_kmem_get_cache()

v4:
  1) removed an excessive memcg != parent check in memcg_deactivate_kmem_caches()
  2) fixed rcu_read_lock() usage in memcg_charge_slab()
  3) fixed the synchronization around the dying flag in kmemcg_queue_cache_shutdown()
  4) refreshed the test results data
  5) reworked the PageTail() checks in memcg_from_slab_page()
  6) added some comments in multiple places

v3:
  1) reworked the memcg kmem_cache search on the allocation path
  2) fixed the /proc/kpagecgroup interface

v2:
  1) switched to a percpu kmem_cache refcounter
  2) a reference to the kmem_cache is held during the allocation
  3) slab stats are fixed for the !MEMCG case (and the refactoring is separated into a standalone patch)
  4) kmem_cache reparenting is performed from the deactivation context

v1: https://lkml.org/lkml/2019/4/17/1095

# Links

[1]: commit 68600f623d69 ("mm: don't miss the last page because of round-off error")
[2]: commit 9b6f7e163cd0 ("mm: rework memcg kernel stack accounting")
[3]: commit 172b06c32b94 ("mm: slowly shrink slabs with a relatively small number of objects")
[4]: commit a9a238e83fbb ("Revert "mm: slowly shrink slabs with a relatively small number of objects"")
[5]: https://lkml.org/lkml/2019/1/28/1865
[6]: https://marc.info/?l=linux-mm&m=155064763626437&w=2

Roman Gushchin (10):
  mm: add missing smp read barrier on getting memcg kmem_cache pointer
  mm: postpone kmem_cache memcg pointer initialization to memcg_link_cache()
  mm: rename slab delayed deactivation functions and fields
  mm: generalize postponed non-root kmem_cache deactivation
  mm: introduce __memcg_kmem_uncharge_memcg()
  mm: unify SLAB and SLUB page accounting
  mm: synchronize access to kmem_cache dying flag using a spinlock
  mm: rework non-root kmem_cache lifecycle management
  mm: stop setting page->mem_cgroup pointer for slab pages
  mm: reparent slab memory on cgroup removal

 include/linux/memcontrol.h |  10 +++
 include/linux/slab.h       |  13 +--
 mm/list_lru.c              |  11 ++-
 mm/memcontrol.c            | 102 +++++++++++++++-------
 mm/slab.c                  |  25 ++----
 mm/slab.h                  | 140 +++++++++++++++++++++---------
 mm/slab_common.c           | 170 ++++++++++++++++++++++---------------
 mm/slub.c                  |  24 +-----
 8 files changed, 312 insertions(+), 183 deletions(-)

-- 
2.20.1