From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton, Tejun Heo
Cc: Michal Hocko, Roman Gushchin, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    kernel-team@fb.com
Subject: [PATCH 0/7]: mm: memcontrol: switch to rstat
Date: Tue, 2 Feb 2021 13:47:39 -0500
Message-Id: <20210202184746.119084-1-hannes@cmpxchg.org>

This series converts memcg stats tracking to the streamlined rstat
infrastructure provided by the cgroup core code. rstat is already used
by the CPU controller and the IO controller. This change is motivated
by recent accuracy problems in memcg's custom stats code, as well as
the benefits of sharing common infrastructure with other controllers.

The current memcg implementation does batched tree aggregation on the
write side: local stat changes are cached in per-cpu counters, which
are then propagated upward in batches when a threshold (32 pages) is
exceeded. This is cheap, but the error introduced by the lazy upward
propagation adds up: 32 pages times CPUs times cgroups in the subtree.
With 4k pages, 64 CPUs and 100 cgroups in a subtree, for example, a
single counter can be off by as much as 32 * 4096 * 64 * 100 bytes,
i.e. about 800M. We've had complaints from service owners that the
stats do not reliably track and react to allocation behavior as
expected, sometimes swallowing the results of entire test applications.
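To make that failure mode concrete, here is a minimal userspace C
sketch of such write-side batching (a toy model for illustration only -
NR_CPUS, all names and the exact flush condition are made up here, and
the real code in mm/memcontrol.c also propagates batches up the cgroup
hierarchy):

	/*
	 * Toy model: per-cpu deltas are folded into the shared counter
	 * only once they exceed a batch threshold, so up to THRESHOLD
	 * units per CPU (per cgroup, in the real tree) stay invisible
	 * to readers at any given time.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define NR_CPUS   4
	#define THRESHOLD 32	/* pages, matching the text above */

	struct counter {
		long shared;		/* the value a stat reader sees */
		long cached[NR_CPUS];	/* per-cpu deltas not yet propagated */
	};

	static void counter_mod(struct counter *c, int cpu, long delta)
	{
		long x = c->cached[cpu] + delta;

		if (labs(x) > THRESHOLD) {
			c->shared += x;	/* batch full: propagate upward */
			x = 0;
		}
		c->cached[cpu] = x;
	}

	int main(void)
	{
		struct counter c = { 0 };

		/* Charge one full batch on every CPU: all of it stays cached. */
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			counter_mod(&c, cpu, 32);

		printf("visible=%ld actual=%d\n", c.shared, NR_CPUS * 32);
		return 0;
	}

Running this prints visible=0 actual=128: a full batch on every CPU is
invisible to readers, and that per-cpu, per-cgroup slack is what shows
up as stats lagging behind actual allocation behavior.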
The original memcg stat implementation used to do tree aggregation
exclusively on the read side: local stats would only ever be tracked
in per-cpu counters, and a memory.stat read would iterate the entire
subtree and sum those counters up. This didn't keep up with the times:

- Cgroup trees are much bigger now. We switched to lazily-freed
  cgroups, where deleted groups would hang around until their remaining
  page cache has been reclaimed. This can result in large subtrees that
  are expensive to walk, while most of the groups are idle and their
  statistics don't change much anymore.

- Automated monitoring increased. With the proliferation of userspace
  oom killing, proactive reclaim, and higher-resolution logging of
  workload trends in general, top-level stat files are polled at least
  once a second in many deployments.

- The lifetime of cgroups got shorter. Where most cgroup setups in the
  past would have a few large policy-oriented cgroups for everything
  running on the system, newer cgroup deployments tend to create one
  group per application - which gets deleted again as the processes
  exit. An aggregation scheme that doesn't retain child data inside the
  parents loses event history of the subtree.

Rstat addresses all three of those concerns through intelligent,
persistent read-side aggregation. As statistics change at the local
level, rstat tracks - on a per-cpu basis - only those parts of a
subtree that have changes pending and require aggregation. The actual
aggregation occurs on the colder read side - which can now skip over
(potentially large) numbers of recently idle cgroups.
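For contrast with the batching sketch above, here is the same kind of
toy model for rstat-style read-side aggregation (again a hedged
userspace sketch: the real implementation in kernel/cgroup/rstat.c
keeps per-cpu lists of updated cgroups behind cgroup_rstat_updated()
and cgroup_rstat_flush(); all names and layouts below are invented for
illustration):

	#include <stdio.h>
	#include <stdbool.h>

	struct group {
		const char *name;
		struct group *parent;
		struct group *child;	/* first child */
		struct group *sibling;	/* next sibling */
		long local;		/* deltas not yet aggregated */
		long total;		/* aggregated value, incl. descendants */
		bool pending;		/* subtree has deltas to collect */
	};

	/* Write side: bump the local counter, mark the path to the root. */
	static void group_update(struct group *g, long delta)
	{
		g->local += delta;
		while (g && !g->pending) {
			g->pending = true;
			g = g->parent;
		}
	}

	/* Read side: collect deltas, skip subtrees with nothing pending. */
	static long group_flush(struct group *g)
	{
		long delta;

		if (!g->pending)
			return 0;
		g->pending = false;
		delta = g->local;
		g->local = 0;
		for (struct group *c = g->child; c; c = c->sibling)
			delta += group_flush(c);
		g->total += delta;
		return delta;
	}

	int main(void)
	{
		struct group root  = { .name = "root" };
		struct group build = { .name = "build", .parent = &root };
		struct group idle  = { .name = "idle",  .parent = &root };

		root.child = &build;
		build.sibling = &idle;

		group_update(&build, 32);	/* charge 32 pages in "build" */
		group_flush(&root);	/* visits root and build, skips idle */

		printf("root=%ld build=%ld idle=%ld\n",
		       root.total, build.total, idle.total);
		return 0;
	}

The write side stays cheap - marking is O(tree depth) the first time a
quiet group changes and O(1) afterwards - while the read side returns
early from every subtree whose pending flag is clear, which is what
makes polling mostly-idle trees inexpensive.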
---

A kernel build test confirms that the cost is comparable. Two kernels
are built simultaneously in a nested tree with several idle siblings:

root - kernelbuild - one - two - three - four - build-a (defconfig, make -j16)
                                             `- build-b (defconfig, make -j16)
                                             `- idle-1
                                             `- ...
                                             `- idle-9

During the builds, kernelbuild/memory.stat is read once a second. A
perf diff shows that the change in cycle distribution is minimal. Top
10 kernel symbols:

    0.09%  +0.08%  [kernel.kallsyms]  [k] __mod_memcg_lruvec_state
    0.00%  +0.06%  [kernel.kallsyms]  [k] cgroup_rstat_updated
    0.08%  -0.05%  [kernel.kallsyms]  [k] __mod_memcg_state.part.0
    0.16%  -0.04%  [kernel.kallsyms]  [k] release_pages
    0.00%  +0.03%  [kernel.kallsyms]  [k] __count_memcg_events
    0.01%  +0.03%  [kernel.kallsyms]  [k] mem_cgroup_charge_statistics.constprop.0
    0.10%  -0.02%  [kernel.kallsyms]  [k] get_mem_cgroup_from_mm
    0.05%  -0.02%  [kernel.kallsyms]  [k] mem_cgroup_update_lru_size
    0.57%  +0.01%  [kernel.kallsyms]  [k] asm_exc_page_fault

---

And of course, the on-demand aggregated stats are now fully accurate
again:

    $ grep -e nr_inactive_file /proc/vmstat | awk '{print($1,$2*4096)}'; \
      grep -e inactive_file /sys/fs/cgroup/memory.stat

    vanilla:                              patched:
    nr_inactive_file 1574105088           nr_inactive_file 1027801088
       inactive_file 1577410560              inactive_file 1027801088

---

 block/blk-cgroup.c         |  14 +-
 include/linux/memcontrol.h | 119 ++++++----
 kernel/cgroup/cgroup.c     |  34 +++--
 kernel/cgroup/rstat.c      |  62 +++++----
 mm/memcontrol.c            | 320 +++++++++++++++++++++----------------
 5 files changed, 266 insertions(+), 283 deletions(-)

Based on v5.11-rc5-mm1.