From: Yosry Ahmed
Date: Tue, 10 Oct 2023 15:21:47 -0700
Subject: Re: [PATCH v2 3/5] mm: memcg: make stats flushing threshold per-memcg
To: Shakeel Butt
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin,
	Muchun Song, Ivan Babrou, Tejun Heo, Michal Koutný, Waiman Long,
	kernel-team@cloudflare.com, Wei Xu, Greg Thelen,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org

On Tue, Oct 10, 2023 at 2:02 PM Yosry Ahmed wrote:
>
> On Tue, Oct 10, 2023 at 1:45 PM Shakeel Butt wrote:
> >
> > On Mon, Oct 9, 2023 at 8:21 PM Yosry Ahmed wrote:
> > >
> > > A global counter for the magnitude of memcg stats updates is
> > > maintained on the memcg side to avoid invoking rstat flushes when
> > > the pending updates are not significant. This avoids unnecessary
> > > flushes, which are not very cheap even if there isn't a lot of
> > > stats to flush. It also avoids unnecessary lock contention on the
> > > underlying global rstat lock.
> > >
> > > Make this threshold per-memcg. The same scheme is followed: percpu
> > > (now also per-memcg) counters are incremented in the update path,
> > > and only propagated to per-memcg atomics when they exceed a
> > > certain threshold.
> > >
> > > This provides two benefits:
> > > (a) On large machines with a lot of memcgs, the global threshold
> > > can be reached relatively fast, so guarding the underlying lock
> > > becomes less effective. Making the threshold per-memcg avoids
> > > this.
> > >
> > > (b) Having a global threshold makes it hard to do subtree flushes,
> > > as we cannot reset the global counter except for a full flush.
> > > Per-memcg counters remove this blocker to doing subtree flushes,
> > > which helps avoid unnecessary work when the stats of a small
> > > subtree are needed.
> > >
> > > Nothing is free, of course. This comes at a cost:
> > > (a) A new per-cpu counter per memcg, consuming NR_CPUS *
> > > NR_MEMCGS * 4 bytes. The extra memory usage is insignificant.
> > >
> > > (b) More work on the update side, although in the common case it
> > > will only be percpu counter updates. The amount of work scales
> > > with the number of ancestors (i.e. tree depth). This is not a new
> > > concept; adding a cgroup to the rstat tree involves a parent loop,
> > > and so does charging. Testing results below show no significant
> > > regressions.
> > >
> > > (c) The error margin in the stats for the system as a whole
> > > increases from NR_CPUS * MEMCG_CHARGE_BATCH to NR_CPUS *
> > > MEMCG_CHARGE_BATCH * NR_MEMCGS. This is probably fine because we
> > > have a similar per-memcg error in charges coming from percpu
> > > stocks, and we have a periodic flusher that makes sure we always
> > > flush all the stats every 2s anyway.
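
For readers following along, the scheme described above boils down to
something like the following kernel-style sketch. The names, the
structure layout, and the threshold constant are made up for
illustration and are not the actual patch code:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/percpu.h>

#define UPDATE_THRESHOLD 64	/* in the spirit of MEMCG_CHARGE_BATCH */

struct memcg_sketch {
	struct memcg_sketch *parent;
	int __percpu *pending;		/* hot, cpu-local, update path */
	atomic64_t stats_updates;	/* shared, read by the flush side */
};

/* Called on every stat update; cheap in the common case. */
static void memcg_sketch_note_update(struct memcg_sketch *memcg, int nr)
{
	int pending = this_cpu_add_return(*memcg->pending, abs(nr));

	if (pending < UPDATE_THRESHOLD)
		return;			/* common case: percpu-only work */

	/* Fold the batch into this memcg's atomic and every ancestor's,
	 * so that a subtree flush can consult any level. This is the
	 * parent loop whose cost grows with tree depth. */
	this_cpu_sub(*memcg->pending, pending);
	for (; memcg; memcg = memcg->parent)
		atomic64_add(pending, &memcg->stats_updates);
}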
> > >
> > > This patch was tested to make sure no significant regressions are
> > > introduced on the update path as follows. The following benchmarks
> > > were run in a cgroup that is 4 levels deep
> > > (/sys/fs/cgroup/a/b/c/d), which is deeper than a usual setup:
> > >
> > > (a) neper [1] with 1000 flows and 100 threads (single machine).
> > > The values in the table are the average of server and client
> > > throughputs in mbps after 30 iterations, each running for 30s:
> > >
> > >                         tcp_rr        tcp_stream
> > > Base                    9504218.56    357366.84
> > > Patched                 9656205.68    356978.39
> > > Delta                   +1.6%         -0.1%
> > > Standard Deviation      0.95%         1.03%
> > >
> > > An increase in the performance of tcp_rr doesn't really make
> > > sense, but it's probably in the noise. The same tests were run
> > > with 1 flow and 1 thread, but the throughput was too noisy to draw
> > > any conclusions (the averages did not show regressions
> > > nonetheless).
> > >
> > > Looking at perf for one iteration of the above test,
> > > __mod_memcg_state() (which is where memcg_rstat_updated() is
> > > called) does not show up at all without this patch, but it shows
> > > up with this patch as 1.06% for tcp_rr and 0.36% for tcp_stream.
> > >
> > > (b) "stress-ng --vm 0 -t 1m --times --perf". I don't understand
> > > stress-ng very well, so I am not sure that's the best way to test
> > > this, but it spawns 384 workers and spits out a lot of metrics,
> > > which is nice :) I picked a few that seem relevant to the stats
> > > update path. I also included cache misses, as this patch
> > > introduces more atomics that may bounce between cpu caches:
> > >
> > > Metric                 Base          Patched       Delta
> > > Cache Misses           3.394 B/sec   3.433 B/sec   +1.14%
> > > Cache L1D Read         0.148 T/sec   0.154 T/sec   +4.05%
> > > Cache L1D Read Miss    20.430 B/sec  21.820 B/sec  +6.8%
> > > Page Faults Total      4.304 M/sec   4.535 M/sec   +5.4%
> > > Page Faults Minor      4.304 M/sec   4.535 M/sec   +5.4%
> > > Page Faults Major      18.794 /sec   0.000 /sec
> > > Kmalloc                0.153 M/sec   0.152 M/sec   -0.65%
> > > Kfree                  0.152 M/sec   0.153 M/sec   +0.65%
> > > MM Page Alloc          4.640 M/sec   4.898 M/sec   +5.56%
> > > MM Page Free           4.639 M/sec   4.897 M/sec   +5.56%
> > > Lock Contention Begin  0.362 M/sec   0.479 M/sec   +32.32%
> > > Lock Contention End    0.362 M/sec   0.479 M/sec   +32.32%
> > > page-cache add         238.057 /sec  0.000 /sec
> > > page-cache del         6.265 /sec    6.267 /sec    -0.03%
> > >
> > > This is only a single run in each case. I am not sure what to make
> > > of most of these numbers, but they mostly seem to be in the noise
> > > (some better, some worse). The lock contention numbers are
> > > interesting. I am not sure if higher is better or worse here. No
> > > new locks or lock sections are introduced by this patch either
> > > way.
> > >
> > > Looking at perf, __mod_memcg_state() shows up as 0.00% with and
> > > without this patch. This is suspicious, but I verified while
> > > stress-ng was running that all the threads are in the right
> > > cgroup.
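
As an aside on why the benchmarks above all target the update path:
the flush side only stands to benefit, since a reader can now skip
the flush, and the contention on the global rstat lock, whenever the
accumulated updates of the subtree it cares about are insignificant.
As a rough companion to the sketch above (same made-up names, not the
actual patch code):

/* Only take the global rstat lock when the subtree's pending
 * updates are significant. */
static bool memcg_sketch_should_flush(struct memcg_sketch *memcg)
{
	return atomic64_read(&memcg->stats_updates) >
	       num_online_cpus() * UPDATE_THRESHOLD;
}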
These are the > > > numbers from 30 runs (+ is good): > > > > > > LABEL | MEAN | MEDIAN | STDDEV = | > > > ------------------------------+-------------+-------------+----------= --- > > > page_fault1_per_process_ops | | | = | > > > (A) base | 265207.738 | 262941.000 | 12112.379= | > > > (B) patched | 249249.191 | 248781.000 | 8767.457 = | > > > | -6.02% | -5.39% | = | > > > page_fault1_per_thread_ops | | | = | > > > (A) base | 241618.484 | 240209.000 | 10162.207= | > > > (B) patched | 229820.671 | 229108.000 | 7506.582 = | > > > | -4.88% | -4.62% | = | > > > page_fault1_scalability | | | > > > (A) base | 0.03545 | 0.035705 | 0.0015837= | > > > (B) patched | 0.029952 | 0.029957 | 0.0013551= | > > > | -9.29% | -9.35% | = | > > > > This much regression is not acceptable. > > > > In addition, I ran netperf with the same 4 level hierarchy as you have > > run and I am seeing ~11% regression. > > Interesting, I thought neper and netperf should be similar. Let me try > to reproduce this. > > Thanks for testing! > > > > > More specifically on a machine with 44 CPUs (HT disabled ixion machine)= : > > > > # for server > > $ netserver -6 > > > > # 22 instances of netperf clients > > $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K > > > > (averaged over 4 runs) > > > > base (next-20231009): 33081 MBPS > > patched: 29267 MBPS > > > > So, this series is not acceptable unless this regression is resolved. I tried this on a machine with 72 cpus (also ixion), running both netserver and netperf in /sys/fs/cgroup/a/b/c/d as follows: # echo "+memory" > /sys/fs/cgroup/cgroup.subtree_control # mkdir /sys/fs/cgroup/a # echo "+memory" > /sys/fs/cgroup/a/cgroup.subtree_control # mkdir /sys/fs/cgroup/a/b # echo "+memory" > /sys/fs/cgroup/a/b/cgroup.subtree_control # mkdir /sys/fs/cgroup/a/b/c # echo "+memory" > /sys/fs/cgroup/a/b/c/cgroup.subtree_control # mkdir /sys/fs/cgroup/a/b/c/d # echo 0 > /sys/fs/cgroup/a/b/c/d/cgroup.procs # ./netserver -6 # echo 0 > /sys/fs/cgroup/a/b/c/d/cgroup.procs # for i in $(seq 10); do ./netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K; done Base: 540000 262144 10240 60.00 54613.89 540000 262144 10240 60.00 54940.52 540000 262144 10240 60.00 55168.86 540000 262144 10240 60.00 54800.15 540000 262144 10240 60.00 54452.55 540000 262144 10240 60.00 54501.60 540000 262144 10240 60.00 55036.11 540000 262144 10240 60.00 52018.91 540000 262144 10240 60.00 54877.78 540000 262144 10240 60.00 55342.38 Average: 54575.275 Patched: 540000 262144 10240 60.00 53694.86 540000 262144 10240 60.00 54807.68 540000 262144 10240 60.00 54782.89 540000 262144 10240 60.00 51404.91 540000 262144 10240 60.00 55024.00 540000 262144 10240 60.00 54725.84 540000 262144 10240 60.00 51400.40 540000 262144 10240 60.00 54212.63 540000 262144 10240 60.00 51951.47 540000 262144 10240 60.00 51978.27 Average: 53398.295 That's ~2% regression. Did I do anything incorrectly?