Date: Wed, 26 Feb 2020 17:46:32 +0100
From: Michal Koutný
To: Johannes Weiner
Cc: Andrew Morton, Roman Gushchin, Michal Hocko, Tejun Heo,
 linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH v2 2/3] mm: memcontrol: clean up and document effective
 low/min calculations
Message-ID: <20200226164632.GL27066@blackbody.suse.cz>
References: <20191219200718.15696-1-hannes@cmpxchg.org>
 <20191219200718.15696-3-hannes@cmpxchg.org>
 <20200221171024.GA23476@blackbody.suse.cz>
 <20200225184014.GC10257@cmpxchg.org>
In-Reply-To: <20200225184014.GC10257@cmpxchg.org>

On Tue, Feb 25, 2020 at 01:40:14PM -0500, Johannes Weiner wrote:
> Hm, this example doesn't change with my patch because there is no
> "floating" protection that gets distributed among the siblings.
Maybe the behavior changed even earlier and the example became obsolete.

> In my testing with the above parameters, the equilibrium still comes
> out to roughly this distribution.
I'm attaching my test (scaled down 10x) and I'm getting these results:

> /sys/fs/cgroup/test.slice/memory.current:838750208
> /sys/fs/cgroup/test.slice/pressure.service/memory.current:616972288
> /sys/fs/cgroup/test.slice/test-A.slice/memory.current:221782016
> /sys/fs/cgroup/test.slice/test-A.slice/B.service/memory.current:123428864
> /sys/fs/cgroup/test.slice/test-A.slice/C.service/memory.current:93495296
> /sys/fs/cgroup/test.slice/test-A.slice/D.service/memory.current:4702208
> /sys/fs/cgroup/test.slice/test-A.slice/E.service/memory.current:155648

(I'm running this on 5.6.0-rc2 plus the first two patches of your series.)

That is IMO closer to my simulation (1.16:0.84) than to the example's
prediction (1.3:0.6).

> It's just to illustrate the pressure weight, not to reflect each
> factor that can influence the equilibrium.
But it's good to have some idea of the equilibrium when configuring
the values.

> I think it still has value to gain understanding of how it works, no?
Alas, the example confused me enough that I had to write a simulation to
get a grasp of it :-) And even running the actual code now, I'd say the
values in the original example are only one of the possible equilibria,
and definitely not reachable from the stated initial conditions.

> > > @@ -6272,12 +6262,63 @@ struct cgroup_subsys memory_cgrp_subsys = {
> > >  * for next usage. This part is intentionally racy, but it's ok,
> > >  * as memory.low is a best-effort mechanism.
> > Although it's a different issue, since this updates the docs I'm
> > mentioning it -- we treat memory.min the same, i.e. it's subject to the
> > same race; however, it's not meant to be best effort. I didn't look into
> > the outcomes of potential misaccounting, but the comment seems to miss
> > the impact on memory.min protection.
>
> Yeah I think we can delete that bit.
Erm, which part? Leave the racy behavior undocumented, or drop the note
that it applies to both memory.low and memory.min?

> I believe we cleared this up in the parallel thread, but just in case:
> reclaim can happen due to a memory.max set lower in the
> tree. memory.low propagation is always relative from the reclaim
> scope, not the system-wide root cgroup.
Clear now.

Michal

[Attachment: run.sh]

#!/bin/bash

CGPATH=/sys/fs/cgroup/test.slice

function stop_test() {
  systemctl stop test.slice
}

trap stop_test SIGINT

cat >/etc/systemd/system/test.slice <
[heredoc body lost]
/etc/systemd/system/test-A.slice <
[rest of the attachment truncated in the archive]
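
For context on the numbers compared above: the "example prediction
(1.3:0.6)" is what the effective-low formula documented by the patch
yields for the example hierarchy in memcontrol.c (parent elow = 2G;
children B low=3G, C low=1G, D low=0, E low=10G; B, C and D each using
2G, E idle). The sketch below is not Michal's attached simulation; it is
a minimal illustration assuming elow = min(low, parent_elow * low_usage
/ siblings_low_usage) with low_usage = min(usage, low), a paraphrase of
the patch's documentation. The helper names and structure are
hypothetical.

    GB = 1 << 30
    parent_elow = 2 * GB

    # (memory.low, memory.current) per child, mirroring the
    # memcontrol.c example the thread refers to.
    children = {
        "B": (3 * GB, 2 * GB),
        "C": (1 * GB, 2 * GB),
        "D": (0,      2 * GB),
        "E": (10 * GB, 0),
    }

    def low_usage(low, usage):
        # The part of a child's usage that claims protection.
        return min(usage, low)

    siblings_low_usage = sum(low_usage(l, u) for l, u in children.values())

    for name, (low, usage) in children.items():
        # The parent's effective protection is split among siblings in
        # proportion to their claims, and a child is never protected
        # beyond its own memory.low.
        claim = low_usage(low, usage)
        elow = (min(low, parent_elow * claim // siblings_low_usage)
                if siblings_low_usage else 0)
        print(f"{name}: elow = {elow / GB:.2f}G")

    # Output: B: elow = 1.33G, C: elow = 0.67G,
    #         D: elow = 0.00G, E: elow = 0.00G

Because the siblings' combined claim (3G) exceeds parent_elow (2G),
every claim is scaled down proportionally; the open question in the
thread is which usage distribution this converges to once reclaim and
the workloads' allocations reach a steady state.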