Date: Thu, 8 Oct 2020 17:09:17 -0400
From: Johannes Weiner
To: Shakeel Butt
Cc: Michal Hocko, Roman Gushchin, Yang Shi, Greg Thelen, David Rientjes,
    Michal Koutný, Andrew Morton, Linux MM, Cgroups, LKML, Andrea Righi,
    SeongJae Park
Subject: Re: [PATCH] memcg: introduce per-memcg reclaim interface
Message-ID: <20201008210917.GC163830@cmpxchg.org>
References: <20200909215752.1725525-1-shakeelb@google.com>
 <20200928210216.GA378894@cmpxchg.org>
 <20200929150444.GG2277@dhcp22.suse.cz>
 <20200929215341.GA408059@cmpxchg.org>
 <20201001143149.GA493631@cmpxchg.org>
 <20201008145336.GA163830@cmpxchg.org>

On Thu, Oct 08, 2020 at 08:55:57AM -0700, Shakeel Butt wrote:
> On Thu, Oct 8, 2020 at 7:55 AM Johannes Weiner wrote:
> >
> > On Tue, Oct 06, 2020 at 09:55:43AM -0700, Shakeel Butt wrote:
> > > On Thu, Oct 1, 2020 at 7:33 AM Johannes Weiner wrote:
> > > >
> > > [snip]
> > > > > > So instead of asking users for a target size whose suitability
> > > > > > heavily depends on the kernel's LRU implementation, the
> > > > > > readahead code, the IO device's capability and general load,
> > > > > > why not directly ask the user for a pressure level that the
> > > > > > workload is comfortable with and which captures all of the
> > > > > > above factors implicitly? Then let the kernel do this feedback
> > > > > > loop from a per-cgroup worker.
> > > > >
> > > > > I am assuming here that by pressure level you are referring to a
> > > > > PSI-like interface, e.g. allowing users to specify for their jobs
> > > > > that X amount of stalls in a fixed time window is tolerable.
> > > >
> > > > Right, essentially the same parameters that psi poll() would take.
> > >
> > > I thought a bit more about the semantics of using psi for proactive
> > > reclaim.
> > >
> > > Suppose I have a top-level cgroup A on which I want to enable
> > > proactive reclaim. Which memory psi events should the proactive
> > > reclaim consider?
> > >
> > > The simplest would be the memory.psi at 'A'. However, memory.psi is
> > > hierarchical, and I would not really want the pressure due to limits
> > > in children of 'A' to impact the proactive reclaim.
> >
> > I don't think pressure from limits down the tree can be separated
> > out, generally. All events are accounted recursively as well. Of
> > course, we remember the reclaim level for evicted entries - but if
> > there is reclaim triggered at A and A/B concurrently, the
> > distribution of who ends up reclaiming the physical pages in A/B is
> > pretty arbitrary/racy.
> >
> > If A/B decides to do its own proactive reclaim with the sublimit, and
> > ends up consuming the pressure budget assigned to proactive reclaim
> > in A, there isn't much that can be done.
> >
> > It's also possible that proactive reclaim in A keeps A/B from hitting
> > its limit in the first place.
> >
> > I have to say, the configuration doesn't really strike me as
> > sensible, though. Limits make sense for doing fixed partitioning: A
> > gets 4G, A/B gets 2G out of that. But if you do proactive reclaim on
> > A, you're essentially saying that A as a whole is auto-sizing
> > dynamically based on its memory access pattern. I'm not sure what it
> > means to then start doing fixed partitions in the sublevel.
>
> Think of the scenario where there is an infrastructure owner and a
> large number of job owners. The aim of the infra owner is to reduce
> cost by stuffing as many jobs as possible on the same machine, while
> the job owners want consistent performance.
>
> The job owners usually have meta-jobs, i.e. a set of small jobs that
> run on the same machines, and they manage these sub-jobs themselves.
>
> The infra owner wants to do proactive reclaim to trim the current
> jobs without impacting their performance and, more importantly, to
> have enough memory to land new jobs. (We have learned the hard way
> that depending on global reclaim for memory overcommit is really bad
> for isolation.)
>
> In the above scenario, the configuration you said might not be
> sensible really does happen. This is exactly what we have in prod.
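To make "the same parameters that psi poll() would take" concrete: a
PSI trigger is a stall threshold plus a time window, written as
"some <stall-usecs> <window-usecs>" to a pressure file, whose fd is
then polled for POLLPRI. A minimal sketch against a hypothetical
top-level cgroup A - the path and the numbers here are placeholder
examples, not something defined by this patch:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Wake up if tasks in A stall on memory >150ms per 1s window. */
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;

	fds.fd = open("/sys/fs/cgroup/A/memory.pressure",
		      O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		perror("write trigger");
		return 1;
	}
	fds.events = POLLPRI;

	for (;;) {
		if (poll(&fds, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
		if (fds.revents & POLLERR) {
			fprintf(stderr, "pressure file went away\n");
			return 1;
		}
		if (fds.revents & POLLPRI)
			printf("memory pressure threshold crossed\n");
	}
}

The kernel clamps the window (on the order of 500ms to 10s), so "X
amount of stalls in a fixed time window" maps directly onto these two
numbers.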
I apologize if my statement was worded too broadly. I fully understand
your motivation and the sub-job structure. It's more a question of at
which level to run proactive reclaim when there are sub-domains.

You said you're already using a feedback loop to adjust proactive
reclaim based on refault rates. How do you deal today with one
subgroup potentially having higher refaults due to its limit?

It appears that as soon as the subgroups can age independently, you
also need to treat them independently for proactive reclaim, because
one group hitting its pressure limit says nothing about its sibling.
If you apply equal reclaim to both based on the independently
pressured sub-job, you'll under-reclaim the siblings. If you apply
equal reclaim to both based on the unpressured siblings alone, you'll
over-pressurize the one with its own limit.

This seems independent of the exact metric you're using, and more
about the level at which you apply pressure, and about whether reclaim
sub-domains created through a hard limit can be treated as part of a
larger shared pool or not.
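For the sake of discussion, here is roughly what such a per-cgroup
worker could look like in userspace, combining a PSI trigger with the
memory.reclaim file proposed in this patch. All paths, thresholds and
the step size are illustrative assumptions - and note that it has to
pick one level (A or A/B) to sample pressure from, which is exactly
the problem above:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Sketch of a userspace proactive-reclaim worker for one cgroup:
 * keep trimming via the proposed memory.reclaim interface while the
 * group's PSI trigger stays quiet, back off when it fires.
 */
int main(void)
{
	const char trig[] = "some 150000 1000000"; /* 150ms per 1s */
	struct pollfd fds = { .events = POLLPRI };
	int reclaim_fd;

	fds.fd = open("/sys/fs/cgroup/A/memory.pressure",
		      O_RDWR | O_NONBLOCK);
	reclaim_fd = open("/sys/fs/cgroup/A/memory.reclaim", O_WRONLY);
	if (fds.fd < 0 || reclaim_fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		perror("write trigger");
		return 1;
	}

	for (;;) {
		/* Wait one window for a pressure event. */
		int n = poll(&fds, 1, 1000);

		if (n < 0) {
			perror("poll");
			return 1;
		}
		if (n > 0) {
			/* Tolerable stall exceeded: back off. */
			sleep(10);
			continue;
		}
		/* No pressure in the last window: trim a bit more. */
		if (write(reclaim_fd, "16M", 3) < 0)
			perror("memory.reclaim");
	}
}

Whether this loop runs against A, against each A/B sibling, or inside
the kernel only changes who owns the loop; the question of which
group's pressure should gate the next reclaim step stays the same.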