Date: Fri, 10 Sep 2021 06:49:27 -1000
From: Tejun Heo
To: "taoyi.ty"
Cc: Greg KH, lizefan.x@bytedance.com, hannes@cmpxchg.org, mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, shanpeic@linux.alibaba.com
Subject: Re: [RFC PATCH 0/2] support cgroup pool in v1

Hello,

On Fri, Sep 10, 2021 at 10:11:53AM +0800, taoyi.ty wrote:
> The scenario is function computing on the public cloud. Each
> function-computing instance is allocated about 0.1 CPU cores and
> 100M of memory. On a high-end server with, for example, 104 cores
> and 384G, it is normal to create hundreds of containers at the same
> time when a burst of requests comes in.

This type of use case isn't something cgroup is good at, at least not
currently. The problem is that trying to scale management operations
like creating and destroying cgroups has implications for how each
controller is implemented: we want the hot paths which are used while
cgroups are running actively to be as efficient and scalable as
possible, even if that requires a lot of extra preparation and lazy
cleanup operations. We don't really want to push for cgroup creation /
destruction efficiency at the cost of hot path overhead.

This has implications for use cases like the one you describe. Even if
the kernel pre-prepared cgroups to lower the latency of cgroup
creation, the system would still be doing a *lot* of extra managerial
work, constantly creating and destroying cgroups for not much actual
work.

Usually, the right solution for this sort of situation is pooling
cgroups from userspace, which usually has much better insight into
which cgroups can be recycled, and which can also adjust the cgroup
hierarchy to better fit the use case (e.g. some rapid-cycling cgroups
can benefit from higher-level resource configurations). A rough sketch
of such a userspace pool follows at the end of this message.

So, it'd be great to make the managerial operations more efficient on
the cgroup core side, but there are inherent architectural reasons why
rapid-cycling use cases aren't, and won't be, prioritized.

Thanks.

--
tejun
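As a rough illustration of the userspace pooling idea suggested above,
here is a minimal sketch in C. It is not from the posted patches; the
pool root, pool size, and slot naming are assumptions made for the
example, and it targets a cgroup v1 CPU hierarchy mounted at
/sys/fs/cgroup/cpu.

/*
 * Minimal sketch of a userspace cgroup pool (illustrative only).
 * Assumes a cgroup v1 hierarchy mounted at /sys/fs/cgroup/cpu and
 * sufficient privilege to mkdir there; POOL_ROOT and POOL_SIZE are
 * made up for the example.  Single-threaded for brevity.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define POOL_ROOT "/sys/fs/cgroup/cpu/pool"
#define POOL_SIZE 64

static char slots[POOL_SIZE][128];
static int free_slot[POOL_SIZE];	/* 1 = slot available */

/* Pre-create POOL_SIZE cgroups up front, off the request path. */
static int pool_init(void)
{
	int i;

	if (mkdir(POOL_ROOT, 0755) && errno != EEXIST)
		return -1;
	for (i = 0; i < POOL_SIZE; i++) {
		snprintf(slots[i], sizeof(slots[i]),
			 POOL_ROOT "/slot-%d", i);
		if (mkdir(slots[i], 0755) && errno != EEXIST)
			return -1;
		free_slot[i] = 1;
	}
	return 0;
}

/* Hand out a pre-created cgroup instead of mkdir-ing a fresh one. */
static const char *pool_acquire(void)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++) {
		if (free_slot[i]) {
			free_slot[i] = 0;
			return slots[i];
		}
	}
	return NULL;	/* pool exhausted; fall back to plain mkdir */
}

/* Recycle: once its tasks have exited, an empty cgroup is reusable. */
static void pool_release(const char *path)
{
	int i;

	for (i = 0; i < POOL_SIZE; i++) {
		if (!strcmp(path, slots[i])) {
			free_slot[i] = 1;
			return;
		}
	}
}

A container runtime would then write the workload's PID into
<slot>/tasks and set per-slot limits (for example, 0.1 core under
cgroup v1 maps to cpu.cfs_quota_us = 10000 with cpu.cfs_period_us =
100000; 100M would be memory.limit_in_bytes = 104857600 in a
memory-controller slot). The point is that the mkdir/rmdir churn moves
off the request-handling path entirely.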