Date: Wed, 8 Sep 2021 14:37:23 +0200
From: Greg KH
To: Yi Tao
Cc: tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org,
 mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, shanpeic@linux.alibaba.com
Subject: Re: [RFC PATCH 0/2] support cgroup pool in v1
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 08,
2021 at 08:15:11PM +0800, Yi Tao wrote:
> In a scenario where containers are started with high concurrency, in
> order to control each container's use of system resources, it is
> necessary to create a corresponding cgroup for every container and
> attach its processes. The kernel uses the global cgroup_mutex lock to
> protect the consistency of the data, which results in higher
> long-tail latency for cgroup-related operations during concurrent
> startup. For example, the long-tail latency of creating a cgroup
> under each subsystem reaches 900ms when starting 400 containers,
> which becomes a performance bottleneck. The delay consists of two
> parts: the time spent in the critical section protected by
> cgroup_mutex, and the scheduling time spent sleeping on the lock,
> which grows as CPU overhead increases.

Perhaps you shouldn't be creating that many containers all at once?

What normal workload requires this?

thanks,

greg k-h
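The contention mechanism the patch description refers to can be sketched with a toy model (illustrative only, not kernel code): when every cgroup creation must take the same global mutex, the operations are fully serialized, so the last of N concurrent requests waits behind all the others. The ~2.25 ms per-operation cost below is back-derived from the quoted numbers (900 ms tail for 400 containers), not a measured figure.

```python
# Toy model of N concurrent cgroup creations serialized by one global
# lock (e.g. cgroup_mutex). Assumptions: each creation holds the lock
# for cs_ms milliseconds and all N requests arrive at the same time.

def long_tail_latency_ms(n_containers: int, cs_ms: float) -> float:
    """Completion time of the slowest request: with a single global
    mutex the k-th request waits (k-1)*cs_ms before its own cs_ms
    critical section, so the last one finishes at n*cs_ms."""
    return n_containers * cs_ms

# With 400 containers and ~2.25 ms per critical section, the last
# creation completes around 900 ms -- the figure quoted above.
print(long_tail_latency_ms(400, 2.25))  # -> 900.0
```

The model deliberately ignores the second component Yi Tao mentions (scheduler wakeup latency for sleepers on the mutex), which only makes the real tail worse; the point is that the tail grows linearly with the number of concurrent creations even in the best case.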