From: Waiman Long
To: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet, Shuah Khan
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org, Juri Lelli, Valentin Schneider, Frederic Weisbecker, Mrunal Patel, Ryan Phillips, Brent Rowsell, Peter Hunt, Phil Auld, Waiman Long
Subject: [PATCH v2 5/6] cgroup/cpuset: Documentation update for partition
Date: Wed, 31 May 2023 12:34:04 -0400
Message-Id: <20230531163405.2200292-6-longman@redhat.com>

This patch updates the cgroup-v2.rst file to include information about
the new "cpuset.cpus.reserve" control file as well as the new remote
partition.
Signed-off-by: Waiman Long
---
 Documentation/admin-guide/cgroup-v2.rst | 92 +++++++++++++++++++++----
 1 file changed, 79 insertions(+), 13 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index f67c0829350b..3e9351c2cd27 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2215,6 +2215,38 @@ Cpuset Interface Files
 	Its value will be affected by memory nodes hotplug events.
 
+  cpuset.cpus.reserve
+	A read-write multiple values file which exists only on the
+	root cgroup.
+
+	It lists all the CPUs that are reserved for adjacent and
+	remote partitions created in the system.  See the next
+	section for more information on what adjacent and remote
+	partitions are.
+
+	Creating an adjacent partition does not require touching this
+	control file, as CPU reservation is done automatically.
+	To create a remote partition, the CPUs needed by the remote
+	partition have to be written to this file first.
+
+	Because "cpuset.cpus.reserve" holds reserved CPUs that can be
+	used by multiple partitions, and automatic reservation may
+	race with manual reservation, the extension prefixes "+" and
+	"-" are allowed for this file to reduce such races.
+
+	A "+" prefix indicates a list of additional CPUs to be added
+	without disturbing the CPUs that are already there.  For
+	example, if its current value is "3-4", echoing "+5" to it
+	will change it to "3-5".
+
+	Once a remote partition is destroyed, its CPUs have to be
+	removed from this file or no other process can use them.
+	A "-" prefix can be used to remove a list of CPUs from it.
+	However, removing CPUs that are currently used in existing
+	partitions may cause those partitions to become invalid.
+	A single "-" character without any number can be used to
+	remove all the free CPUs not yet allocated to any partition,
+	avoiding accidental partition invalidation.
+
   cpuset.cpus.partition
 	A read-write single value file which exists on non-root
 	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
@@ -2228,25 +2260,49 @@ Cpuset Interface Files
 	  "isolated"	Partition root without load balancing
 	  ==========	=====================================
 
-	The root cgroup is always a partition root and its state
-	cannot be changed.  All other non-root cgroups start out as
-	"member".
+	A cpuset partition is a collection of cgroups with a partition
+	root at the top of the hierarchy and its descendants, except
+	those that are themselves separate partition roots and their
+	descendants.  A partition has exclusive access to the set of
+	CPUs allocated to it.  Other cgroups outside of that partition
+	cannot use any CPUs in that set.
+
+	There are two types of partitions - adjacent and remote.  The
+	parent of an adjacent partition must be a valid partition
+	root.  Partition roots of adjacent partitions are all
+	clustered around the root cgroup.  An adjacent partition is
+	created by writing the desired partition type into
+	"cpuset.cpus.partition".
+
+	A remote partition does not require a partition root parent,
+	so a remote partition can be formed far from the root cgroup.
+	However, its creation is a two-step process.  The CPUs needed
+	by a remote partition ("cpuset.cpus" of the partition root)
+	have to be written into "cpuset.cpus.reserve" of the root
+	cgroup first.  After that, "isolated" can be written into
+	"cpuset.cpus.partition" of the partition root to form a
+	remote isolated partition, which is the only supported remote
+	partition type for now.
+
+	All remote partitions are terminal, as an adjacent partition
+	cannot be created underneath one.  With the way a remote
+	partition is formed, it is not possible to create another
+	valid remote partition underneath it either.
+
+	The root cgroup is always a partition root and its state cannot
+	be changed.  All other non-root cgroups start out as "member".
 	When set to "root", the current cgroup is the root of a new
-	partition or scheduling domain that comprises itself and all
-	its descendants except those that are separate partition roots
-	themselves and their descendants.
+	partition or scheduling domain.
 
-	When set to "isolated", the CPUs in that partition root will
+	When set to "isolated", the CPUs in that partition will
 	be in an isolated state without any load balancing from the
 	scheduler.  Tasks placed in such a partition with multiple
 	CPUs should be carefully distributed and bound to each of the
 	individual CPUs for optimal performance.
 
-	The value shown in "cpuset.cpus.effective" of a partition root
-	is the CPUs that the partition root can dedicate to a potential
-	new child partition root.  The new child subtracts available
-	CPUs from its parent "cpuset.cpus.effective".
+	The value shown in "cpuset.cpus.effective" of a partition root
+	is the CPUs that are dedicated to that partition and not
+	available to cgroups outside of that partition.
 
 	A partition root ("root" or "isolated") can be in one of the
 	two possible states - valid or invalid.  An invalid partition
@@ -2270,8 +2326,8 @@ Cpuset Interface Files
 	In the case of an invalid partition root, a descriptive string
 	on why the partition is invalid is included within parentheses.
 
-	For a partition root to become valid, the following conditions
-	must be met.
+	For an adjacent partition root to be valid, the following
+	conditions must be met.
 
 	1) The "cpuset.cpus" is exclusive with its siblings, i.e. they
 	   are not shared by any of its siblings (exclusivity rule).
@@ -2281,6 +2337,16 @@ Cpuset Interface Files
 	4) The "cpuset.cpus.effective" cannot be empty unless there is
 	   no task associated with this partition.
 
+	For a remote partition root to be valid, the following
+	conditions must be met.
+
+	1) The same exclusivity rule as adjacent partition root.
+	2) The "cpuset.cpus" is not empty and all the CPUs must be
+	   present in "cpuset.cpus.reserve" of the root cgroup, and
+	   none of them are allocated to another partition.
+	3) The "cpuset.cpus" value must be present in all its ancestors
+	   to ensure proper hierarchical cpu distribution.
+
 	External events like hotplug or changes to "cpuset.cpus" can
 	cause a valid partition root to become invalid and vice versa.
 	Note that a task cannot be moved to a cgroup with empty
-- 
2.31.1
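For readers trying out the series, the two-step remote partition creation
described in the patch might be exercised as sketched below. This is an
illustrative sketch only, not part of the patch: it assumes cgroup v2 is
mounted at /sys/fs/cgroup, a kernel carrying this patch series (the
"cpuset.cpus.reserve" file is the interface proposed here), and the cgroup
path "a/b" plus CPUs 3-5 are arbitrary examples.

```shell
# Sketch: create a remote isolated partition on CPUs 3-5.
# Assumes root privileges and a kernel with this patch series applied.

# Step 1: reserve the CPUs in the root cgroup, using the "+" prefix so
# concurrent automatic reservations for adjacent partitions are kept.
echo "+3-5" > /sys/fs/cgroup/cpuset.cpus.reserve

# Create a cgroup far from the root and enable the cpuset controller
# along the path.
mkdir -p /sys/fs/cgroup/a/b
echo "+cpuset" > /sys/fs/cgroup/cgroup.subtree_control
echo "+cpuset" > /sys/fs/cgroup/a/cgroup.subtree_control

# Give the would-be partition root the reserved CPUs.
echo "3-5" > /sys/fs/cgroup/a/b/cpuset.cpus

# Step 2: turn it into a remote isolated partition (the only remote
# partition type supported by this series).
echo "isolated" > /sys/fs/cgroup/a/b/cpuset.cpus.partition

# Teardown: revert to "member", then release the CPUs with the "-"
# prefix so other processes can use them again.
echo "member" > /sys/fs/cgroup/a/b/cpuset.cpus.partition
echo "-3-5" > /sys/fs/cgroup/cpuset.cpus.reserve
```

Since the writes target kernel control files, nothing here is runnable
outside a suitably patched system; the sequence only mirrors the ordering
constraints the documentation states (reserve first, then set the
partition type, and release the reservation after teardown).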