From: Huaixin Chang <changhuaixin@linux.alibaba.com>
To: peterz@infradead.org
Cc: anderson@cs.unc.edu, baruah@wustl.edu, bsegall@google.com, changhuaixin@linux.alibaba.com, dietmar.eggemann@arm.com, dtcccc@linux.alibaba.com, juri.lelli@redhat.com, khlebnikov@yandex-team.ru, linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it, mgorman@suse.de, mingo@redhat.com, odin@uged.al, odin@ugedal.com, pauld@redhead.com, pjt@google.com, rostedt@goodmis.org, shanpeic@linux.alibaba.com, tj@kernel.org, tommaso.cucinotta@santannapisa.it, vincent.guittot@linaro.org, xiyou.wangcong@gmail.com
Subject: [PATCH 2/2] sched/fair: Add document for burstable CFS bandwidth
Date: Fri, 30 Jul 2021 15:09:56 +0800
Basic description of usage and effect for CFS Bandwidth Control Burst.

Co-developed-by: Shanpei Chen <shanpeic@linux.alibaba.com>
Signed-off-by: Shanpei Chen <shanpeic@linux.alibaba.com>
Co-developed-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Signed-off-by: Huaixin Chang <changhuaixin@linux.alibaba.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  8 ++++
 Documentation/scheduler/sched-bwc.rst   | 85 +++++++++++++++++++++++++++++----
 2 files changed, 83 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 5c7377b5bd3e..c79477089c53 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1016,6 +1016,8 @@ All time durations are in microseconds.
 	- nr_periods
 	- nr_throttled
 	- throttled_usec
+	- nr_bursts
+	- burst_usec
 
   cpu.weight
 	A read-write single value file which exists on non-root
@@ -1047,6 +1049,12 @@ All time durations are in microseconds.
 	$PERIOD duration.  "max" for $MAX indicates no limit.  If only
 	one number is written, $MAX is updated.
 
+  cpu.max.burst
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".
+
+	The burst in the range [0, $QUOTA].
+
   cpu.pressure
 	A read-write nested-keyed file.
 
diff --git a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst
index 1fc73555f5c4..0b2a3b2e3369 100644
--- a/Documentation/scheduler/sched-bwc.rst
+++ b/Documentation/scheduler/sched-bwc.rst
@@ -22,39 +22,89 @@ cfs_quota units at each period boundary. As threads consume this bandwidth it
 is transferred to cpu-local "silos" on a demand basis.
 The amount transferred within each of these updates is tunable and described
 as the "slice".
 
+Burst feature
+-------------
+This feature borrows time now against our future underrun, at the cost of
+increased interference against the other system users. All nicely bounded.
+
+Traditional (UP-EDF) bandwidth control is something like:
+
+  (U = \Sum u_i) <= 1
+
+This guarantees both that every deadline is met and that the system is
+stable. After all, if U were > 1, then for every second of walltime,
+we'd have to run more than a second of program time, and obviously miss
+our deadline; and the next deadline would be further out still, so there
+is never time to catch up: unbounded fail.
+
+The burst feature observes that a workload doesn't always execute the full
+quota; this enables one to describe u_i as a statistical distribution.
+
+For example, have u_i = {x,e}_i, where x is the p(95) and x+e the p(100)
+(the traditional WCET). This effectively allows u to be smaller,
+increasing the efficiency (we can pack more tasks in the system), but at
+the cost of missing deadlines when all the odds line up. However, it
+does maintain stability, since every overrun must be paired with an
+underrun as long as our x is above the average.
+
+That is, suppose we have 2 tasks, both specifying a p(95) value; then we
+have a p(95)*p(95) = 90.25% chance both tasks are within their quota and
+everything is good. At the same time we have a p(5)*p(5) = 0.25% chance
+both tasks will exceed their quota at the same time (guaranteed deadline
+fail). Somewhere in between there's a threshold where one exceeds and
+the other doesn't underrun enough to compensate; this depends on the
+specific CDFs.
+
+At the same time, we can say that the worst case deadline miss will be
+\Sum e_i; that is, there is a bounded tardiness (under the assumption
+that x+e is indeed WCET).
+
+The interference when using burst is valued by the possibility of
+missing the deadline and the average WCET.
+Test results showed that when there are many cgroups, or when the CPU is
+under-utilized, the interference is limited. More details are shown in:
+https://lore.kernel.org/lkml/5371BD36-55AE-4F71-B9D7-B86DC32E3D2B@linux.alibaba.com/
+
 Management
 ----------
 
-Quota and period are managed within the cpu subsystem via cgroupfs.
+Quota, period and burst are managed within the cpu subsystem via cgroupfs.
 
 .. note::
    The cgroupfs files described in this section are only applicable
    to cgroup v1. For cgroup v2, see
    :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2-cpu>`.
 
-- cpu.cfs_quota_us: the total available run-time within a period (in
-  microseconds)
+- cpu.cfs_quota_us: run-time replenished within a period (in microseconds)
 - cpu.cfs_period_us: the length of a period (in microseconds)
 - cpu.stat: exports throttling statistics [explained further below]
+- cpu.cfs_burst_us: the maximum accumulated run-time (in microseconds)
 
 The default values are::
 
 	cpu.cfs_period_us=100ms
-	cpu.cfs_quota=-1
+	cpu.cfs_quota_us=-1
+	cpu.cfs_burst_us=0
 
 A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
 bandwidth restriction in place, such a group is described as an unconstrained
 bandwidth group. This represents the traditional work-conserving behavior for
 CFS.
 
-Writing any (valid) positive value(s) will enact the specified bandwidth limit.
-The minimum quota allowed for the quota or period is 1ms. There is also an
-upper bound on the period length of 1s. Additional restrictions exist when
-bandwidth limits are used in a hierarchical fashion, these are explained in
-more detail below.
+Writing any (valid) positive value(s) no smaller than cpu.cfs_burst_us will
+enact the specified bandwidth limit. The minimum value allowed for the quota
+or period is 1ms. There is also an upper bound on the period length of 1s.
+Additional restrictions exist when bandwidth limits are used in a hierarchical
+fashion, these are explained in more detail below.
 Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
 and return the group to an unconstrained state once more.
 
+A value of 0 for cpu.cfs_burst_us indicates that the group can not accumulate
+any unused bandwidth. It leaves the traditional bandwidth control behavior of
+CFS unchanged. Writing any (valid) positive value(s) no larger than
+cpu.cfs_quota_us into cpu.cfs_burst_us will enact the cap on unused bandwidth
+accumulation.
+
 Any updates to a group's bandwidth specification will result in it becoming
 unthrottled if it is in a constrained state.
 
@@ -74,7 +124,7 @@ for more fine-grained consumption.
 Statistics
 ----------
 
-A group's bandwidth statistics are exported via 3 fields in cpu.stat.
+A group's bandwidth statistics are exported via 5 fields in cpu.stat.
 
 cpu.stat:
 
@@ -82,6 +132,9 @@ cpu.stat:
 - nr_throttled: Number of times the group has been throttled/limited.
 - throttled_time: The total time duration (in nanoseconds) for which entities
   of the group have been throttled.
+- nr_bursts: Number of periods in which a burst occurs.
+- burst_usec: Cumulative wall-time that any CPUs have used above quota in the
+  respective periods.
 
 This interface is read-only.
 
@@ -179,3 +232,15 @@ Examples
    By using a small period here we are ensuring a consistent latency
    response at the expense of burst capacity.
+
+4. Limit a group to 40% of 1 CPU, and additionally allow it to accumulate
+   up to 20% of 1 CPU, in case accumulation has occurred.
+
+   With a 50ms period, a 20ms quota is equivalent to 40% of 1 CPU,
+   and a 10ms burst is equivalent to 20% of 1 CPU.
+
+	# echo 20000 > cpu.cfs_quota_us /* quota = 20ms */
+	# echo 50000 > cpu.cfs_period_us /* period = 50ms */
+	# echo 10000 > cpu.cfs_burst_us /* burst = 10ms */
+
+   A larger buffer setting (no larger than quota) allows greater burst
+   capacity.
-- 
2.14.4.44.g2045bb6
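[Editor's note, not part of the patch: the arithmetic the documentation relies on, the quota/period/burst percentages from example 4 and the two-task p(95) probabilities from the burst rationale, can be sanity-checked with a short sketch. The helper name `cpu_share` is purely illustrative; the values come from the text above.]

```python
def cpu_share(quota_us: int, period_us: int) -> float:
    """Fraction of one CPU granted per period (quota / period)."""
    return quota_us / period_us

# Example 4: quota = 20ms, period = 50ms, burst = 10ms.
quota_us, period_us, burst_us = 20_000, 50_000, 10_000

steady = cpu_share(quota_us, period_us)   # 0.4 -> 40% of 1 CPU
extra = cpu_share(burst_us, period_us)    # 0.2 -> an extra 20% of 1 CPU
assert 0 <= burst_us <= quota_us          # burst must lie in [0, quota]

# Burst rationale: two tasks, each within quota with probability p = p(95).
p = 0.95
both_ok = p * p             # ~0.9025: both tasks stay within quota
both_over = (1 - p) ** 2    # ~0.0025: both exceed at once (deadline fail)

print(f"steady={steady:.0%} extra={extra:.0%} "
      f"both_ok={both_ok:.2%} both_over={both_over:.2%}")
```

This mirrors the document's claim that a group so configured runs at 40% of a CPU on average but may consume up to 60% in a period after an underrun, and that deadline misses only occur in the (rare) joint-overrun case.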