From: Huaixin Chang <changhuaixin@linux.alibaba.com>
To: linux-kernel@vger.kernel.org
Cc: changhuaixin@linux.alibaba.com, bsegall@google.com, dietmar.eggemann@arm.com,
    juri.lelli@redhat.com, khlebnikov@yandex-team.ru, mgorman@suse.de,
    mingo@redhat.com, pauld@redhead.com, peterz@infradead.org, pjt@google.com,
    rostedt@goodmis.org, shanpeic@linux.alibaba.com, vincent.guittot@linaro.org,
    xiyou.wangcong@gmail.com
Subject: [PATCH 4/4] sched/fair: Add document for burstable CFS bandwidth control
Date: Tue, 2 Feb 2021 19:40:38 +0800
Message-Id: <20210202114038.64870-5-changhuaixin@linux.alibaba.com>
In-Reply-To: <20210202114038.64870-1-changhuaixin@linux.alibaba.com>
References:
  <20210202114038.64870-1-changhuaixin@linux.alibaba.com>

Basic description of usage and effect for CFS Bandwidth Control Burst.

Signed-off-by: Huaixin Chang <changhuaixin@linux.alibaba.com>
Signed-off-by: Shanpei Chen <shanpeic@linux.alibaba.com>
---
 Documentation/scheduler/sched-bwc.rst | 70 +++++++++++++++++++++++++++++++++--
 1 file changed, 66 insertions(+), 4 deletions(-)

diff --git a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst
index 9801d6b284b1..0933c66cc68b 100644
--- a/Documentation/scheduler/sched-bwc.rst
+++ b/Documentation/scheduler/sched-bwc.rst
@@ -21,18 +21,46 @@ cfs_quota units at each period boundary. As threads consume this bandwidth it
 is transferred to cpu-local "silos" on a demand basis. The amount transferred
 within each of these updates is tunable and described as the "slice".
 
+By default, CPU bandwidth consumption is strictly limited to quota within each
+given period. For a sequence of CPU usage u_i served under CFS bandwidth
+control, where N(j,k) is the number of periods from u_j to u_k for any j <= k:
+
+	u_j+...+u_k <= quota * N(j,k)
+
+For a bursty sequence in which the interval u_j...u_k is the peak, CPU
+requests might have to wait for more periods to replenish enough quota;
+otherwise, a larger quota is required.
+
+With a "burst" buffer, CPU requests might be served as long as:
+
+	u_j+...+u_k <= B_j + quota * N(j,k)
+
+where N(j,k) is the number of periods from u_j to u_k for any j <= k, and B_j
+is the quota accumulated from previous periods in the burst buffer when
+serving u_j. The burst buffer helps in that whole bursty CPU requests can be
+served without throttling, using a moderate quota setting plus the quota
+accumulated in the burst buffer, if:
+
+	u_0+...+u_n <= B_0 + quota * N(0,n)
+
+where B_0 is the initial state of the burst buffer. The maximum accumulated
+quota in the burst buffer is capped by burst.
+With a proper burst setting, the available bandwidth is still determined by
+quota and period in the long run.
+
 Management
 ----------
-Quota and period are managed within the cpu subsystem via cgroupfs.
+Quota, period and burst are managed within the cpu subsystem via cgroupfs.
 
-cpu.cfs_quota_us: the total available run-time within a period (in microseconds)
+cpu.cfs_quota_us: run-time replenished within a period (in microseconds)
 cpu.cfs_period_us: the length of a period (in microseconds)
+cpu.cfs_burst_us: the maximum accumulated run-time (in microseconds)
 cpu.stat: exports throttling statistics [explained further below]
 
 The default values are::
 
 	cpu.cfs_period_us=100ms
-	cpu.cfs_quota=-1
+	cpu.cfs_quota_us=-1
+	cpu.cfs_burst_us=0
 
 A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
 bandwidth restriction in place, such a group is described as an unconstrained
@@ -48,6 +76,11 @@ more detail below.
 Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
 and return the group to an unconstrained state once more.
 
+A value of 0 for cpu.cfs_burst_us indicates that the group cannot accumulate
+any unused bandwidth. This leaves the traditional bandwidth control behavior
+for CFS unchanged. Writing any valid positive value into cpu.cfs_burst_us
+will enact the cap on unused bandwidth accumulation.
+
 Any updates to a group's bandwidth specification will result in it becoming
 unthrottled if it is in a constrained state.
 
@@ -65,9 +98,21 @@ This is tunable via procfs::
 Larger slice values will reduce transfer overheads, while smaller values allow
 for more fine-grained consumption.
 
+There is also a global switch to turn off burst for all groups::
+
+	/proc/sys/kernel/sched_cfs_bw_burst_enabled (default=1)
+
+By default it is enabled. Writing a 0 value means no accumulated CPU time can
+be used for any group, even if cpu.cfs_burst_us is configured.
+
+Sometimes users might want a group to burst without accumulation.
+This is tunable via::
+
+	/proc/sys/kernel/sched_cfs_bw_burst_onset_percent (default=0)
+
+Up to 100% of the runtime in cpu.cfs_burst_us may be granted when bandwidth
+is set.
+
 Statistics
 ----------
-A group's bandwidth statistics are exported via 3 fields in cpu.stat.
+A group's bandwidth statistics are exported via 6 fields in cpu.stat.
 
 cpu.stat:
@@ -75,6 +120,11 @@ cpu.stat:
 - nr_throttled: Number of times the group has been throttled/limited.
 - throttled_time: The total time duration (in nanoseconds) for which entities
   of the group have been throttled.
+- current_bw: Current runtime in the global pool.
+- nr_burst: Number of periods in which burst occurred.
+- burst_time: Cumulative wall-time (in nanoseconds) that any CPUs have used
+  above quota in respective periods.
 
 This interface is read-only.
@@ -172,3 +222,15 @@ Examples
    By using a small period here we are ensuring a consistent latency
    response at the expense of burst capacity.
+
+4. Limit a group to 20% of 1 CPU, and additionally allow it to accumulate up
+   to 60% of 1 CPU, in case accumulation has occurred.
+
+   With a 50ms period, a 10ms quota will be equivalent to 20% of 1 CPU, and a
+   30ms burst will be equivalent to 60% of 1 CPU.
+
+	# echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
+	# echo 50000 > cpu.cfs_period_us /* period = 50ms */
+	# echo 30000 > cpu.cfs_burst_us /* burst = 30ms */
+
+   A larger burst setting allows greater burst capacity.
-- 
2.14.4.44.g2045bb6
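[Editor's note, not part of the patch] The accounting the patch documents — quota replenished at each period boundary, unused runtime carried over up to the burst cap, requests beyond that throttled — can be sketched as a simple per-period token-bucket simulation. This is an illustrative model, not kernel code: the function name `simulate` is an assumption, and it ignores slice distribution across CPUs and the sched_cfs_bw_burst_onset_percent sysctl.

```python
def simulate(usage, quota, burst, initial_buffer=0):
    """Model CFS bandwidth control with a burst buffer, period by period.

    usage:          CPU time requested in each period (u_0 ... u_n)
    quota:          runtime replenished at each period boundary
    burst:          cap on accumulated unused runtime (cpu.cfs_burst_us)
    initial_buffer: B_0, the initial state of the burst buffer

    Returns (served, nr_throttled): time actually served per period, and
    how many periods saw the group throttled.
    """
    buffer = min(initial_buffer, burst)   # B_0 is capped by burst
    served = []
    nr_throttled = 0
    for u in usage:
        available = quota + buffer        # this period's quota plus carry-over
        s = min(u, available)             # demand beyond that is throttled
        if s < u:
            nr_throttled += 1
        buffer = min(available - s, burst)  # unused runtime accumulates, capped
        served.append(s)
    return served, nr_throttled
```

Under this model, the patch's fourth example (quota=10ms, burst=30ms per 50ms period) lets a group that idles for three periods then burst to 40ms in a single period without throttling, while with burst=0 the same demand is clipped to quota.

```python
print(simulate([0, 0, 0, 40, 10], quota=10, burst=30))
print(simulate([0, 0, 0, 40, 10], quota=10, burst=0))
```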