From: Yuyang Du <yuyang.du@intel.com>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org
Cc: bsegall@google.com, pjt@google.com, morten.rasmussen@arm.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	lizefan@huawei.com, umgwanakikbuti@gmail.com,
	Yuyang Du <yuyang.du@intel.com>
Subject: [PATCH v3 3/6] sched/fair: Add introduction to the sched load avg metrics
Date: Tue, 5 Apr 2016 12:12:28 +0800
Message-Id: <1459829551-21625-4-git-send-email-yuyang.du@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1459829551-21625-1-git-send-email-yuyang.du@intel.com>
References: <1459829551-21625-1-git-send-email-yuyang.du@intel.com>

These sched metrics have become complex enough, so introduce them at
their definition.

Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
 include/linux/sched.h | 60 +++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 49 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index bc652fe..b0a6cf0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1209,18 +1209,56 @@ struct load_weight {
 };
 
 /*
- * The load_avg/util_avg accumulates an infinite geometric series.
- * 1) load_avg factors frequency scaling into the amount of time that a
- * sched_entity is runnable on a rq into its weight. For cfs_rq, it is the
- * aggregated such weights of all runnable and blocked sched_entities.
- * 2) util_avg factors frequency and cpu capacity scaling into the amount of time
- * that a sched_entity is running on a CPU, in the range [0..SCHED_CAPACITY_SCALE].
- * For cfs_rq, it is the aggregated such times of all runnable and
+ * The load_avg/util_avg accumulates an infinite geometric series
+ * (see __update_load_avg() in kernel/sched/fair.c).
+ *
+ * [load_avg definition]
+ *
+ *   load_avg = runnable% * scale_load_down(load)
+ *
+ * where runnable% is the time ratio that a sched_entity is runnable.
+ * For cfs_rq, it is the aggregated load_avg of all runnable and
  * blocked sched_entities.
- * The 64 bit load_sum can:
- * 1) for cfs_rq, afford 4353082796 (=2^64/47742/88761) entities with
- * the highest weight (=88761) always runnable, we should not overflow
- * 2) for entity, support any load.weight always runnable
+ *
+ * load_avg may also take frequency scaling into account:
+ *
+ *   load_avg = runnable% * scale_load_down(load) * freq%
+ *
+ * where freq% is the CPU frequency normalized to the highest frequency.
+ *
+ * [util_avg definition]
+ *
+ *   util_avg = running% * SCHED_CAPACITY_SCALE
+ *
+ * where running% is the time ratio that a sched_entity is running on
+ * a CPU. For cfs_rq, it is the aggregated util_avg of all runnable
+ * and blocked sched_entities.
+ *
+ * util_avg may also factor in frequency scaling and CPU capacity scaling:
+ *
+ *   util_avg = running% * SCHED_CAPACITY_SCALE * freq% * capacity%
+ *
+ * where freq% is the same as above, and capacity% is the CPU capacity
+ * normalized to the greatest capacity (due to uarch differences, etc).
+ *
+ * N.B., the above ratios (runnable%, running%, freq%, and capacity%)
+ * themselves are in the range of [0, 1]. To do fixed point arithmetic,
+ * we therefore scale them up to as large a range as necessary. This
+ * is, for example, reflected by util_avg's SCHED_CAPACITY_SCALE.
+ *
+ * [Overflow issue]
+ *
+ * The 64-bit load_sum can have 4353082796 (=2^64/47742/88761) entities
+ * with the highest load (=88761) always runnable on a single cfs_rq, and
+ * should not overflow, as that number already exceeds PID_MAX_LIMIT.
+ *
+ * For all other cases (including 32-bit kernels), struct load_weight's
+ * weight will overflow before we do, because:
+ *
+ *   Max(load_avg) <= Max(load.weight)
+ *
+ * It is then the load_weight's responsibility to consider overflow
+ * issues.
  */
 struct sched_avg {
 	u64 last_update_time, load_sum;
-- 
2.1.4
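
As a footnote to the geometric series above: every ~1ms period contributes
up to 1024 to the sum, and each period the whole sum is aged by y, where
y^32 = 0.5. An always-runnable entity therefore converges to roughly
1024/(1-y) ~= 47742 (the kernel's LOAD_AVG_MAX), which together with the
highest weight 88761 yields the 2^64/47742/88761 ~= 4.35e9 bound quoted
in the comment, well above PID_MAX_LIMIT (4194304 on 64-bit). Below is a
minimal user-space sketch of this accumulation; the names (PERIOD_CONTRIB,
SUM_MAX, decay_one_period) and the DECAY constant are illustrative
approximations only, not the kernel's actual __update_load_avg(), which
uses a precomputed table of 32-bit inverse multipliers instead of a
division.

#include <stdint.h>
#include <stdio.h>

#define PERIOD_CONTRIB	1024	/* contribution of one fully runnable period */
#define SCALE		1024	/* fixed-point unit, like SCHED_CAPACITY_SCALE */
#define DECAY		1002	/* ~y * 1024, where y^32 = 0.5 (y ~= 0.97857) */
#define SUM_MAX		47742	/* kernel's LOAD_AVG_MAX: limit of the series */

/* Age a running sum by one period: sum *= y, in fixed point. */
static uint64_t decay_one_period(uint64_t sum)
{
	return sum * DECAY / SCALE;
}

int main(void)
{
	uint64_t sum = 0;
	int p;

	/* An entity that stays runnable for 100 consecutive periods. */
	for (p = 0; p < 100; p++)
		sum = decay_one_period(sum) + PERIOD_CONTRIB;

	/*
	 * runnable% scaled to [0, SCALE]: this entity's sum relative to
	 * an always-runnable entity's limit; prints roughly 900/1024.
	 */
	printf("runnable%% ~= %llu/1024\n",
	       (unsigned long long)(sum * SCALE / SUM_MAX));
	return 0;
}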