This 4th version of the scale invariance patchset adds an important change
compared to the previous ones. It still scales the time to reflect the
amount of work that has been done during the elapsed running time, but this
is now done at the rq level instead of per entity and per rt/dl/cfs_rq. The
main advantage is that the scaling is done once per clock update and we no
longer need to maintain a per-sched_avg stolen_idle_time. This also ensures
that all the PELT signals of a rq will always be synced.
The 1st patch makes the rq_of() helper function available to pelt.c and
the 2nd patch implements the new scaling algorithm.
Vincent Guittot (2):
sched/fair: move rq_of helper function
sched/fair: update scale invariance of PELT
kernel/sched/core.c | 2 +-
kernel/sched/deadline.c | 6 ++--
kernel/sched/fair.c | 29 +++++-----------
kernel/sched/pelt.c | 88 ++++++++++++++++++++++++++++++++++++++++++++-----
kernel/sched/pelt.h | 27 +++++++++++++++
kernel/sched/rt.c | 6 ++--
kernel/sched/sched.h | 18 ++++++++++
7 files changed, 139 insertions(+), 37 deletions(-)
--
2.7.4
The current implementation of load tracking invariance scales the
contribution with the current frequency and the uarch performance (only for
utilization) of the CPU. One main result of this formula is that the
figures are capped by the current capacity of the CPU. Another is that
load_avg is not invariant, because it is not scaled with uarch.
The util_avg of a periodic task that runs r time slots every p time slots
varies in the range:
U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)
where U is the max util_avg value (= SCHED_CAPACITY_SCALE), y the PELT
decay factor, and y^i the decay accumulated over the i elapsed idle time
slots.
At a lower capacity, the range becomes:
U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)
where C reflects the ratio between the current compute capacity and the max
capacity, and r' the longer running time at that capacity.
So C tries to compensate for the change in (1-y^r'), but this compensation
cannot be accurate.
Instead of scaling the contribution value of the PELT algorithm, we should
scale the running time. The PELT signal aims to track the amount of
computation of tasks and/or rqs, so it seems more correct to scale the
running time to reflect the effective amount of computation done since the
last update.
In order to be fully invariant, we need to apply the same amount of
running time and idle time whatever the current capacity. Because running
at lower capacity implies that the task will run longer, we have to ensure
that the same amount of idle time will be applied when the system becomes
idle and no idle time has been "stolen". But reaching the maximum
utilization value (SCHED_CAPACITY_SCALE) means that the task is seen as an
always-running task whatever the capacity of the CPU (even at max compute
capacity). In this case, we can discard this "stolen" idle time, which
becomes meaningless.
In order to achieve this time scaling, a new clock_pelt is created per rq.
The increase of this clock scales with the current capacity when something
is running on the rq and synchronizes with clock_task when the rq is idle.
With this mechanism, we ensure the same running and idle time whatever the
current capacity. This also makes it possible to simplify the PELT
algorithm by removing all references to uarch and frequency and by applying
the same contribution to utilization and load. Furthermore, the scaling is
done only once per clock update (update_rq_clock_task()) instead of during
each update of the sched_entities and cfs/rt/dl_rq of the rq as in the
current implementation. This is interesting when cgroups are involved, as
shown in the results below:
Tests were run on a hikey (octo-core ARM platform) with the performance
cpufreq governor and only the shallowest c-state enabled, to remove the
variance generated by those power features so that we only track the impact
of the PELT algorithm. Each test runs 16 times.
./perf bench sched pipe
(higher is better)
                 tip/sched/core         + patch
kernel           ops/seconds            ops/seconds            diff
cgroup
root             59648(+/- 0.13%)       59785(+/- 0.24%)       +0.23%
level1           55570(+/- 0.21%)       56003(+/- 0.24%)       +0.78%
level2           52100(+/- 0.20%)       52788(+/- 0.22%)       +1.32%
hackbench -l 1000
(lower is better)
                 tip/sched/core         + patch
kernel           duration(sec)          duration(sec)          diff
cgroup
root             4.472(+/- 1.86%)       4.346(+/- 2.74%)       -2.80%
level1           5.039(+/- 11.05%)      4.662(+/- 7.57%)       -7.47%
level2           5.195(+/- 10.66%)      4.877(+/- 8.90%)       -6.12%
With this new algorithm, the responsiveness of PELT is improved when the
CPU is not running at max capacity. Below are some examples of the time
needed to reach some typical load values according to the capacity of the
CPU, with the current implementation and with this patch:
Util (%)       max capacity    half capacity(mainline)    half capacity(w/ patch)
972 (95%)      138ms           not reachable              276ms
486 (47.5%)    30ms            138ms                      60ms
256 (25%)      13ms            32ms                       26ms
On my hikey (octo-core ARM platform) with the schedutil governor, the time
to reach the max OPP when starting from null utilization decreases from
223ms with the current scale invariance to 121ms with the new algorithm.
For this test, I have enabled arch_scale_freq for arm64.
Signed-off-by: Vincent Guittot <[email protected]>
---
kernel/sched/core.c | 2 +-
kernel/sched/deadline.c | 6 ++--
kernel/sched/fair.c | 16 ++++-----
kernel/sched/pelt.c | 88 ++++++++++++++++++++++++++++++++++++++++++++-----
kernel/sched/pelt.h | 27 +++++++++++++++
kernel/sched/rt.c | 6 ++--
kernel/sched/sched.h | 2 ++
7 files changed, 123 insertions(+), 24 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 625bc98..84e5c48 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -181,6 +181,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
if ((irq_delta + steal) && sched_feat(NONTASK_CAPACITY))
update_irq_load_avg(rq, irq_delta + steal);
#endif
+ update_rq_clock_pelt(rq, delta);
}
void update_rq_clock(struct rq *rq)
@@ -205,7 +206,6 @@ void update_rq_clock(struct rq *rq)
update_rq_clock_task(rq, delta);
}
-
#ifdef CONFIG_SCHED_HRTICK
/*
* Use HR-timers to deliver accurate preemption points.
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 997ea7b..68cb4dc 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1761,7 +1761,7 @@ pick_next_task_dl(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
deadline_queue_push_tasks(rq);
if (rq->curr->sched_class != &dl_sched_class)
- update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
return p;
}
@@ -1770,7 +1770,7 @@ static void put_prev_task_dl(struct rq *rq, struct task_struct *p)
{
update_curr_dl(rq);
- update_dl_rq_load_avg(rq_clock_task(rq), rq, 1);
+ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 1);
if (on_dl_rq(&p->dl) && p->nr_cpus_allowed > 1)
enqueue_pushable_dl_task(rq, p);
}
@@ -1787,7 +1787,7 @@ static void task_tick_dl(struct rq *rq, struct task_struct *p, int queued)
{
update_curr_dl(rq);
- update_dl_rq_load_avg(rq_clock_task(rq), rq, 1);
+ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 1);
/*
* Even when we have runtime, update_curr_dl() might have resulted in us
* not being the leftmost task anymore. In that case NEED_RESCHED will
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0969ce3..5677254 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -764,7 +764,7 @@ void post_init_entity_util_avg(struct sched_entity *se)
* such that the next switched_to_fair() has the
* expected state.
*/
- se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
+ se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
return;
}
}
@@ -3400,7 +3400,7 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
/* Update task and its cfs_rq load average */
static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
- u64 now = cfs_rq_clock_task(cfs_rq);
+ u64 now = cfs_rq_clock_pelt(cfs_rq);
struct rq *rq = rq_of(cfs_rq);
int cpu = cpu_of(rq);
int decayed;
@@ -7285,7 +7285,7 @@ static void update_blocked_averages(int cpu)
if (throttled_hierarchy(cfs_rq))
continue;
- if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq))
+ if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq))
update_tg_load_avg(cfs_rq, 0);
/* Propagate pending load changes to the parent, if any: */
@@ -7306,8 +7306,8 @@ static void update_blocked_averages(int cpu)
}
curr_class = rq->curr->sched_class;
- update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
- update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
+ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &rt_sched_class);
+ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
update_irq_load_avg(rq, 0);
/* Don't need periodic decay once load/util_avg are null */
if (others_have_blocked(rq))
@@ -7377,11 +7377,11 @@ static inline void update_blocked_averages(int cpu)
rq_lock_irqsave(rq, &rf);
update_rq_clock(rq);
- update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+ update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
curr_class = rq->curr->sched_class;
- update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
- update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
+ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &rt_sched_class);
+ update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
update_irq_load_avg(rq, 0);
#ifdef CONFIG_NO_HZ_COMMON
rq->last_blocked_load_update_tick = jiffies;
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 35475c0..48f4f07 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -30,6 +30,72 @@
#include "pelt.h"
/*
+ * The clock_pelt scales the time to reflect the effective amount of
+ * computation done during the running delta time but then syncs back to
+ * clock_task when the rq is idle.
+ *
+ *
+ * absolute time | 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16
+ * @ max capacity ------******---------------******---------------
+ * @ half capacity ------************---------************---------
+ * clock pelt | 1| 2| 3| 4| 7| 8| 9| 10| 11|14|15|16
+ *
+ */
+void update_rq_clock_pelt(struct rq *rq, s64 delta)
+{
+
+ if (is_idle_task(rq->curr)) {
+ u32 divider = (LOAD_AVG_MAX - 1024 + rq->cfs.avg.period_contrib) << SCHED_CAPACITY_SHIFT;
+ u32 overload = rq->cfs.avg.util_sum + LOAD_AVG_MAX;
+ overload += rq->avg_rt.util_sum;
+ overload += rq->avg_dl.util_sum;
+
+ /*
+ * Reflecting some stolen time makes sense only if the idle
+ * phase would be present at max capacity. As soon as the
+ * utilization of a rq has reached the maximum value, it is
+ * considered as an always running rq without idle time to
+ * steal. This potential idle time is considered as lost in
+ * this case. We keep track of this lost idle time compared to
+ * rq's clock_task.
+ */
+ if (overload >= divider)
+ rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
+
+
+ /* The rq is idle, we can sync to clock_task */
+ rq->clock_pelt = rq_clock_task(rq);
+
+
+ } else {
+ /*
+ * When a rq runs at a lower compute capacity, it will need
+ * more time to do the same amount of work than at max
+ * capacity: either because it takes more time to compute the
+ * same amount of work or because taking more time means
+ * sharing more often the CPU between entities.
+ * In order to be invariant, we scale the delta to reflect how
+ * much work has been really done.
+ * Running at lower capacity also means running longer to do
+ * the same amount of work and this results in stealing some
+ * idle time that will disturb the load signal compared to
+ * max capacity. This stolen idle time will be automatically
+ * reflected when the rq will be idle and the clock will be
+ * synced with rq_clock_task.
+ */
+
+ /*
+ * scale the elapsed time to reflect the real amount of
+ * computation
+ */
+ delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));
+ delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
+
+ rq->clock_pelt += delta;
+ }
+}
+
+/*
* Approximate:
* val * y^n, where y^32 ~= 0.5 (~1 scheduling period)
*/
@@ -106,16 +172,12 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
* n=1
*/
static __always_inline u32
-accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
+accumulate_sum(u64 delta, struct sched_avg *sa,
unsigned long load, unsigned long runnable, int running)
{
- unsigned long scale_freq, scale_cpu;
u32 contrib = (u32)delta; /* p == 0 -> delta < 1024 */
u64 periods;
- scale_freq = arch_scale_freq_capacity(cpu);
- scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
-
delta += sa->period_contrib;
periods = delta / 1024; /* A period is 1024us (~1ms) */
@@ -137,13 +199,12 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
}
sa->period_contrib = delta;
- contrib = cap_scale(contrib, scale_freq);
if (load)
sa->load_sum += load * contrib;
if (runnable)
sa->runnable_load_sum += runnable * contrib;
if (running)
- sa->util_sum += contrib * scale_cpu;
+ sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
return periods;
}
@@ -221,7 +282,7 @@ ___update_load_sum(u64 now, int cpu, struct sched_avg *sa,
* Step 1: accumulate *_sum since last_update_time. If we haven't
* crossed period boundaries, finish.
*/
- if (!accumulate_sum(delta, cpu, sa, load, runnable, running))
+ if (!accumulate_sum(delta, sa, load, runnable, running))
return 0;
return 1;
@@ -371,12 +432,21 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
int update_irq_load_avg(struct rq *rq, u64 running)
{
int ret = 0;
+
+ /*
+ * We can't use clock_pelt because irq time is not accounted in
+ * clock_task. Instead we directly scale the running time to
+ * reflect the real amount of computation
+ */
+ running = cap_scale(running, arch_scale_freq_capacity(cpu_of(rq)));
+ running = cap_scale(running, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
+
/*
* We know the time that has been used by interrupt since last update
* but we don't when. Let be pessimistic and assume that interrupt has
* happened just before the update. This is not so far from reality
* because interrupt will most probably wake up task and trig an update
- * of rq clock during which the metric si updated.
+ * of rq clock during which the metric is updated.
* We start to decay with normal context time and then we add the
* interrupt context time.
* We can safely remove running from rq->clock because
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index d2894db..b4ce173 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -42,6 +42,29 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
WRITE_ONCE(avg->util_est.enqueued, enqueued);
}
+void update_rq_clock_pelt(struct rq *rq, s64 delta);
+
+static inline u64 rq_clock_pelt(struct rq *rq)
+{
+ return rq->clock_pelt - rq->lost_idle_time;
+}
+
+#ifdef CONFIG_CFS_BANDWIDTH
+/* rq clock_pelt normalized against any time this cfs_rq has spent throttled */
+static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+{
+ if (unlikely(cfs_rq->throttle_count))
+ return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
+
+ return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
+}
+#else
+static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+{
+ return rq_clock_pelt(rq_of(cfs_rq));
+}
+#endif
+
#else
static inline int
@@ -67,6 +90,10 @@ update_irq_load_avg(struct rq *rq, u64 running)
{
return 0;
}
+
+static inline void
+update_rq_clock_pelt(struct rq *rq, s64 delta) {}
+
#endif
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2e2955a..f62f2d5 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1584,7 +1584,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
* rt task
*/
if (rq->curr->sched_class != &rt_sched_class)
- update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
+ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
return p;
}
@@ -1593,7 +1593,7 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
{
update_curr_rt(rq);
- update_rt_rq_load_avg(rq_clock_task(rq), rq, 1);
+ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);
/*
* The previous task needs to be made eligible for pushing
@@ -2324,7 +2324,7 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
struct sched_rt_entity *rt_se = &p->rt;
update_curr_rt(rq);
- update_rt_rq_load_avg(rq_clock_task(rq), rq, 1);
+ update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);
watchdog(rq, p);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3990818..d987f50 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -848,6 +848,8 @@ struct rq {
unsigned int clock_update_flags;
u64 clock;
u64 clock_task;
+ u64 clock_pelt;
+ unsigned long lost_idle_time;
atomic_t nr_iowait;
--
2.7.4
Move rq_of() helper function so it can be used in pelt.c
Signed-off-by: Vincent Guittot <[email protected]>
---
kernel/sched/fair.c | 13 -------------
kernel/sched/sched.h | 16 ++++++++++++++++
2 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6bd142d..0969ce3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -248,13 +248,6 @@ const struct sched_class fair_sched_class;
*/
#ifdef CONFIG_FAIR_GROUP_SCHED
-
-/* cpu runqueue to which this cfs_rq is attached */
-static inline struct rq *rq_of(struct cfs_rq *cfs_rq)
-{
- return cfs_rq->rq;
-}
-
static inline struct task_struct *task_of(struct sched_entity *se)
{
SCHED_WARN_ON(!entity_is_task(se));
@@ -411,12 +404,6 @@ static inline struct task_struct *task_of(struct sched_entity *se)
return container_of(se, struct task_struct, se);
}
-static inline struct rq *rq_of(struct cfs_rq *cfs_rq)
-{
- return container_of(cfs_rq, struct rq, cfs);
-}
-
-
#define for_each_sched_entity(se) \
for (; se; se = NULL)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3a4ef8f..3990818 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -566,6 +566,22 @@ struct cfs_rq {
#endif /* CONFIG_FAIR_GROUP_SCHED */
};
+#ifdef CONFIG_FAIR_GROUP_SCHED
+
+/* cpu runqueue to which this cfs_rq is attached */
+static inline struct rq *rq_of(struct cfs_rq *cfs_rq)
+{
+ return cfs_rq->rq;
+}
+
+#else /* !CONFIG_FAIR_GROUP_SCHED */
+
+static inline struct rq *rq_of(struct cfs_rq *cfs_rq)
+{
+ return container_of(cfs_rq, struct rq, cfs);
+}
+#endif
+
static inline int rt_bandwidth_enabled(void)
{
return sysctl_sched_rt_runtime >= 0;
--
2.7.4
Hi Vincent,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on tip/sched/core]
[also build test ERROR on v4.19-rc8 next-20181019]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Vincent-Guittot/sched-fair-move-rq_of-helper-function/20181020-081004
config: i386-randconfig-x002-201841 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386
All error/warnings (new ones prefixed by >>):
In file included from arch/x86/include/asm/current.h:5:0,
from include/linux/sched.h:12,
from kernel/sched/pelt.c:27:
kernel/sched/sched.h: In function 'rq_of':
>> include/linux/kernel.h:997:51: error: dereferencing pointer to incomplete type 'struct rq'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^
include/linux/compiler.h:335:18: note: in definition of macro '__compiletime_assert'
int __cond = !(condition); \
^~~~~~~~~
include/linux/compiler.h:358:2: note: in expansion of macro '_compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:45:37: note: in expansion of macro 'compiletime_assert'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^~~~~~~~~~~~~~~~~~
include/linux/kernel.h:997:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^~~~~~~~~~~~~~~~
include/linux/kernel.h:997:20: note: in expansion of macro '__same_type'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^~~~~~~~~~~
>> kernel/sched/sched.h:581:9: note: in expansion of macro 'container_of'
return container_of(cfs_rq, struct rq, cfs);
^~~~~~~~~~~~
In file included from <command-line>:0:0:
>> include/linux/compiler_types.h:237:35: error: invalid use of undefined type 'struct rq'
#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
^
include/linux/stddef.h:17:32: note: in expansion of macro '__compiler_offsetof'
#define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
^~~~~~~~~~~~~~~~~~~
include/linux/kernel.h:1000:21: note: in expansion of macro 'offsetof'
((type *)(__mptr - offsetof(type, member))); })
^~~~~~~~
>> kernel/sched/sched.h:581:9: note: in expansion of macro 'container_of'
return container_of(cfs_rq, struct rq, cfs);
^~~~~~~~~~~~
--
In file included from arch/x86/include/asm/current.h:5:0,
from include/linux/sched.h:12,
from kernel/sched/sched.h:5,
from kernel/sched/fair.c:23:
kernel/sched/sched.h: In function 'rq_of':
>> include/linux/kernel.h:997:51: error: dereferencing pointer to incomplete type 'struct rq'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^
include/linux/compiler.h:335:18: note: in definition of macro '__compiletime_assert'
int __cond = !(condition); \
^~~~~~~~~
include/linux/compiler.h:358:2: note: in expansion of macro '_compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:45:37: note: in expansion of macro 'compiletime_assert'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^~~~~~~~~~~~~~~~~~
include/linux/kernel.h:997:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^~~~~~~~~~~~~~~~
include/linux/kernel.h:997:20: note: in expansion of macro '__same_type'
BUILD_BUG_ON_MSG(!__same_type(*(ptr), ((type *)0)->member) && \
^~~~~~~~~~~
>> kernel/sched/sched.h:581:9: note: in expansion of macro 'container_of'
return container_of(cfs_rq, struct rq, cfs);
^~~~~~~~~~~~
In file included from <command-line>:0:0:
>> include/linux/compiler_types.h:237:35: error: invalid use of undefined type 'struct rq'
#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
^
include/linux/stddef.h:17:32: note: in expansion of macro '__compiler_offsetof'
#define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER)
^~~~~~~~~~~~~~~~~~~
include/linux/kernel.h:1000:21: note: in expansion of macro 'offsetof'
((type *)(__mptr - offsetof(type, member))); })
^~~~~~~~
>> kernel/sched/sched.h:581:9: note: in expansion of macro 'container_of'
return container_of(cfs_rq, struct rq, cfs);
^~~~~~~~~~~~
In file included from kernel/sched/fair.c:23:0:
kernel/sched/sched.h:582:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
vim +/container_of +581 kernel/sched/sched.h
578
579 static inline struct rq *rq_of(struct cfs_rq *cfs_rq)
580 {
> 581 return container_of(cfs_rq, struct rq, cfs);
582 }
583 #endif
584
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
Hi Vincent,
On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
>
> /*
> + * The clock_pelt scales the time to reflect the effective amount of
> + * computation done during the running delta time but then sync back to
> + * clock_task when rq is idle.
> + *
> + *
> + * absolute time | 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16
> + * @ max capacity ------******---------------******---------------
> + * @ half capacity ------************---------************---------
> + * clock pelt | 1| 2| 3| 4| 7| 8| 9| 10| 11|14|15|16
> + *
> + */
> +void update_rq_clock_pelt(struct rq *rq, s64 delta)
> +{
> +
> + if (is_idle_task(rq->curr)) {
> + u32 divider = (LOAD_AVG_MAX - 1024 + rq->cfs.avg.period_contrib) << SCHED_CAPACITY_SHIFT;
> + u32 overload = rq->cfs.avg.util_sum + LOAD_AVG_MAX;
> + overload += rq->avg_rt.util_sum;
> + overload += rq->avg_dl.util_sum;
> +
> + /*
> + * Reflecting some stolen time makes sense only if the idle
> + * phase would be present at max capacity. As soon as the
> + * utilization of a rq has reached the maximum value, it is
> + * considered as an always runnnig rq without idle time to
> + * steal. This potential idle time is considered as lost in
> + * this case. We keep track of this lost idle time compare to
> + * rq's clock_task.
> + */
> + if (overload >= divider)
> + rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> +
I am trying to understand this better. I believe we run into this scenario
when the frequency is limited due to thermal/userspace constraints. Let's
say the frequency is limited to Fmax/2. A 50% task at Fmax becomes 100%
running at Fmax/2. The utilization is built up to 100% after several
periods.
The clock_pelt runs at 1/2 the speed of clock_task, so we are losing the
idle time all along. What happens when the CPU enters idle for a short
duration and comes back to run this 100%-utilization task?
If the above block is not present, i.e., lost_idle_time is not tracked, we
stretch the idle time (since clock_pelt is synced to clock_task) and the
utilization is dropped. Right?
With the above block, we don't stretch the idle time. In fact we don't
consider the idle time at all. Because,
idle_time = now - last_time;
idle_time = (rq->clock_pelt - rq->lost_idle_time) - last_time
idle_time = (rq->clock_task - rq_clock_task + rq->clock_pelt_old) - last_time
idle_time = rq->clock_pelt_old - last_time
The last time is nothing but the last snapshot of the rq->clock_pelt when the
task entered sleep due to which CPU entered idle.
Can you please explain the significance of the above block with an example?
> +
> + /* The rq is idle, we can sync to clock_task */
> + rq->clock_pelt = rq_clock_task(rq);
> +
> +
> + } else {
> + /*
> + * When a rq runs at a lower compute capacity, it will need
> + * more time to do the same amount of work than at max
> + * capacity: either because it takes more time to compute the
> + * same amount of work or because taking more time means
> + * sharing more often the CPU between entities.
> + * In order to be invariant, we scale the delta to reflect how
> + * much work has been really done.
> + * Running at lower capacity also means running longer to do
> + * the same amount of work and this results in stealing some
> + * idle time that will disturb the load signal compared to
> + * max capacity; This stolen idle time will be automaticcally
> + * reflected when the rq will be idle and the clock will be
> + * synced with rq_clock_task.
> + */
> +
> + /*
> + * scale the elapsed time to reflect the real amount of
> + * computation
> + */
> + delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));
> + delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
> +
> + rq->clock_pelt += delta;
AFAICT, the rq->clock_pelt is used for both utilization and load. So the load
also becomes a function of CPU uarch now. Is this intentional?
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
> In order to achieve this time scaling, a new clock_pelt is created per rq.
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3990818..d987f50 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -848,6 +848,8 @@ struct rq {
> unsigned int clock_update_flags;
> u64 clock;
> u64 clock_task;
> + u64 clock_pelt;
> + unsigned long lost_idle_time;
Very clever that. Seems to work out nicely. We should maybe look at
ensuring all these clock fields are indeed on the same cacheline.
Hi Pavan,
On Tue, 23 Oct 2018 at 07:59, Pavan Kondeti <[email protected]> wrote:
>
> Hi Vincent,
>
> On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
> >
> > /*
> > + * The clock_pelt scales the time to reflect the effective amount of
> > + * computation done during the running delta time but then sync back to
> > + * clock_task when rq is idle.
> > + *
> > + *
> > + * absolute time | 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16
> > + * @ max capacity ------******---------------******---------------
> > + * @ half capacity ------************---------************---------
> > + * clock pelt | 1| 2| 3| 4| 7| 8| 9| 10| 11|14|15|16
> > + *
> > + */
> > +void update_rq_clock_pelt(struct rq *rq, s64 delta)
> > +{
> > +
> > + if (is_idle_task(rq->curr)) {
> > + u32 divider = (LOAD_AVG_MAX - 1024 + rq->cfs.avg.period_contrib) << SCHED_CAPACITY_SHIFT;
> > + u32 overload = rq->cfs.avg.util_sum + LOAD_AVG_MAX;
> > + overload += rq->avg_rt.util_sum;
> > + overload += rq->avg_dl.util_sum;
> > +
> > + /*
> > + * Reflecting some stolen time makes sense only if the idle
> > + * phase would be present at max capacity. As soon as the
> > + * utilization of a rq has reached the maximum value, it is
> > + * considered as an always runnnig rq without idle time to
> > + * steal. This potential idle time is considered as lost in
> > + * this case. We keep track of this lost idle time compare to
> > + * rq's clock_task.
> > + */
> > + if (overload >= divider)
> > + rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> > +
>
> I am trying to understand this better. I believe we run into this scenario, when
> the frequency is limited due to thermal/userspace constraints. Lets say
Yes, these are the most common use cases, but this can also happen after
task migrations or with a cpufreq governor that doesn't increase the OPP
fast enough for the current utilization.
> frequency is limited to Fmax/2. A 50% task at Fmax, becomes 100% running at
> Fmax/2. The utilization is built up to 100% after several periods.
> The clock_pelt runs at 1/2 speed of the clock_task. We are loosing the idle time
> all along. What happens when the CPU enters idle for a short duration and comes
> back to run this 100% utilization task?
If you are at 100%, we only apply the short idle duration itself; it is
not stretched.
>
> If the above block is not present i.e lost_idle_time is not tracked, we
> stretch the idle time (since clock_pelt is synced to clock_task) and the
> utilization is dropped. Right?
Yes, that's what would happen. I give more details below.
>
> With the above block, we don't stretch the idle time. In fact we don't
> consider the idle time at all. Because,
>
> idle_time = now - last_time;
>
> idle_time = (rq->clock_pelt - rq->lost_idle_time) - last_time
> idle_time = (rq->clock_task - rq_clock_task + rq->clock_pelt_old) - last_time
> idle_time = rq->clock_pelt_old - last_time
>
> The last time is nothing but the last snapshot of the rq->clock_pelt when the
> task entered sleep due to which CPU entered idle.
The condition for dropping this idle time is quite important. This only
happens when the utilization reaches the max compute capacity of the CPU.
Otherwise, the idle time is fully applied.
>
> Can you please explain the significance of the above block with an example?
The PELT signal reaches its max value after 323ms at full capacity, which
means that we can't tell the difference between tasks running 323ms, 500ms
or more at max capacity. As a result, we consider that the CPU is fully
used and there is no idle time when the utilization equals the max
capacity. If the CPU runs at half the capacity, it will run 626ms before
reaching max utilization, and at that point we stop stretching the idle
time because we consider that there is no idle time to stretch. By
default, we never drop the idle time, which is necessary for being fully
invariant, and we always apply it. But we have to drop it when we consider
that it would not have been present at max capacity either. That's the
whole purpose of the block that you mention.
Let's take a task that runs 120ms with a period of 330ms.
At max capacity, the task utilization will vary in the range [10-949].
At half capacity, the task will run 240ms and the range will stay the
same, as the idle time and the running time will be the same once
stretched and scaled.
At one third of the capacity, the task should run 360ms in a period of
330ms, which means that the task will always be running and will probably
even lose some events, as it will not have finished when the new period
starts. In this case, the task/CPU utilization will reach the max value
just like an always-running task. As we can't tell any difference anymore,
we consider that there is no idle time to recover once the CPU becomes
idle, and the block of code that you mention above will cancel the
stretching of idle time.
>
> > +
> > + /* The rq is idle, we can sync to clock_task */
> > + rq->clock_pelt = rq_clock_task(rq);
> > +
> > +
> > + } else {
> > + /*
> > + * When a rq runs at a lower compute capacity, it will need
> > + * more time to do the same amount of work than at max
> > + * capacity: either because it takes more time to compute the
> > + * same amount of work or because taking more time means
> > + * sharing more often the CPU between entities.
> > + * In order to be invariant, we scale the delta to reflect how
> > + * much work has been really done.
> > + * Running at lower capacity also means running longer to do
> > + * the same amount of work and this results in stealing some
> > + * idle time that will disturb the load signal compared to
> > + * max capacity; this stolen idle time will be automatically
> > + * reflected when the rq will be idle and the clock will be
> > + * synced with rq_clock_task.
> > + */
> > +
> > + /*
> > + * scale the elapsed time to reflect the real amount of
> > + * computation
> > + */
> > + delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));
> > + delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
> > +
> > + rq->clock_pelt += delta;
>
> AFAICT, the rq->clock_pelt is used for both utilization and load. So the load
> also becomes a function of CPU uarch now. Is this intentional?
Yes, it is. Load is not scaled with uarch in the current
implementation because the load would be capped by the max capacity of
the local CPU, and this messes up the load balance.
Let's take the example of CPU0 with a max capacity of 1024 and CPU1
with a max capacity of 512.
We have 6 always-running tasks with the same nice priority.
Then, put 3 tasks on each CPU.
If the load is scaled/capped with uarch, LB will consider the system
balanced: 3*max_load / 1024 for CPU0 and 3*(max_load / 2) / 512 for
CPU1. But tasks on CPU0 have twice the compute capacity of tasks on
CPU1.
With the new scaling, we don't have this problem anymore, so we can
take uarch into account and have a more accurate load.
Regards,
Vincent
>
> Thanks,
> Pavan
On Tue, 23 Oct 2018 at 12:01, Peter Zijlstra <[email protected]> wrote:
>
> On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
> > In order to achieve this time scaling, a new clock_pelt is created per rq.
>
>
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 3990818..d987f50 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -848,6 +848,8 @@ struct rq {
> > unsigned int clock_update_flags;
> > u64 clock;
> > u64 clock_task;
> > + u64 clock_pelt;
> > + unsigned long lost_idle_time;
>
> Very clever that. Seems to work out nicely. We should maybe look at
Thanks
> ensuring all these clock fields are indeed on the same cacheline.
yes good point
Hi Vincent,
Thanks for the detailed explanation.
On Tue, Oct 23, 2018 at 02:15:08PM +0200, Vincent Guittot wrote:
> Hi Pavan,
>
> On Tue, 23 Oct 2018 at 07:59, Pavan Kondeti <[email protected]> wrote:
> >
> > Hi Vincent,
> >
> > On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
> > >
> > > /*
> > > + * The clock_pelt scales the time to reflect the effective amount of
> > > + * computation done during the running delta time but then sync back to
> > > + * clock_task when rq is idle.
> > > + *
> > > + *
> > > + * absolute time | 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16
> > > + * @ max capacity ------******---------------******---------------
> > > + * @ half capacity ------************---------************---------
> > > + * clock pelt | 1| 2| 3| 4| 7| 8| 9| 10| 11|14|15|16
> > > + *
> > > + */
> > > +void update_rq_clock_pelt(struct rq *rq, s64 delta)
> > > +{
> > > +
> > > + if (is_idle_task(rq->curr)) {
> > > + u32 divider = (LOAD_AVG_MAX - 1024 + rq->cfs.avg.period_contrib) << SCHED_CAPACITY_SHIFT;
> > > + u32 overload = rq->cfs.avg.util_sum + LOAD_AVG_MAX;
> > > + overload += rq->avg_rt.util_sum;
> > > + overload += rq->avg_dl.util_sum;
> > > +
> > > + /*
> > > + * Reflecting some stolen time makes sense only if the idle
> > > + * phase would be present at max capacity. As soon as the
> > > + * utilization of a rq has reached the maximum value, it is
> > > + * considered as an always running rq without idle time to
> > > + * steal. This potential idle time is considered as lost in
> > > + * this case. We keep track of this lost idle time compared to
> > > + * rq's clock_task.
> > > + */
> > > + if (overload >= divider)
> > > + rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> > > +
> >
> > I am trying to understand this better. I believe we run into this scenario, when
> > the frequency is limited due to thermal/userspace constraints. Lets say
>
> Yes these are the most common UCs but this can also happen after tasks
> migration or with a cpufreq governor that doesn't increase OPP fast
> enough for current utilization.
>
> > frequency is limited to Fmax/2. A 50% task at Fmax, becomes 100% running at
> > Fmax/2. The utilization is built up to 100% after several periods.
> > The clock_pelt runs at 1/2 speed of the clock_task. We are losing the idle time
> > all along. What happens when the CPU enters idle for a short duration and comes
> > back to run this 100% utilization task?
>
> If you are at 100%, we only apply the short idle duration
>
> >
> > If the above block is not present i.e lost_idle_time is not tracked, we
> > stretch the idle time (since clock_pelt is synced to clock_task) and the
> > utilization is dropped. Right?
>
> Yes, that's what would happen. I give more details below.
>
> >
> > With the above block, we don't stretch the idle time. In fact we don't
> > consider the idle time at all. Because,
> >
> > idle_time = now - last_time;
> >
> > idle_time = (rq->clock_pelt - rq->lost_idle_time) - last_time
> > idle_time = (rq->clock_task - rq_clock_task + rq->clock_pelt_old) - last_time
> > idle_time = rq->clock_pelt_old - last_time
> >
> > The last time is nothing but the last snapshot of the rq->clock_pelt when the
> > task entered sleep due to which CPU entered idle.
>
> The condition for dropping this idle time is quite important. This
> only happens when the utilization reaches max compute capacity of the
> CPU. Otherwise, the idle time will be fully applied
Right.
rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt
This not only tracks the lost idle time due to running slow but also
the absolute/real sleep time. For example, when the slow-running 100%
task sleeps for 100 msec, aren't we ignoring the 100 msec of sleep
there?
For example, a task ran 323 msec at full capacity and sleeps for
(1000-323) msec. When it wakes up, the utilization is dropped. If the
same task runs for 626 msec at half capacity and sleeps for (1000-626)
msec, shouldn't we drop the utilization by taking the (1000-626) sleep
time into account? I understand why we don't stretch the idle time to
(1000-323), but it is not clear to me why we completely drop the idle
time.
>
> >
> > Can you please explain the significance of the above block with an example?
>
> The PELT signal reaches its max value after 323ms at full capacity,
> which means that we can't tell the difference between tasks running
> 323ms, 500ms or more at max capacity. As a result, we consider that
> the CPU is fully used and there is no idle time when the utilization
> equals max capacity. If the CPU runs at half the capacity, it will run
> 626ms before reaching max utilization, and at that point we stop
> stretching the idle time because we consider that there is no idle
> time to stretch. By default, we never drop the idle time, which is
> necessary for being fully invariant, and we always apply it. But we
> have to drop it when we consider that it would not have been present
> at max capacity either. That's the whole purpose of the block that you
> mention.
This is very much clear.
>
> Let's take a task that runs 120ms with a period of 330ms.
> At max capacity, task utilization will vary in the range [10-949].
> At half capacity, the task will run 240ms and the range will stay the
> same, as the idle time and the running time will be the same once
> stretched and scaled.
> At one third of the capacity, the task should run 360ms in a period of
> 330ms, which means that the task will always run and will probably
> even lose some events, as it will not have finished when the new
> period starts. In this case, the task/CPU utilization will reach the
> max value just like an always-running task. As we can't tell the
> difference anymore, we consider that there is no idle time to recover
> once the CPU becomes idle, and the block of code that you mention
> above will cancel the stretch of idle time.
>
Got it.
> >
> > > +
> > > + /* The rq is idle, we can sync to clock_task */
> > > + rq->clock_pelt = rq_clock_task(rq);
> > > +
> > > +
> > > + } else {
> > > + /*
> > > + * When a rq runs at a lower compute capacity, it will need
> > > + * more time to do the same amount of work than at max
> > > + * capacity: either because it takes more time to compute the
> > > + * same amount of work or because taking more time means
> > > + * sharing more often the CPU between entities.
> > > + * In order to be invariant, we scale the delta to reflect how
> > > + * much work has been really done.
> > > + * Running at lower capacity also means running longer to do
> > > + * the same amount of work and this results in stealing some
> > > + * idle time that will disturb the load signal compared to
> > > + * max capacity; this stolen idle time will be automatically
> > > + * reflected when the rq will be idle and the clock will be
> > > + * synced with rq_clock_task.
> > > + */
> > > +
> > > + /*
> > > + * scale the elapsed time to reflect the real amount of
> > > + * computation
> > > + */
> > > + delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));
> > > + delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
> > > +
> > > + rq->clock_pelt += delta;
> >
> > AFAICT, the rq->clock_pelt is used for both utilization and load. So the load
> > also becomes a function of CPU uarch now. Is this intentional?
>
> Yes, it is. Load is not scaled with uarch in the current
> implementation because the load would be capped by the max capacity
> of the local CPU, and this messes up the load balance.
>
> Let's take the example of CPU0 with a max capacity of 1024 and CPU1
> with a max capacity of 512.
> We have 6 always-running tasks with the same nice priority.
> Then, put 3 tasks on each CPU.
> If the load is scaled/capped with uarch, LB will consider the system
> balanced: 3*max_load / 1024 for CPU0 and 3*(max_load / 2) / 512 for
> CPU1. But tasks on CPU0 have twice the compute capacity of tasks on
> CPU1.
>
> With the new scaling, we don't have this problem anymore, so we can
> take uarch into account and have a more accurate load.
>
Got it.
Thanks,
Pavan
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
Hi Pavan,
On Wed, 24 Oct 2018 at 06:53, Pavan Kondeti <[email protected]> wrote:
>
> Hi Vincent,
>
> Thanks for the detailed explanation.
>
> On Tue, Oct 23, 2018 at 02:15:08PM +0200, Vincent Guittot wrote:
> > Hi Pavan,
> >
> > On Tue, 23 Oct 2018 at 07:59, Pavan Kondeti <[email protected]> wrote:
> > >
> > > Hi Vincent,
> > >
> > > On Fri, Oct 19, 2018 at 06:17:51PM +0200, Vincent Guittot wrote:
> > > >
> > > > /*
> > > > + * The clock_pelt scales the time to reflect the effective amount of
> > > > + * computation done during the running delta time but then sync back to
> > > > + * clock_task when rq is idle.
> > > > + *
> > > > + *
> > > > + * absolute time | 1| 2| 3| 4| 5| 6| 7| 8| 9|10|11|12|13|14|15|16
> > > > + * @ max capacity ------******---------------******---------------
> > > > + * @ half capacity ------************---------************---------
> > > > + * clock pelt | 1| 2| 3| 4| 7| 8| 9| 10| 11|14|15|16
> > > > + *
> > > > + */
> > > > +void update_rq_clock_pelt(struct rq *rq, s64 delta)
> > > > +{
> > > > +
> > > > + if (is_idle_task(rq->curr)) {
> > > > + u32 divider = (LOAD_AVG_MAX - 1024 + rq->cfs.avg.period_contrib) << SCHED_CAPACITY_SHIFT;
> > > > + u32 overload = rq->cfs.avg.util_sum + LOAD_AVG_MAX;
> > > > + overload += rq->avg_rt.util_sum;
> > > > + overload += rq->avg_dl.util_sum;
> > > > +
> > > > + /*
> > > > + * Reflecting some stolen time makes sense only if the idle
> > > > + * phase would be present at max capacity. As soon as the
> > > > + * utilization of a rq has reached the maximum value, it is
> > > > + * considered as an always running rq without idle time to
> > > > + * steal. This potential idle time is considered as lost in
> > > > + * this case. We keep track of this lost idle time compared to
> > > > + * rq's clock_task.
> > > > + */
> > > > + if (overload >= divider)
> > > > + rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> > > > +
> > >
> > > I am trying to understand this better. I believe we run into this scenario, when
> > > the frequency is limited due to thermal/userspace constraints. Lets say
> >
> > Yes these are the most common UCs but this can also happen after tasks
> > migration or with a cpufreq governor that doesn't increase OPP fast
> > enough for current utilization.
> >
> > > frequency is limited to Fmax/2. A 50% task at Fmax, becomes 100% running at
> > > Fmax/2. The utilization is built up to 100% after several periods.
> > > The clock_pelt runs at 1/2 speed of the clock_task. We are losing the idle time
> > > all along. What happens when the CPU enters idle for a short duration and comes
> > > back to run this 100% utilization task?
> >
> > If you are at 100%, we only apply the short idle duration
> >
> > >
> > > If the above block is not present i.e lost_idle_time is not tracked, we
> > > stretch the idle time (since clock_pelt is synced to clock_task) and the
> > > utilization is dropped. Right?
> >
> > Yes, that's what would happen. I give more details below.
> >
> > >
> > > With the above block, we don't stretch the idle time. In fact we don't
> > > consider the idle time at all. Because,
> > >
> > > idle_time = now - last_time;
> > >
> > > idle_time = (rq->clock_pelt - rq->lost_idle_time) - last_time
> > > idle_time = (rq->clock_task - rq_clock_task + rq->clock_pelt_old) - last_time
> > > idle_time = rq->clock_pelt_old - last_time
> > >
> > > The last time is nothing but the last snapshot of the rq->clock_pelt when the
> > > task entered sleep due to which CPU entered idle.
> >
> > The condition for dropping this idle time is quite important. This
> > only happens when the utilization reaches max compute capacity of the
> > CPU. Otherwise, the idle time will be fully applied
>
> Right.
>
> rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt
>
> This not only tracks the lost idle time due to running slow but also
> the absolute/real sleep time. For example, when the slow-running 100%
> task sleeps for 100 msec, aren't we ignoring the 100 msec of sleep
> there?
>
> For example, a task ran 323 msec at full capacity and sleeps for
> (1000-323) msec. When it wakes up, the utilization is dropped. If the
> same task runs for 626 msec at half capacity and sleeps for (1000-626)
> msec, shouldn't we drop the utilization by taking the (1000-626) sleep
> time into account? I understand why we don't stretch the idle time to
> (1000-323), but it is not clear to me why we completely drop the idle
> time.
So this should not happen.
I'm going to update the way I track lost idle time, move it out of
update_rq_clock_pelt() and only do the test when entering idle.
This is even better as it simplifies update_rq_clock_pelt() and
reduces the number of tests for lost idle time.
Thanks for spotting this.
I'm preparing a new version with this, a build fix for !SMP and the
cacheline alignment suggested by Peter.
Vincent
Hi Vincent,
On 10/19/18 6:17 PM, Vincent Guittot wrote:
> The current implementation of load tracking invariance scales the
> contribution with current frequency and uarch performance (only for
> utilization) of the CPU. One main result of this formula is that the
> figures are capped by current capacity of CPU. Another one is that the
> load_avg is not invariant because not scaled with uarch.
>
> The util_avg of a periodic task that runs r time slots every p time slots
> varies in the range :
>
> U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)
>
> with U is the max util_avg value = SCHED_CAPACITY_SCALE
>
> At a lower capacity, the range becomes:
>
> U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)
>
> with C reflecting the compute capacity ratio between current capacity and
> max capacity.
>
> so C tries to compensate changes in (1-y^r') but it can't be accurate.
>
> Instead of scaling the contribution value of PELT algo, we should scale the
> running time. The PELT signal aims to track the amount of computation of
> tasks and/or rq so it seems more correct to scale the running time to
> reflect the effective amount of computation done since the last update.
>
> In order to be fully invariant, we need to apply the same amount of
> running time and idle time whatever the current capacity. Because running
> at lower capacity implies that the task will run longer, we have to ensure
> that the same amount of idle time will be applied when the system becomes idle
> and no idle time has been "stolen". But reaching the maximum utilization
> value (SCHED_CAPACITY_SCALE) means that the task is seen as an
> always-running task whatever the capacity of the CPU (even at max compute
> capacity). In this case, we can discard this "stolen" idle time, which
> becomes meaningless.
>
> In order to achieve this time scaling, a new clock_pelt is created per rq.
> The increase of this clock scales with current capacity when something
> is running on rq and synchronizes with clock_task when rq is idle. With
> this mechanism, we ensure the same running and idle time whatever the
> current capacity. This also enables to simplify the pelt algorithm by
> removing all references of uarch and frequency and applying the same
> contribution to utilization and loads. Furthermore, the scaling is done
> only once per update of clock (update_rq_clock_task()) instead of during
> each update of sched_entities and cfs/rt/dl_rq of the rq like the current
> implementation. This is interesting when cgroups are involved as shown in
> the results below:
I have a couple of questions related to the tests you ran.
> On a hikey (octo ARM platform).
> Performance cpufreq governor and only shallowest c-state to remove variance
> generated by those power features so we only track the impact of pelt algo.
So you disabled c-state 'cpu-sleep' and 'cluster-sleep'?
I get 'hisi_thermal f7030700.tsensor: THERMAL ALARM: 66385 > 65000' on
my hikey620. Did you change the thermal configuration? Not sure if there
are any actions attached to this warning though.
> each test runs 16 times
>
> ./perf bench sched pipe
> (higher is better)
> kernel tip/sched/core + patch
> ops/seconds ops/seconds diff
> cgroup
> root 59648(+/- 0.13%) 59785(+/- 0.24%) +0.23%
> level1 55570(+/- 0.21%) 56003(+/- 0.24%) +0.78%
> level2 52100(+/- 0.20%) 52788(+/- 0.22%) +1.32%
>
> hackbench -l 1000
Shouldn't this be '-l 100'?
> (lower is better)
> kernel tip/sched/core + patch
> duration(sec) duration(sec) diff
> cgroup
> root 4.472(+/- 1.86%) 4.346(+/- 2.74%) -2.80%
> level1 5.039(+/- 11.05%) 4.662(+/- 7.57%) -7.47%
> level2 5.195(+/- 10.66%) 4.877(+/- 8.90%) -6.12%
>
> The responsivness of PELT is improved when CPU is not running at max
> capacity with this new algorithm. I have put below some examples of
> duration to reach some typical load values according to the capacity of the
> CPU with current implementation and with this patch.
>
> Util (%) max capacity half capacity(mainline) half capacity(w/ patch)
> 972 (95%) 138ms not reachable 276ms
> 486 (47.5%) 30ms 138ms 60ms
> 256 (25%) 13ms 32ms 26ms
Could you describe these testcases in more detail?
So I assume you run one 100% task (possibly pinned to one CPU) on your
hikey620 with userspace governor and for:
(1) max capacity:
echo 1200000 > /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
(2) half capacity:
echo 729000 > /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
and then you measure the time till t1 reaches 25%, 47.5% and 95%
utilization?
What's the initial utilization value of t1? I assume t1 starts with
utilization=512 (post_init_entity_util_avg()).
> On my hikey (octo ARM platform) with schedutil governor, the time to reach
> max OPP when starting from a null utilization, decreases from 223ms with
> current scale invariance down to 121ms with the new algorithm. For this
> test, I have enabled arch_scale_freq for arm64.
Isn't the arch-specific arch_scale_freq_capacity() enabled by default on
arm64 with cpufreq support?
I would like to run the same tests so we can discuss results more easily.
On Thu, 25 Oct 2018 at 12:36, Dietmar Eggemann <[email protected]> wrote:
>
> Hi Vincent,
>
> On 10/19/18 6:17 PM, Vincent Guittot wrote:
> > The current implementation of load tracking invariance scales the
> > contribution with current frequency and uarch performance (only for
> > utilization) of the CPU. One main result of this formula is that the
> > figures are capped by current capacity of CPU. Another one is that the
> > load_avg is not invariant because not scaled with uarch.
> >
> > The util_avg of a periodic task that runs r time slots every p time slots
> > varies in the range :
> >
> > U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)
> >
> > with U is the max util_avg value = SCHED_CAPACITY_SCALE
> >
> > At a lower capacity, the range becomes:
> >
> > U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)
> >
> > with C reflecting the compute capacity ratio between current capacity and
> > max capacity.
> >
> > so C tries to compensate changes in (1-y^r') but it can't be accurate.
> >
> > Instead of scaling the contribution value of PELT algo, we should scale the
> > running time. The PELT signal aims to track the amount of computation of
> > tasks and/or rq so it seems more correct to scale the running time to
> > reflect the effective amount of computation done since the last update.
> >
> > In order to be fully invariant, we need to apply the same amount of
> > running time and idle time whatever the current capacity. Because running
> > at lower capacity implies that the task will run longer, we have to ensure
> > that the same amount of idle time will be applied when the system becomes idle
> > and no idle time has been "stolen". But reaching the maximum utilization
> > value (SCHED_CAPACITY_SCALE) means that the task is seen as an
> > always-running task whatever the capacity of the CPU (even at max compute
> > capacity). In this case, we can discard this "stolen" idle time, which
> > becomes meaningless.
> >
> > In order to achieve this time scaling, a new clock_pelt is created per rq.
> > The increase of this clock scales with current capacity when something
> > is running on rq and synchronizes with clock_task when rq is idle. With
> > this mechanism, we ensure the same running and idle time whatever the
> > current capacity. This also enables to simplify the pelt algorithm by
> > removing all references of uarch and frequency and applying the same
> > contribution to utilization and loads. Furthermore, the scaling is done
> > only once per update of clock (update_rq_clock_task()) instead of during
> > each update of sched_entities and cfs/rt/dl_rq of the rq like the current
> > implementation. This is interesting when cgroups are involved as shown in
> > the results below:
>
> I have a couple of questions related to the tests you ran.
>
> > On a hikey (octo ARM platform).
> > Performance cpufreq governor and only shallowest c-state to remove variance
> > generated by those power features so we only track the impact of pelt algo.
>
> So you disabled c-state 'cpu-sleep' and 'cluster-sleep'?
yes
>
> I get 'hisi_thermal f7030700.tsensor: THERMAL ALARM: 66385 > 65000' on
> my hikey620. Did you change the thermal configuration? Not sure if there
> are any actions attached to this warning though.
I have a fan to ensure that no thermal mitigation will bias the measurement.
>
> > each test runs 16 times
> >
> > ./perf bench sched pipe
> > (higher is better)
> > kernel tip/sched/core + patch
> > ops/seconds ops/seconds diff
> > cgroup
> > root 59648(+/- 0.13%) 59785(+/- 0.24%) +0.23%
> > level1 55570(+/- 0.21%) 56003(+/- 0.24%) +0.78%
> > level2 52100(+/- 0.20%) 52788(+/- 0.22%) +1.32%
> >
> > hackbench -l 1000
>
> Shouldn't this be '-l 100'?
I have re-checked and it's -l 1000
>
> > (lower is better)
> > kernel tip/sched/core + patch
> > duration(sec) duration(sec) diff
> > cgroup
> > root 4.472(+/- 1.86%) 4.346(+/- 2.74%) -2.80%
> > level1 5.039(+/- 11.05%) 4.662(+/- 7.57%) -7.47%
> > level2 5.195(+/- 10.66%) 4.877(+/- 8.90%) -6.12%
> >
> > The responsivness of PELT is improved when CPU is not running at max
> > capacity with this new algorithm. I have put below some examples of
> > duration to reach some typical load values according to the capacity of the
> > CPU with current implementation and with this patch.
> >
> > Util (%) max capacity half capacity(mainline) half capacity(w/ patch)
> > 972 (95%) 138ms not reachable 276ms
> > 486 (47.5%) 30ms 138ms 60ms
> > 256 (25%) 13ms 32ms 26ms
>
> Could you describe these testcases in more detail?
You don't need to run a test case. These numbers are computed from the
geometric series and the half-life value.
>
> So I assume you run one 100% task (possibly pinned to one CPU) on your
> hikey620 with userspace governor and for:
>
> (1) max capacity:
>
> echo 1200000 > /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
>
> (2) half capacity:
>
> echo 729000 > /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
>
> and then you measure the time till t1 reaches 25%, 47.5% and 95%
> utilization?
> What's the initial utilization value of t1? I assume t1 starts with
> utilization=512 (post_init_entity_util_avg()).
>
> > On my hikey (octo ARM platform) with schedutil governor, the time to reach
> > max OPP when starting from a null utilization, decreases from 223ms with
> > current scale invariance down to 121ms with the new algorithm. For this
> > test, I have enabled arch_scale_freq for arm64.
>
> Isn't the arch-specific arch_scale_freq_capacity() enabled by default on
> arm64 with cpufreq support?
Yes, that's a remnant of a previous version, from when arch_scale_freq was not yet merged.
>
> I would like to run the same tests so we can discuss results more easily.
Let me know if you need more details
On 10/25/18 12:43 PM, Vincent Guittot wrote:
> On Thu, 25 Oct 2018 at 12:36, Dietmar Eggemann <[email protected]> wrote:
[...]
>> I have a couple of questions related to the tests you ran.
>>
>>> On a hikey (octo ARM platform).
>>> Performance cpufreq governor and only shallowest c-state to remove variance
>>> generated by those power features so we only track the impact of pelt algo.
>>
>> So you disabled c-state 'cpu-sleep' and 'cluster-sleep'?
>
> yes
>
>>
>> I get 'hisi_thermal f7030700.tsensor: THERMAL ALARM: 66385 > 65000' on
>> my hikey620. Did you change the thermal configuration? Not sure if there
>> are any actions attached to this warning though.
>
> I have a fan to ensure that no thermal mitigation will bias the measurement.
Great, with a fan they disappear here as well.
>>> each test runs 16 times
>>>
>>> ./perf bench sched pipe
>>> (higher is better)
>>> kernel tip/sched/core + patch
>>> ops/seconds ops/seconds diff
>>> cgroup
>>> root 59648(+/- 0.13%) 59785(+/- 0.24%) +0.23%
>>> level1 55570(+/- 0.21%) 56003(+/- 0.24%) +0.78%
>>> level2 52100(+/- 0.20%) 52788(+/- 0.22%) +1.32%
>>>
>>> hackbench -l 1000
>>
>> Shouldn't this be '-l 100'?
>
> I have re checked and it's -l 1000
Strange, when I run hackbench on this board (performance governor) I get
values like:
root@h620:/# hackbench -l 100
Running in process mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 100 messages of 100 bytes
Time: 4.023
root@h620:/# hackbench -l 1000
Running in process mode with 10 groups using 40 file descriptors each
(== 400 tasks)
Each sender will pass 1000 messages of 100 bytes
Time: 37.883
But you have values in the range of 4-6 secs in your hackbench table?
Maybe we use different hackbench versions?
>>> (lower is better)
>>> kernel tip/sched/core + patch
>>> duration(sec) duration(sec) diff
>>> cgroup
>>> root 4.472(+/- 1.86%) 4.346(+/- 2.74%) -2.80%
>>> level1 5.039(+/- 11.05%) 4.662(+/- 7.57%) -7.47%
>>> level2 5.195(+/- 10.66%) 4.877(+/- 8.90%) -6.12%
>>>
>>> The responsivness of PELT is improved when CPU is not running at max
>>> capacity with this new algorithm. I have put below some examples of
>>> duration to reach some typical load values according to the capacity of the
>>> CPU with current implementation and with this patch.
>>>
>>> Util (%) max capacity half capacity(mainline) half capacity(w/ patch)
>>> 972 (95%) 138ms not reachable 276ms
>>> 486 (47.5%) 30ms 138ms 60ms
>>> 256 (25%) 13ms 32ms 26ms
>>
>> Could you describe these testcases in more detail?
>
> You don't need to run a test case. These numbers are computed from
> the geometric series and the half-life value.
Ah, ok, maybe you can mention this explicitly.
[...]
>> What's the initial utilization value of t1? I assume t1 starts with
>> utilization=512 (post_init_entity_util_avg()).
OK, then it starts at 0.
>>> On my hikey (octo ARM platform) with schedutil governor, the time to reach
>>> max OPP when starting from a null utilization, decreases from 223ms with
>>> current scale invariance down to 121ms with the new algorithm. For this
>>> test, I have enabled arch_scale_freq for arm64.
>>
>> Isn't the arch-specific arch_scale_freq_capacity() enabled by default on
>> arm64 with cpufreq support?
>
> Yes. that's a remain of previous version when arch_scale_freq was not yet merged
OK.
[...]