2022-04-27 15:00:33

by Vincent Donnefort

Subject: [PATCH v7 0/7] feec() energy margin removal

find_energy_efficient_cpu() (feec()) will migrate a task to save energy only
if it saves at least 6% of the total energy consumed by the system. This
conservative approach is a problem on a system where a lot of small tasks
create a huge load overall: very few of them will be allowed to migrate
to a smaller CPU, wasting a lot of energy. Instead of trying to determine yet
another margin, let's try to remove it.
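
For reference, the current acceptance condition in feec() looks roughly as
follows (paraphrased from the check that patch 7/7 removes; the right shift
by 4 corresponds to a ~6% margin):

	/*
	 * Sketch of the current behaviour: pick the best candidate only if
	 * prev_cpu cannot be used, or if the estimated saving exceeds ~6%
	 * (x >> 4 == 6.25%) of the estimated system energy.
	 */
	if ((prev_delta == ULONG_MAX) ||
	    (prev_delta - best_delta) > ((prev_delta + base_energy) >> 4))
		target = best_energy_cpu;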

The first elements of this patch-set are various fixes and improvements that
stabilize task_util and ensure energy comparison fairness across all CPUs of
the topology. Only once those are fixed can we completely remove the margin
and let feec() aggressively place tasks and save energy.

This has been validated in two different ways:

First, using LISA's eas_behaviour test suite. This is composed of a set of
scenarios and verifies that the task placement is optimal. No failures have
been observed and it also improved some tests such as Ramp-Down (as the
placement is now more energy oriented) and *ThreeSmall (as no bouncing between
clusters happens anymore).

* Hikey960: 100% PASSED
* DB-845C: 100% PASSED
* RB5: 100% PASSED

Second, using an Android benchmark: PCMark2 on a Pixel4, with a lot of
backports to have a scheduler as close as possible to mainline.

+------------+-----------------+------------------+
| Test       | Perf            | Energy [1]       |
+------------+-----------------+------------------+
| Web2       | -0.3% pval 0.03 | -1.8%  pval 0.00 |
| Video2     | -0.3% pval 0.13 | -5.6%  pval 0.00 |
| Photo2 [2] | -3.8% pval 0.00 | -1%    pval 0.00 |
| Writing2   |    0% pval 0.13 | -1%    pval 0.00 |
| Data2      |    0% pval 0.8  | -0.43% pval 0.00 |
+------------+-----------------+------------------+

The margin removal lets the kernel make the best use of the Energy Model:
tasks are more likely to be placed where they fit, and this saves a
substantial amount of energy, while having a limited impact on performance.

[1] This is an energy estimation based on the CPU activity and the Energy Model
for this device. "All models are wrong but some are useful"; yes, this is an
imperfect estimation that doesn't take into account some idle states and shared
power rails. Nonetheless, it is based on the information the kernel has during
runtime and it proves the scheduler can take better decisions based solely on
that data.

[2] This is the only performance impact observed. The debugging of this test
showed no issue with task placement. The better score was solely due to some
critical threads being held on better performing CPUs. If a thread needs a
higher-capacity CPU, the placement must result from a user input (e.g. with
uclamp min) instead of being artificially held on less efficient CPUs by
feec(). Notice also that the experiment didn't use the Android-only
latency_sensitive feature, which would hide this problem on a real-life device.
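
For reference, a minimal (hypothetical) sketch of such a user input: raising
the util-clamp minimum of the calling thread with sched_setattr(), so the
scheduler sees a larger effective utilization and places the thread on a
higher-capacity CPU. The clamp value and error handling are left to the
integrator:

	#define _GNU_SOURCE
	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	/* Local copy of the uapi layout (linux/sched/types.h). */
	struct sched_attr {
		uint32_t size;
		uint32_t sched_policy;
		uint64_t sched_flags;
		int32_t  sched_nice;
		uint32_t sched_priority;
		uint64_t sched_runtime;
		uint64_t sched_deadline;
		uint64_t sched_period;
		uint32_t sched_util_min;
		uint32_t sched_util_max;
	};

	#define SCHED_FLAG_KEEP_POLICY		0x08
	#define SCHED_FLAG_KEEP_PARAMS		0x10
	#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20

	/* Request a minimum utilization (0..1024) for the current thread. */
	static int set_uclamp_min(unsigned int util_min)
	{
		struct sched_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.sched_flags = SCHED_FLAG_KEEP_POLICY |
				   SCHED_FLAG_KEEP_PARAMS |
				   SCHED_FLAG_UTIL_CLAMP_MIN;
		attr.sched_util_min = util_min;

		return syscall(SYS_sched_setattr, 0, &attr, 0);
	}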

v6 -> v7:
- PELT migration decay: Add missing clock_pelt_idle updates.
- PELT migration decay: Fix PELT scaling delta for CONFIG_CFS_BANDWIDTH.

v5 -> v6:
- Fix !CONFIG_SMP build.

v4 -> v5:
- PELT migration decay: timestamp only at idle time (Vincent G.)
- PELT migration decay: split timestamp values (enter_idle / clock_pelt_idle)
(Vincent G.)

v3 -> v4:
- Minor cosmetic changes (Dietmar)

v2 -> v3:
- feec(): introduce energy_env struct (Dietmar)
- PELT migration decay: Only apply when src CPU is idle (Vincent G.)
- PELT migration decay: Do not apply when cfs_rq is throttled
- PELT migration decay: Snapshot the lag at cfs_rq's level

v1 -> v2:
- Fix PELT migration last_update_time (previously root cfs_rq's).
- Add Dietmar's patches to refactor feec()'s CPU loop.
- feec(): renaming busy time functions get_{pd,tsk}_busy_time()
- feec(): pd_cap computation in the first for_each_cpu loop.
- feec(): create get_pd_max_util() function (previously within compute_energy())
- feec(): rename base_energy_pd to base_energy.

Dietmar Eggemann (3):
sched, drivers: Remove max param from
effective_cpu_util()/sched_cpu_util()
sched/fair: Rename select_idle_mask to select_rq_mask
sched/fair: Use the same cpumask per-PD throughout
find_energy_efficient_cpu()

Vincent Donnefort (4):
sched/fair: Provide u64 read for 32-bits arch helper
sched/fair: Decay task PELT values during wakeup migration
sched/fair: Remove task_util from effective utilization in feec()
sched/fair: Remove the energy margin in feec()

drivers/powercap/dtpm_cpu.c | 33 +--
drivers/thermal/cpufreq_cooling.c | 6 +-
include/linux/sched.h | 2 +-
kernel/sched/core.c | 15 +-
kernel/sched/cpufreq_schedutil.c | 5 +-
kernel/sched/fair.c | 379 ++++++++++++++++++------------
kernel/sched/pelt.h | 28 ++-
kernel/sched/sched.h | 53 ++++-
8 files changed, 318 insertions(+), 203 deletions(-)

--
2.25.1


2022-04-27 15:00:54

by Vincent Donnefort

Subject: [PATCH v7 4/7] sched/fair: Rename select_idle_mask to select_rq_mask

From: Dietmar Eggemann <[email protected]>

Decouple the name of the per-cpu cpumask select_idle_mask from its usage
in select_idle_[cpu/capacity]() of the CFS run-queue selection
(select_task_rq_fair()).

This is to support the reuse of this cpumask in the Energy Aware
Scheduling (EAS) path (find_energy_efficient_cpu()) of the CFS run-queue
selection.

Signed-off-by: Dietmar Eggemann <[email protected]>

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a62d25ec5b0d..f3f5540bae9e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9456,7 +9456,7 @@ static struct kmem_cache *task_group_cache __read_mostly;
#endif

DECLARE_PER_CPU(cpumask_var_t, load_balance_mask);
-DECLARE_PER_CPU(cpumask_var_t, select_idle_mask);
+DECLARE_PER_CPU(cpumask_var_t, select_rq_mask);

void __init sched_init(void)
{
@@ -9505,7 +9505,7 @@ void __init sched_init(void)
for_each_possible_cpu(i) {
per_cpu(load_balance_mask, i) = (cpumask_var_t)kzalloc_node(
cpumask_size(), GFP_KERNEL, cpu_to_node(i));
- per_cpu(select_idle_mask, i) = (cpumask_var_t)kzalloc_node(
+ per_cpu(select_rq_mask, i) = (cpumask_var_t)kzalloc_node(
cpumask_size(), GFP_KERNEL, cpu_to_node(i));
}
#endif /* CONFIG_CPUMASK_OFFSTACK */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e0d5b1ba565d..a4ad4d82f217 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5850,7 +5850,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)

/* Working cpumask for: load_balance, load_balance_newidle. */
DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
-DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);
+DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);

#ifdef CONFIG_NO_HZ_COMMON

@@ -6340,7 +6340,7 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
*/
static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool has_idle_core, int target)
{
- struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+ struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
int i, cpu, idle_cpu = -1, nr = INT_MAX;
struct rq *this_rq = this_rq();
int this = smp_processor_id();
@@ -6426,7 +6426,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
int cpu, best_cpu = -1;
struct cpumask *cpus;

- cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+ cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

task_util = uclamp_task_util(p);
--
2.25.1

2022-04-27 15:00:55

by Vincent Donnefort

Subject: [PATCH v7 1/7] sched/fair: Provide u64 read for 32-bits arch helper

Introduce the macro helpers u64_u32_{store,load}() to factorize lockless
accesses to u64 variables on 32-bit architectures.

Users are for now cfs_rq.min_vruntime and sched_avg.last_update_time. To
accommodate the latter, where the copy lies outside of the structure
(cfs_rq.last_update_time_copy instead of sched_avg.last_update_time_copy),
use the _copy() version of those helpers.

Those new helpers encapsulate smp_rmb() and smp_wmb() synchronization and
therefore have a small penalty in set_task_rq_fair() and init_cfs_rq().
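
For illustration, a minimal (hypothetical) usage sketch of the pattern those
helpers encapsulate; 'struct foo' and its fields are made up for the example:

	struct foo {
		u64	ts;
	#ifndef CONFIG_64BIT
		u64	ts_copy;	/* written after ts, behind smp_wmb() */
	#endif
	};

	static void foo_set_ts(struct foo *f, u64 val)
	{
		/* Writer: store ts, smp_wmb(), then store the copy. */
		u64_u32_store(f->ts, val);
	}

	static u64 foo_get_ts(struct foo *f)
	{
		/* Lockless reader: retry until ts and ts_copy match. */
		return u64_u32_load(f->ts);
	}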

Signed-off-by: Vincent Donnefort <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4c420124b5d6..abd1feeec0c2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -600,11 +600,8 @@ static void update_min_vruntime(struct cfs_rq *cfs_rq)
}

/* ensure we never gain time by being placed backwards. */
- cfs_rq->min_vruntime = max_vruntime(cfs_rq->min_vruntime, vruntime);
-#ifndef CONFIG_64BIT
- smp_wmb();
- cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
-#endif
+ u64_u32_store(cfs_rq->min_vruntime,
+ max_vruntime(cfs_rq->min_vruntime, vruntime));
}

static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)
@@ -3301,6 +3298,11 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
}

#ifdef CONFIG_SMP
+static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
+{
+ return u64_u32_load_copy(cfs_rq->avg.last_update_time,
+ cfs_rq->last_update_time_copy);
+}
#ifdef CONFIG_FAIR_GROUP_SCHED
/*
* Because list_add_leaf_cfs_rq always places a child cfs_rq on the list
@@ -3411,27 +3413,9 @@ void set_task_rq_fair(struct sched_entity *se,
if (!(se->avg.last_update_time && prev))
return;

-#ifndef CONFIG_64BIT
- {
- u64 p_last_update_time_copy;
- u64 n_last_update_time_copy;
-
- do {
- p_last_update_time_copy = prev->load_last_update_time_copy;
- n_last_update_time_copy = next->load_last_update_time_copy;
-
- smp_rmb();
+ p_last_update_time = cfs_rq_last_update_time(prev);
+ n_last_update_time = cfs_rq_last_update_time(next);

- p_last_update_time = prev->avg.last_update_time;
- n_last_update_time = next->avg.last_update_time;
-
- } while (p_last_update_time != p_last_update_time_copy ||
- n_last_update_time != n_last_update_time_copy);
- }
-#else
- p_last_update_time = prev->avg.last_update_time;
- n_last_update_time = next->avg.last_update_time;
-#endif
__update_load_avg_blocked_se(p_last_update_time, se);
se->avg.last_update_time = n_last_update_time;
}
@@ -3786,8 +3770,9 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
decayed |= __update_load_avg_cfs_rq(now, cfs_rq);

#ifndef CONFIG_64BIT
- smp_wmb();
- cfs_rq->load_last_update_time_copy = sa->last_update_time;
+ u64_u32_store_copy(sa->last_update_time,
+ cfs_rq->last_update_time_copy,
+ sa->last_update_time);
#endif

return decayed;
@@ -3921,27 +3906,6 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
}
}

-#ifndef CONFIG_64BIT
-static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
-{
- u64 last_update_time_copy;
- u64 last_update_time;
-
- do {
- last_update_time_copy = cfs_rq->load_last_update_time_copy;
- smp_rmb();
- last_update_time = cfs_rq->avg.last_update_time;
- } while (last_update_time != last_update_time_copy);
-
- return last_update_time;
-}
-#else
-static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
-{
- return cfs_rq->avg.last_update_time;
-}
-#endif
-
/*
* Synchronize entity load avg of dequeued entity without locking
* the previous rq.
@@ -6991,21 +6955,8 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
if (READ_ONCE(p->__state) == TASK_WAKING) {
struct sched_entity *se = &p->se;
struct cfs_rq *cfs_rq = cfs_rq_of(se);
- u64 min_vruntime;

-#ifndef CONFIG_64BIT
- u64 min_vruntime_copy;
-
- do {
- min_vruntime_copy = cfs_rq->min_vruntime_copy;
- smp_rmb();
- min_vruntime = cfs_rq->min_vruntime;
- } while (min_vruntime != min_vruntime_copy);
-#else
- min_vruntime = cfs_rq->min_vruntime;
-#endif
-
- se->vruntime -= min_vruntime;
+ se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
}

if (p->on_rq == TASK_ON_RQ_MIGRATING) {
@@ -11453,10 +11404,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
void init_cfs_rq(struct cfs_rq *cfs_rq)
{
cfs_rq->tasks_timeline = RB_ROOT_CACHED;
- cfs_rq->min_vruntime = (u64)(-(1LL << 20));
-#ifndef CONFIG_64BIT
- cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
-#endif
+ u64_u32_store(cfs_rq->min_vruntime, (u64)(-(1LL << 20)));
#ifdef CONFIG_SMP
raw_spin_lock_init(&cfs_rq->removed.lock);
#endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 762be73972bd..e2cf6e48b165 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -513,6 +513,45 @@ struct cfs_bandwidth { };

#endif /* CONFIG_CGROUP_SCHED */

+/*
+ * u64_u32_load/u64_u32_store
+ *
+ * Use a copy of a u64 value to protect against data race. This is only
+ * applicable for 32-bits architectures.
+ */
+#ifdef CONFIG_64BIT
+# define u64_u32_load_copy(var, copy) var
+# define u64_u32_store_copy(var, copy, val) (var = val)
+#else
+# define u64_u32_load_copy(var, copy) \
+({ \
+ u64 __val, __val_copy; \
+ do { \
+ __val_copy = copy; \
+ /* \
+ * paired with u64_u32_store, ordering access \
+ * to var and copy. \
+ */ \
+ smp_rmb(); \
+ __val = var; \
+ } while (__val != __val_copy); \
+ __val; \
+})
+# define u64_u32_store_copy(var, copy, val) \
+do { \
+ typeof(val) __val = (val); \
+ var = __val; \
+ /* \
+ * paired with u64_u32_load, ordering access to var and \
+ * copy. \
+ */ \
+ smp_wmb(); \
+ copy = __val; \
+} while (0)
+#endif
+# define u64_u32_load(var) u64_u32_load_copy(var, var##_copy)
+# define u64_u32_store(var, val) u64_u32_store_copy(var, var##_copy, val)
+
/* CFS-related fields in a runqueue */
struct cfs_rq {
struct load_weight load;
@@ -553,7 +592,7 @@ struct cfs_rq {
*/
struct sched_avg avg;
#ifndef CONFIG_64BIT
- u64 load_last_update_time_copy;
+ u64 last_update_time_copy;
#endif
struct {
raw_spinlock_t lock ____cacheline_aligned;
--
2.25.1

2022-04-27 15:01:00

by Vincent Donnefort

Subject: [PATCH v7 5/7] sched/fair: Use the same cpumask per-PD throughout find_energy_efficient_cpu()

From: Dietmar Eggemann <[email protected]>

The Perf Domain (PD) cpumask (struct em_perf_domain.cpus) stays
invariant after Energy Model creation, i.e. it is not updated after
CPU hotplug operations.

That's why the PD mask is used in conjunction with the cpu_online_mask
(or Sched Domain cpumask). As a result, the cpu_online_mask is fetched
multiple times (in compute_energy()) during a run-queue selection
for a task.

cpu_online_mask may change during this time which can lead to wrong
energy calculations.

To be able to avoid this, use the select_rq_mask per-cpu cpumask to
create a cpumask out of PD cpumask and cpu_online_mask and pass it
through the function calls of the EAS run-queue selection path.

The PD cpumask for max_spare_cap_cpu/compute_prev_delta selection
(find_energy_efficient_cpu()) is now ANDed not only with the SD mask
but also with the cpu_online_mask. This is fine since this cpumask
has to be in sync with the one used for energy computation
(compute_energy()).
An exclusive cpuset setup with at least one asymmetric CPU capacity
island (hence the additional AND with the SD cpumask) is the obvious
exception here.
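
In short, the selection loop ends up following this pattern (condensed from
the diff below):

	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);

	for (; pd; pd = pd->next) {
		/* Stable, online-only view of the PD for this wake-up. */
		cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);

		/* Candidate selection and energy computation share 'cpus'. */
		for_each_cpu_and(cpu, cpus, sched_domain_span(sd)) {
			...
		}

		base_energy_pd = compute_energy(p, -1, cpus, pd);
		...
	}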

Signed-off-by: Dietmar Eggemann <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4ad4d82f217..d705298aa310 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6696,14 +6696,14 @@ static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
* task.
*/
static long
-compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
+compute_energy(struct task_struct *p, int dst_cpu, struct cpumask *cpus,
+ struct perf_domain *pd)
{
- struct cpumask *pd_mask = perf_domain_span(pd);
unsigned long max_util = 0, sum_util = 0, cpu_cap;
int cpu;

- cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
- cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
+ cpu_cap = arch_scale_cpu_capacity(cpumask_first(cpus));
+ cpu_cap -= arch_scale_thermal_pressure(cpumask_first(cpus));

/*
* The capacity state of CPUs of the current rd can be driven by CPUs
@@ -6714,7 +6714,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
* If an entire pd is outside of the current rd, it will not appear in
* its pd list and will not be accounted by compute_energy().
*/
- for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
+ for_each_cpu(cpu, cpus) {
unsigned long util_freq = cpu_util_next(cpu, p, dst_cpu);
unsigned long cpu_util, util_running = util_freq;
struct task_struct *tsk = NULL;
@@ -6801,6 +6801,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
*/
static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
{
+ struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
int cpu, best_energy_cpu = prev_cpu, target = -1;
@@ -6835,7 +6836,9 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
unsigned long base_energy_pd;
int max_spare_cap_cpu = -1;

- for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
+ cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
+
+ for_each_cpu_and(cpu, cpus, sched_domain_span(sd)) {
if (!cpumask_test_cpu(cpu, p->cpus_ptr))
continue;

@@ -6872,12 +6875,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
continue;

/* Compute the 'base' energy of the pd, without @p */
- base_energy_pd = compute_energy(p, -1, pd);
+ base_energy_pd = compute_energy(p, -1, cpus, pd);
base_energy += base_energy_pd;

/* Evaluate the energy impact of using prev_cpu. */
if (compute_prev_delta) {
- prev_delta = compute_energy(p, prev_cpu, pd);
+ prev_delta = compute_energy(p, prev_cpu, cpus, pd);
if (prev_delta < base_energy_pd)
goto unlock;
prev_delta -= base_energy_pd;
@@ -6886,7 +6889,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)

/* Evaluate the energy impact of using max_spare_cap_cpu. */
if (max_spare_cap_cpu >= 0) {
- cur_delta = compute_energy(p, max_spare_cap_cpu, pd);
+ cur_delta = compute_energy(p, max_spare_cap_cpu, cpus,
+ pd);
if (cur_delta < base_energy_pd)
goto unlock;
cur_delta -= base_energy_pd;
--
2.25.1

2022-04-27 15:01:14

by Vincent Donnefort

Subject: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

Before being migrated to a new CPU, a task sees its PELT values
synchronized with rq last_update_time. Once done, that same task will also
have its sched_avg last_update_time reset. This means the time between
the migration and the last clock update (B) will not be accounted for in
util_avg and a discontinuity will appear. This issue is amplified by the
PELT clock scaling. If the clock hasn't been updated while the CPU is
idle, clock_pelt will not be aligned with clock_task and that time (A)
will also be lost.

---------|----- A -----|-----------|------- B -----|>
    clock_pelt    clock_task     clock           now

This is especially problematic for asymmetric CPU capacity systems which
need stable util_avg signals for task placement and energy estimation.

Ideally, this problem would be solved by updating the runqueue clocks
before the migration. But that would require taking the runqueue lock
which is quite expensive [1]. Instead estimate the missing time and update
the task util_avg with that value:

A + B = clock_task - clock_pelt + sched_clock_cpu() - clock

sched_clock_cpu() is a costly function. Limit the usage to the case where
the source CPU is idle, as we know this is when the clock has the biggest
risk of being outdated.

Neither clock_task, clock_pelt nor clock can be accessed without the
runqueue lock. We then need to store those values in a timestamp variable
which can be accessed during the migration. rq's enter_idle will give the
wall-clock time when the rq went idle. We have then:

B = sched_clock_cpu() - rq->enter_idle.

Then, to catch-up the PELT clock scaling (A), two cases:

* !CFS_BANDWIDTH: We can simply use clock_task(). This value is stored
in rq's clock_pelt_idle, before the rq enters idle. The estimated time
is then:

rq->clock_pelt_idle + sched_clock_cpu() - rq->enter_idle.

* CFS_BANDWIDTH: We can't catch-up with clock_task because of the
throttled_clock_task_time offset. cfs_rq's clock_pelt_idle is then
giving the PELT clock when the cfs_rq becomes idle. This gives:

A = rq->clock_pelt_idle - cfs_rq->clock_pelt_idle

And gives the following estimated time:

  cfs_rq->last_update_time +
  rq->clock_pelt_idle - cfs_rq->clock_pelt_idle +   (A)
  sched_clock_cpu() - rq->enter_idle                (B)

The (B) part of the missing time is however an estimation that doesn't
take into account IRQ and Paravirt time.
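
As a toy illustration with hypothetical (unitless) numbers for the
CFS_BANDWIDTH case:

  cfs_rq->last_update_time = 950
  rq->clock_pelt_idle      = 1000
  cfs_rq->clock_pelt_idle  = 900
  rq->enter_idle           = 2000
  sched_clock_cpu()        = 2500

  A = 1000 - 900  = 100
  B = 2500 - 2000 = 500

  estimated "now" = 950 + 100 + 500 = 1550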

[1] https://lore.kernel.org/all/[email protected]/

Signed-off-by: Vincent Donnefort <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index abd1feeec0c2..9cd506dc682c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3694,6 +3694,57 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum

#endif /* CONFIG_FAIR_GROUP_SCHED */

+#ifdef CONFIG_NO_HZ_COMMON
+static inline void migrate_se_pelt_lag(struct sched_entity *se)
+{
+ struct cfs_rq *cfs_rq;
+ struct rq *rq;
+ bool is_idle;
+ u64 now;
+
+ cfs_rq = cfs_rq_of(se);
+ rq = rq_of(cfs_rq);
+
+ rcu_read_lock();
+ is_idle = is_idle_task(rcu_dereference(rq->curr));
+ rcu_read_unlock();
+
+ /*
+ * The lag estimation comes with a cost we don't want to pay all the
+ * time. Hence, limiting to the case where the source CPU is idle and
+ * we know we are at the greatest risk to have an outdated clock.
+ */
+ if (!is_idle)
+ return;
+
+ /*
+ * estimated "now" is:
+ * last_update_time +
+ * PELT scaling (rq->clock_pelt_idle - cfs_rq->clock_pelt_idle) +
+ * rq clock lag (sched_clock_cpu() - rq->enter_idle)
+ *
+ * The PELT scaling contribution is always 0 when !CFS_BANDWIDTH.
+ * (see clock_pelt = clock_task in _update_idle_rq_clock_pelt())
+ */
+#ifdef CONFIG_CFS_BANDWIDTH
+ now = u64_u32_load(cfs_rq->clock_pelt_idle);
+ /* The clock has been stopped for throttling */
+ if (now == U64_MAX)
+ return;
+
+ now = u64_u32_load(rq->clock_pelt_idle) - now;
+ now += cfs_rq_last_update_time(cfs_rq);
+#else
+ now = u64_u32_load(rq->clock_pelt_idle);
+#endif
+ now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
+
+ __update_load_avg_blocked_se(now, se);
+}
+#else
+static void migrate_se_pelt_lag(struct sched_entity *se) {}
+#endif
+
/**
* update_cfs_rq_load_avg - update the cfs_rq's load/util averages
* @now: current time, as per cfs_rq_clock_pelt()
@@ -4429,6 +4480,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
*/
if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
update_min_vruntime(cfs_rq);
+
+ if (cfs_rq->nr_running == 0)
+ update_idle_cfs_rq_clock_pelt(cfs_rq);
}

/*
@@ -6946,6 +7000,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
*/
static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
{
+ struct sched_entity *se = &p->se;
+
/*
* As blocked tasks retain absolute vruntime the migration needs to
* deal with this by subtracting the old and adding the new
@@ -6953,7 +7009,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
* the task on the new runqueue.
*/
if (READ_ONCE(p->__state) == TASK_WAKING) {
- struct sched_entity *se = &p->se;
struct cfs_rq *cfs_rq = cfs_rq_of(se);

se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
@@ -6965,25 +7020,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
* rq->lock and can modify state directly.
*/
lockdep_assert_rq_held(task_rq(p));
- detach_entity_cfs_rq(&p->se);
+ detach_entity_cfs_rq(se);

} else {
+ remove_entity_load_avg(se);
+
/*
- * We are supposed to update the task to "current" time, then
- * its up to date and ready to go to new CPU/cfs_rq. But we
- * have difficulty in getting what current time is, so simply
- * throw away the out-of-date time. This will result in the
- * wakee task is less decayed, but giving the wakee more load
- * sounds not bad.
+ * Here, the task's PELT values have been updated according to
+ * the current rq's clock. But if that clock hasn't been
+ * updated in a while, a substantial idle time will be missed,
+ * leading to an inflation after wake-up on the new rq.
+ *
+ * Estimate the missing time from the cfs_rq last_update_time
+ * and update sched_avg to improve the PELT continuity after
+ * migration.
*/
- remove_entity_load_avg(&p->se);
+ migrate_se_pelt_lag(se);
}

/* Tell new CPU we are migrated */
- p->se.avg.last_update_time = 0;
+ se->avg.last_update_time = 0;

/* We have migrated, no longer consider this task hot */
- p->se.exec_start = 0;
+ se->exec_start = 0;

update_scan_period(p, new_cpu);
}
@@ -8149,6 +8208,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
update_tg_load_avg(cfs_rq);

+ /* sync clock_pelt_idle with last update */
+ if (cfs_rq->nr_running == 0)
+ update_idle_cfs_rq_clock_pelt(cfs_rq);
+
if (cfs_rq == &rq->cfs)
decayed = true;
}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 4ff2ed4f8fa1..6b39e07b2919 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -61,6 +61,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
WRITE_ONCE(avg->util_est.enqueued, enqueued);
}

+static inline u64 rq_clock_pelt(struct rq *rq)
+{
+ lockdep_assert_rq_held(rq);
+ assert_clock_updated(rq);
+
+ return rq->clock_pelt - rq->lost_idle_time;
+}
+
+/* The rq is idle, we can sync to clock_task */
+static inline void _update_idle_rq_clock_pelt(struct rq *rq)
+{
+ rq->clock_pelt = rq_clock_task(rq);
+
+ u64_u32_store(rq->enter_idle, rq_clock(rq));
+ u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
+}
+
/*
* The clock_pelt scales the time to reflect the effective amount of
* computation done during the running delta time but then sync back to
@@ -76,8 +93,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
{
if (unlikely(is_idle_task(rq->curr))) {
- /* The rq is idle, we can sync to clock_task */
- rq->clock_pelt = rq_clock_task(rq);
+ _update_idle_rq_clock_pelt(rq);
return;
}

@@ -130,17 +146,20 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
*/
if (util_sum >= divider)
rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
+
+ _update_idle_rq_clock_pelt(rq);
}

-static inline u64 rq_clock_pelt(struct rq *rq)
+#ifdef CONFIG_CFS_BANDWIDTH
+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
- lockdep_assert_rq_held(rq);
- assert_clock_updated(rq);
-
- return rq->clock_pelt - rq->lost_idle_time;
+ if (unlikely(cfs_rq->throttle_count))
+ u64_u32_store(cfs_rq->clock_pelt_idle, U64_MAX);
+ else
+ u64_u32_store(cfs_rq->clock_pelt_idle,
+ rq_clock_pelt(rq_of(cfs_rq)));
}

-#ifdef CONFIG_CFS_BANDWIDTH
/* rq->task_clock normalized against any time this cfs_rq has spent throttled */
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
@@ -150,6 +169,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
}
#else
+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
return rq_clock_pelt(rq_of(cfs_rq));
@@ -204,6 +224,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
static inline void
update_idle_rq_clock_pelt(struct rq *rq) { }

+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
#endif


diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e2cf6e48b165..07014e8cbae2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -641,6 +641,10 @@ struct cfs_rq {
int runtime_enabled;
s64 runtime_remaining;

+ u64 clock_pelt_idle;
+#ifndef CONFIG_64BIT
+ u64 clock_pelt_idle_copy;
+#endif
u64 throttled_clock;
u64 throttled_clock_pelt;
u64 throttled_clock_pelt_time;
@@ -1013,6 +1017,12 @@ struct rq {
u64 clock_task ____cacheline_aligned;
u64 clock_pelt;
unsigned long lost_idle_time;
+ u64 clock_pelt_idle;
+ u64 enter_idle;
+#ifndef CONFIG_64BIT
+ u64 clock_pelt_idle_copy;
+ u64 enter_idle_copy;
+#endif

atomic_t nr_iowait;

--
2.25.1

2022-04-27 15:01:19

by Vincent Donnefort

Subject: [PATCH v7 6/7] sched/fair: Remove task_util from effective utilization in feec()

The energy estimation in find_energy_efficient_cpu() (feec()) relies on
the computation of the effective utilization for each CPU of a perf domain
(PD). This effective utilization is then used as an estimation of the busy
time for this pd. The function effective_cpu_util(), which gives this value,
scales the utilization relative to the IRQ pressure on the CPU, to take into
account that the IRQ time is hidden from the task clock. The IRQ scaling is
as follows:

effective_cpu_util = irq + (cpu_cap - irq)/cpu_cap * util

Where util is the sum of CFS/RT/DL utilization, cpu_cap the capacity of
the CPU and irq the IRQ avg time.

If now we take as an example a task placement which doesn't raise the OPP
on the candidate CPU, we can write the energy delta as:

  delta = OPPcost/cpu_cap * (effective_cpu_util(cpu_util + task_util) -
                             effective_cpu_util(cpu_util))
        = OPPcost/cpu_cap * (cpu_cap - irq)/cpu_cap * task_util

We end up with an energy delta depending on the IRQ avg time, which is a
problem: first, the time spent on IRQs by a CPU has no effect on the
additional energy that would be consumed by a task. Second, we don't want
to favour a CPU with a higher IRQ avg time value.
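
To make the problem concrete with hypothetical numbers (cpu_cap = 1024,
task_util = 100):

  irq = 0:    (1024 - 0)   / 1024 * 100 = 100
  irq = 256:  (1024 - 256) / 1024 * 100 = 75

The energy delta for the very same task is 25% smaller on the CPU that spends
25% of its time in IRQs, which is exactly the bias described above.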

Nonetheless, we need to take the IRQ avg time into account. If a task
placement raises the PD's frequency, it will increase the energy cost for
the entire time where the CPU is busy. A solution is to only use
effective_cpu_util() with the CPU contribution part. The task contribution
is added separately and scaled according to prev_cpu's IRQ time.
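
For reference, a simplified sketch of that scaling, mirroring what the
eenv_task_busy_time() helper in the diff below does with scale_irq_capacity():

	/*
	 * Sketch: scale the task contribution by prev_cpu's IRQ time, i.e.
	 *
	 *   task_busy_time = task_util_est(p) * (max_cap - irq) / max_cap
	 */
	static inline unsigned long task_busy_time_sketch(struct task_struct *p,
							  int prev_cpu)
	{
		unsigned long max_cap = arch_scale_cpu_capacity(prev_cpu);
		unsigned long irq = cpu_util_irq(cpu_rq(prev_cpu));

		if (irq >= max_cap)
			return max_cap;

		return task_util_est(p) * (max_cap - irq) / max_cap;
	}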

No change for the FREQUENCY_UTIL component of the energy estimation. We
still want to get the actual frequency that would be selected after the
task placement.

Signed-off-by: Vincent Donnefort <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d705298aa310..3f382156b4ec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6689,61 +6689,97 @@ static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
}

/*
- * compute_energy(): Estimates the energy that @pd would consume if @p was
- * migrated to @dst_cpu. compute_energy() predicts what will be the utilization
- * landscape of @pd's CPUs after the task migration, and uses the Energy Model
- * to compute what would be the energy if we decided to actually migrate that
- * task.
+ * energy_env - Utilization landscape for energy estimation.
+ * @task_busy_time: Utilization contribution by the task for which we test the
+ * placement. Given by eenv_task_busy_time().
+ * @pd_busy_time: Utilization of the whole perf domain without the task
+ * contribution. Given by eenv_pd_busy_time().
+ * @cpu_cap: Maximum CPU capacity for the perf domain.
+ * @pd_cap: Entire perf domain capacity. (pd->nr_cpus * cpu_cap).
+ */
+struct energy_env {
+ unsigned long task_busy_time;
+ unsigned long pd_busy_time;
+ unsigned long cpu_cap;
+ unsigned long pd_cap;
+};
+
+/*
+ * Compute the task busy time for compute_energy(). This time cannot be
+ * injected directly into effective_cpu_util() because of the IRQ scaling.
+ * The latter only makes sense with the most recent CPUs where the task has
+ * run.
*/
-static long
-compute_energy(struct task_struct *p, int dst_cpu, struct cpumask *cpus,
- struct perf_domain *pd)
+static inline void eenv_task_busy_time(struct energy_env *eenv,
+ struct task_struct *p, int prev_cpu)
{
- unsigned long max_util = 0, sum_util = 0, cpu_cap;
+ unsigned long max_cap = arch_scale_cpu_capacity(prev_cpu);
+ unsigned long irq = cpu_util_irq(cpu_rq(prev_cpu));
+
+ if (unlikely(irq >= max_cap)) {
+ eenv->task_busy_time = max_cap;
+ return;
+ }
+
+ eenv->task_busy_time =
+ scale_irq_capacity(task_util_est(p), irq, max_cap);
+}
+
+/*
+ * Compute the perf_domain (PD) busy time for compute_energy(). Based on the
+ * utilization for each @pd_cpus, it however doesn't take into account
+ * clamping since the ratio (utilization / cpu_capacity) is already enough to
+ * scale the EM reported power consumption at the (eventually clamped)
+ * cpu_capacity.
+ *
+ * The contribution of the task @p for which we want to estimate the
+ * energy cost is removed (by cpu_util_next()) and must be calculated
+ * separately (see eenv_task_busy_time). This ensures:
+ *
+ * - A stable PD utilization, no matter which CPU of that PD we want to place
+ * the task on.
+ *
+ * - A fair comparison between CPUs as the task contribution (task_util())
+ * will always be the same no matter which CPU utilization we rely on
+ * (util_avg or util_est).
+ *
+ * Set @eenv busy time for the PD that spans @pd_cpus. This busy time can't
+ * exceed @eenv->pd_cap.
+ */
+static inline void eenv_pd_busy_time(struct energy_env *eenv,
+ struct cpumask *pd_cpus,
+ struct task_struct *p)
+{
+ unsigned long busy_time = 0;
int cpu;

- cpu_cap = arch_scale_cpu_capacity(cpumask_first(cpus));
- cpu_cap -= arch_scale_thermal_pressure(cpumask_first(cpus));
+ for_each_cpu(cpu, pd_cpus) {
+ unsigned long util = cpu_util_next(cpu, p, -1);

- /*
- * The capacity state of CPUs of the current rd can be driven by CPUs
- * of another rd if they belong to the same pd. So, account for the
- * utilization of these CPUs too by masking pd with cpu_online_mask
- * instead of the rd span.
- *
- * If an entire pd is outside of the current rd, it will not appear in
- * its pd list and will not be accounted by compute_energy().
- */
- for_each_cpu(cpu, cpus) {
- unsigned long util_freq = cpu_util_next(cpu, p, dst_cpu);
- unsigned long cpu_util, util_running = util_freq;
- struct task_struct *tsk = NULL;
+ busy_time += effective_cpu_util(cpu, util, ENERGY_UTIL, NULL);
+ }

- /*
- * When @p is placed on @cpu:
- *
- * util_running = max(cpu_util, cpu_util_est) +
- * max(task_util, _task_util_est)
- *
- * while cpu_util_next is: max(cpu_util + task_util,
- * cpu_util_est + _task_util_est)
- */
- if (cpu == dst_cpu) {
- tsk = p;
- util_running =
- cpu_util_next(cpu, p, -1) + task_util_est(p);
- }
+ eenv->pd_busy_time = min(eenv->pd_cap, busy_time);
+}

- /*
- * Busy time computation: utilization clamping is not
- * required since the ratio (sum_util / cpu_capacity)
- * is already enough to scale the EM reported power
- * consumption at the (eventually clamped) cpu_capacity.
- */
- cpu_util = effective_cpu_util(cpu, util_running, ENERGY_UTIL,
- NULL);
+/*
+ * Compute the maximum utilization for compute_energy() when the task @p
+ * is placed on the cpu @dst_cpu.
+ *
+ * Returns the maximum utilization among @eenv->cpus. This utilization can't
+ * exceed @eenv->cpu_cap.
+ */
+static inline unsigned long
+eenv_pd_max_util(struct energy_env *eenv, struct cpumask *pd_cpus,
+ struct task_struct *p, int dst_cpu)
+{
+ unsigned long max_util = 0;
+ int cpu;

- sum_util += min(cpu_util, cpu_cap);
+ for_each_cpu(cpu, pd_cpus) {
+ struct task_struct *tsk = (cpu == dst_cpu) ? p : NULL;
+ unsigned long util = cpu_util_next(cpu, p, dst_cpu);
+ unsigned long cpu_util;

/*
* Performance domain frequency: utilization clamping
@@ -6752,12 +6788,30 @@ compute_energy(struct task_struct *p, int dst_cpu, struct cpumask *cpus,
* NOTE: in case RT tasks are running, by default the
* FREQUENCY_UTIL's utilization can be max OPP.
*/
- cpu_util = effective_cpu_util(cpu, util_freq, FREQUENCY_UTIL,
- tsk);
- max_util = max(max_util, min(cpu_util, cpu_cap));
+ cpu_util = effective_cpu_util(cpu, util, FREQUENCY_UTIL, tsk);
+ max_util = max(max_util, cpu_util);
}

- return em_cpu_energy(pd->em_pd, max_util, sum_util, cpu_cap);
+ return min(max_util, eenv->cpu_cap);
+}
+
+/*
+ * compute_energy(): Use the Energy Model to estimate the energy that @pd would
+ * consume for a given utilization landscape @eenv. If @dst_cpu < 0 the task
+ * contribution is removed from the energy estimation.
+ */
+static inline unsigned long
+compute_energy(struct energy_env *eenv, struct perf_domain *pd,
+ struct cpumask *pd_cpus, struct task_struct *p, int dst_cpu)
+{
+ unsigned long max_util = eenv_pd_max_util(eenv, pd_cpus, p, dst_cpu);
+ unsigned long busy_time = eenv->pd_busy_time;
+
+ if (dst_cpu >= 0)
+ busy_time = min(eenv->pd_cap,
+ eenv->pd_busy_time + eenv->task_busy_time);
+
+ return em_cpu_energy(pd->em_pd, max_util, busy_time, eenv->cpu_cap);
}

/*
@@ -6803,11 +6857,12 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
- struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
int cpu, best_energy_cpu = prev_cpu, target = -1;
- unsigned long cpu_cap, util, base_energy = 0;
+ struct root_domain *rd = this_rq()->rd;
+ unsigned long base_energy = 0;
struct sched_domain *sd;
struct perf_domain *pd;
+ struct energy_env eenv;

rcu_read_lock();
pd = rcu_dereference(rd->pd);
@@ -6830,22 +6885,36 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
if (!task_util_est(p))
goto unlock;

+ eenv_task_busy_time(&eenv, p, prev_cpu);
+
for (; pd; pd = pd->next) {
- unsigned long cur_delta, spare_cap, max_spare_cap = 0;
+ unsigned long cpu_cap, cpu_thermal_cap, util;
+ unsigned long cur_delta, max_spare_cap = 0;
bool compute_prev_delta = false;
unsigned long base_energy_pd;
int max_spare_cap_cpu = -1;

cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);

- for_each_cpu_and(cpu, cpus, sched_domain_span(sd)) {
+ /* Account thermal pressure for the energy estimation */
+ cpu = cpumask_first(cpus);
+ cpu_thermal_cap = arch_scale_cpu_capacity(cpu);
+ cpu_thermal_cap -= arch_scale_thermal_pressure(cpu);
+
+ eenv.cpu_cap = cpu_thermal_cap;
+ eenv.pd_cap = 0;
+
+ for_each_cpu(cpu, cpus) {
+ eenv.pd_cap += cpu_thermal_cap;
+
+ if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
+ continue;
+
if (!cpumask_test_cpu(cpu, p->cpus_ptr))
continue;

util = cpu_util_next(cpu, p, cpu);
cpu_cap = capacity_of(cpu);
- spare_cap = cpu_cap;
- lsub_positive(&spare_cap, util);

/*
* Skip CPUs that cannot satisfy the capacity request.
@@ -6858,15 +6927,17 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
if (!fits_capacity(util, cpu_cap))
continue;

+ lsub_positive(&cpu_cap, util);
+
if (cpu == prev_cpu) {
/* Always use prev_cpu as a candidate. */
compute_prev_delta = true;
- } else if (spare_cap > max_spare_cap) {
+ } else if (cpu_cap > max_spare_cap) {
/*
* Find the CPU with the maximum spare capacity
* in the performance domain.
*/
- max_spare_cap = spare_cap;
+ max_spare_cap = cpu_cap;
max_spare_cap_cpu = cpu;
}
}
@@ -6875,12 +6946,14 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
continue;

/* Compute the 'base' energy of the pd, without @p */
- base_energy_pd = compute_energy(p, -1, cpus, pd);
+ eenv_pd_busy_time(&eenv, cpus, p);
+ base_energy_pd = compute_energy(&eenv, pd, cpus, p, -1);
base_energy += base_energy_pd;

/* Evaluate the energy impact of using prev_cpu. */
if (compute_prev_delta) {
- prev_delta = compute_energy(p, prev_cpu, cpus, pd);
+ prev_delta = compute_energy(&eenv, pd, cpus, p,
+ prev_cpu);
if (prev_delta < base_energy_pd)
goto unlock;
prev_delta -= base_energy_pd;
@@ -6889,8 +6962,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)

/* Evaluate the energy impact of using max_spare_cap_cpu. */
if (max_spare_cap_cpu >= 0) {
- cur_delta = compute_energy(p, max_spare_cap_cpu, cpus,
- pd);
+ cur_delta = compute_energy(&eenv, pd, cpus, p,
+ max_spare_cap_cpu);
if (cur_delta < base_energy_pd)
goto unlock;
cur_delta -= base_energy_pd;
--
2.25.1

2022-04-27 15:01:20

by Vincent Donnefort

Subject: [PATCH v7 3/7] sched, drivers: Remove max param from effective_cpu_util()/sched_cpu_util()

From: Dietmar Eggemann <[email protected]>

effective_cpu_util() already has an `int cpu' parameter which allows
retrieving the CPU capacity scale factor (or maximum CPU capacity) inside
this function via arch_scale_cpu_capacity(cpu).

A lot of code calling effective_cpu_util() (or the shim
sched_cpu_util()) needs the maximum CPU capacity, i.e. it will call
arch_scale_cpu_capacity() already.
But not having to pass it into effective_cpu_util() will make the EAS
wake-up code easier, especially when the maximum CPU capacity reduced
by the thermal pressure is passed through the EAS wake-up functions.

Due to the asymmetric CPU capacity support of arm/arm64 architectures,
arch_scale_cpu_capacity(int cpu) is a per-CPU variable read access via
per_cpu(cpu_scale, cpu) on such a system.
On all other architectures it is a compile-time constant
(SCHED_CAPACITY_SCALE).
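
For reference, a simplified sketch of the two flavours (paraphrased, not a
verbatim copy of the kernel sources; the actual mechanism is an arch-provided
arch_scale_cpu_capacity() definition overriding the generic one):

	/* arm/arm64 with asymmetric CPU capacities: per-CPU variable read. */
	static inline unsigned long arch_scale_cpu_capacity(int cpu)
	{
		return per_cpu(cpu_scale, cpu);
	}

	/* Generic fallback: compile-time constant. */
	static inline unsigned long arch_scale_cpu_capacity(int cpu)
	{
		return SCHED_CAPACITY_SCALE;
	}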

Signed-off-by: Dietmar Eggemann <[email protected]>

diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
index bca2f912d349..024dba4e6575 100644
--- a/drivers/powercap/dtpm_cpu.c
+++ b/drivers/powercap/dtpm_cpu.c
@@ -71,34 +71,19 @@ static u64 set_pd_power_limit(struct dtpm *dtpm, u64 power_limit)

static u64 scale_pd_power_uw(struct cpumask *pd_mask, u64 power)
{
- unsigned long max = 0, sum_util = 0;
+ unsigned long max, sum_util = 0;
int cpu;

- for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
-
- /*
- * The capacity is the same for all CPUs belonging to
- * the same perf domain, so a single call to
- * arch_scale_cpu_capacity() is enough. However, we
- * need the CPU parameter to be initialized by the
- * loop, so the call ends up in this block.
- *
- * We can initialize 'max' with a cpumask_first() call
- * before the loop but the bits computation is not
- * worth given the arch_scale_cpu_capacity() just
- * returns a value where the resulting assembly code
- * will be optimized by the compiler.
- */
- max = arch_scale_cpu_capacity(cpu);
- sum_util += sched_cpu_util(cpu, max);
- }
-
/*
- * In the improbable case where all the CPUs of the perf
- * domain are offline, 'max' will be zero and will lead to an
- * illegal operation with a zero division.
+ * The capacity is the same for all CPUs belonging to
+ * the same perf domain.
*/
- return max ? (power * ((sum_util << 10) / max)) >> 10 : 0;
+ max = arch_scale_cpu_capacity(cpumask_first(pd_mask));
+
+ for_each_cpu_and(cpu, pd_mask, cpu_online_mask)
+ sum_util += sched_cpu_util(cpu);
+
+ return (power * ((sum_util << 10) / max)) >> 10;
}

static u64 get_pd_power_uw(struct dtpm *dtpm)
diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index 0bfb8eebd126..3f514ff3d9aa 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -137,11 +137,9 @@ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
int cpu_idx)
{
- unsigned long max = arch_scale_cpu_capacity(cpu);
- unsigned long util;
+ unsigned long util = sched_cpu_util(cpu);

- util = sched_cpu_util(cpu, max);
- return (util * 100) / max;
+ return (util * 100) / arch_scale_cpu_capacity(cpu);
}
#else /* !CONFIG_SMP */
static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 67f06f72c50e..c1705effb3a4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2255,7 +2255,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
}

/* Returns effective CPU energy utilization, as seen by the scheduler */
-unsigned long sched_cpu_util(int cpu, unsigned long max);
+unsigned long sched_cpu_util(int cpu);
#endif /* CONFIG_SMP */

#ifdef CONFIG_RSEQ
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 068c088e9584..a62d25ec5b0d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7061,12 +7061,14 @@ struct task_struct *idle_task(int cpu)
* required to meet deadlines.
*/
unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
- unsigned long max, enum cpu_util_type type,
+ enum cpu_util_type type,
struct task_struct *p)
{
- unsigned long dl_util, util, irq;
+ unsigned long dl_util, util, irq, max;
struct rq *rq = cpu_rq(cpu);

+ max = arch_scale_cpu_capacity(cpu);
+
if (!uclamp_is_used() &&
type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
return max;
@@ -7146,10 +7148,9 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
return min(max, util);
}

-unsigned long sched_cpu_util(int cpu, unsigned long max)
+unsigned long sched_cpu_util(int cpu)
{
- return effective_cpu_util(cpu, cpu_util_cfs(cpu), max,
- ENERGY_UTIL, NULL);
+ return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
}
#endif /* CONFIG_SMP */

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3dbf351d12d5..1207c78f85c1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -157,11 +157,10 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
static void sugov_get_util(struct sugov_cpu *sg_cpu)
{
struct rq *rq = cpu_rq(sg_cpu->cpu);
- unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);

- sg_cpu->max = max;
+ sg_cpu->max = arch_scale_cpu_capacity(sg_cpu->cpu);
sg_cpu->bw_dl = cpu_bw_dl(rq);
- sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu), max,
+ sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu),
FREQUENCY_UTIL, NULL);
}

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9cd506dc682c..e0d5b1ba565d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6699,12 +6699,11 @@ static long
compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
{
struct cpumask *pd_mask = perf_domain_span(pd);
- unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
- unsigned long max_util = 0, sum_util = 0;
- unsigned long _cpu_cap = cpu_cap;
+ unsigned long max_util = 0, sum_util = 0, cpu_cap;
int cpu;

- _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
+ cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
+ cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));

/*
* The capacity state of CPUs of the current rd can be driven by CPUs
@@ -6741,10 +6740,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
* is already enough to scale the EM reported power
* consumption at the (eventually clamped) cpu_capacity.
*/
- cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
- ENERGY_UTIL, NULL);
+ cpu_util = effective_cpu_util(cpu, util_running, ENERGY_UTIL,
+ NULL);

- sum_util += min(cpu_util, _cpu_cap);
+ sum_util += min(cpu_util, cpu_cap);

/*
* Performance domain frequency: utilization clamping
@@ -6753,12 +6752,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
* NOTE: in case RT tasks are running, by default the
* FREQUENCY_UTIL's utilization can be max OPP.
*/
- cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
- FREQUENCY_UTIL, tsk);
- max_util = max(max_util, min(cpu_util, _cpu_cap));
+ cpu_util = effective_cpu_util(cpu, util_freq, FREQUENCY_UTIL,
+ tsk);
+ max_util = max(max_util, min(cpu_util, cpu_cap));
}

- return em_cpu_energy(pd->em_pd, max_util, sum_util, _cpu_cap);
+ return em_cpu_energy(pd->em_pd, max_util, sum_util, cpu_cap);
}

/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 07014e8cbae2..f902f3e27e48 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2878,7 +2878,7 @@ enum cpu_util_type {
};

unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
- unsigned long max, enum cpu_util_type type,
+ enum cpu_util_type type,
struct task_struct *p);

static inline unsigned long cpu_bw_dl(struct rq *rq)
--
2.25.1

2022-04-27 15:01:22

by Vincent Donnefort

Subject: [PATCH v7 7/7] sched/fair: Remove the energy margin in feec()

find_energy_efficient_cpu() integrates a margin to protect tasks from
bouncing back and forth from one CPU to another. This margin is set to
6% of the total current energy estimated on the system. This however does
not work, for two reasons:

1. The energy estimation is not a good absolute value:

compute_energy() used in feec() is a good estimation for task placement as
it allows comparing the energy with and without a task. The computed delta
will give a good overview of the cost for a certain task placement.
It, however, doesn't work as an absolute estimation for the total energy
of the system. First, it adds the contribution of idle CPUs into the
energy; second, it mixes util_avg with util_est values. util_avg contains
the near history of a CPU's usage; it doesn't tell at all what the current
utilization is. A system that has been quite busy in the near past will
hold a very high energy and then a high margin, preventing any task
migration to a lower capacity CPU and wasting energy. It even creates a
negative feedback loop: by holding the tasks on a less efficient CPU, the
margin contributes to keeping the energy high.

2. The margin handicaps small tasks:

On a system where the workload is composed mostly of small tasks (which is
often the case on Android), the overall energy will be high enough to
create a margin none of those tasks can cross. On a Pixel4, a small
utilization of 5% on all the CPUs creates a global estimated energy of 140
joules, as per the Energy Model declaration of that same device. This
means, after applying the 6% margin, that any migration must save more than
8 joules to happen. No task with a utilization lower than 40 would then be
able to migrate away from the biggest CPU of the system.
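
Spelling out the arithmetic behind those numbers:

  margin = 6% * 140 J ~= 8.4 J

so any placement change must be estimated to save roughly 8 joules before
feec() accepts it, a bar that small tasks cannot reach.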

The 6% of the overall system energy was brought by the following patch:

(eb92692b2544 sched/fair: Speed-up energy-aware wake-ups)

It was previously 6% of the prev_cpu energy. Also, the following one
made this margin value conditional on the clusters where the task fits:

(8d4c97c105ca sched/fair: Only compute base_energy_pd if necessary)

We could simply revert that margin change to what it was, but the original
version didn't have strong grounds either, and as demonstrated in (1.) the
estimated energy isn't a good absolute value. Instead, remove it
completely. This is indeed made possible by recent changes that improved
energy estimation comparison fairness (sched/fair: Remove task_util from
effective utilization in feec()), (PM: EM: Increase energy calculation
precision) and task utilization stabilization (sched/fair: Decay task
util_avg during migration).

Without a margin, we could have feared bouncing between CPUs. But running
LISA's eas_behaviour test coverage on three different platforms (Hikey960,
RB-5 and DB-845) showed no issue.

Removing the energy margin enables more energy-optimized placements for a
more energy efficient system.

Signed-off-by: Vincent Donnefort <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3f382156b4ec..097f63be8ac1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6857,9 +6857,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
{
struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
- int cpu, best_energy_cpu = prev_cpu, target = -1;
struct root_domain *rd = this_rq()->rd;
- unsigned long base_energy = 0;
+ int cpu, best_energy_cpu, target = -1;
struct sched_domain *sd;
struct perf_domain *pd;
struct energy_env eenv;
@@ -6891,8 +6890,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
unsigned long cpu_cap, cpu_thermal_cap, util;
unsigned long cur_delta, max_spare_cap = 0;
bool compute_prev_delta = false;
- unsigned long base_energy_pd;
int max_spare_cap_cpu = -1;
+ unsigned long base_energy;

cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);

@@ -6947,16 +6946,15 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)

/* Compute the 'base' energy of the pd, without @p */
eenv_pd_busy_time(&eenv, cpus, p);
- base_energy_pd = compute_energy(&eenv, pd, cpus, p, -1);
- base_energy += base_energy_pd;
+ base_energy = compute_energy(&eenv, pd, cpus, p, -1);

/* Evaluate the energy impact of using prev_cpu. */
if (compute_prev_delta) {
prev_delta = compute_energy(&eenv, pd, cpus, p,
prev_cpu);
- if (prev_delta < base_energy_pd)
+ if (prev_delta < base_energy)
goto unlock;
- prev_delta -= base_energy_pd;
+ prev_delta -= base_energy;
best_delta = min(best_delta, prev_delta);
}

@@ -6964,9 +6962,9 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
if (max_spare_cap_cpu >= 0) {
cur_delta = compute_energy(&eenv, pd, cpus, p,
max_spare_cap_cpu);
- if (cur_delta < base_energy_pd)
+ if (cur_delta < base_energy)
goto unlock;
- cur_delta -= base_energy_pd;
+ cur_delta -= base_energy;
if (cur_delta < best_delta) {
best_delta = cur_delta;
best_energy_cpu = max_spare_cap_cpu;
@@ -6975,12 +6973,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
}
rcu_read_unlock();

- /*
- * Pick the best CPU if prev_cpu cannot be used, or if it saves at
- * least 6% of the energy used by prev_cpu.
- */
- if ((prev_delta == ULONG_MAX) ||
- (prev_delta - best_delta) > ((prev_delta + base_energy) >> 4))
+ if (best_delta < prev_delta)
target = best_energy_cpu;

return target;
--
2.25.1

2022-04-27 18:04:16

by Tao Zhou

Subject: Re: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

On Wed, Apr 27, 2022 at 03:32:59PM +0100, Vincent Donnefort wrote:
> Before being migrated to a new CPU, a task sees its PELT values
> synchronized with rq last_update_time. Once done, that same task will also
> have its sched_avg last_update_time reset. This means the time between
> the migration and the last clock update (B) will not be accounted for in
> util_avg and a discontinuity will appear. This issue is amplified by the
> PELT clock scaling. If the clock hasn't been updated while the CPU is
> idle, clock_pelt will not be aligned with clock_task and that time (A)
> will be also lost.
>
> ---------|----- A -----|-----------|------- B -----|>
>     clock_pelt    clock_task     clock           now
>
> This is especially problematic for asymmetric CPU capacity systems which
> need stable util_avg signals for task placement and energy estimation.
>
> Ideally, this problem would be solved by updating the runqueue clocks
> before the migration. But that would require taking the runqueue lock
> which is quite expensive [1]. Instead estimate the missing time and update
> the task util_avg with that value:
>
> A + B = clock_task - clock_pelt + sched_clock_cpu() - clock
>
> sched_clock_cpu() is a costly function. Limit the usage to the case where
> the source CPU is idle as we know this is when the clock is having the
> biggest risk of being outdated.
>
> Neither clock_task, clock_pelt nor clock can be accessed without the
> runqueue lock. We then need to store those values in a timestamp variable
> which can be accessed during the migration. rq's enter_idle will give the
> wall-clock time when the rq went idle. We have then:
>
> B = sched_clock_cpu() - rq->enter_idle.
>
> Then, to catch-up the PELT clock scaling (A), two cases:
>
> * !CFS_BANDWIDTH: We can simply use clock_task(). This value is stored
> in rq's clock_pelt_idle, before the rq enters idle. The estimated time
> is then:
>
> rq->clock_pelt_idle + sched_clock_cpu() - rq->enter_idle.
>
> * CFS_BANDWIDTH: We can't catch-up with clock_task because of the
> throttled_clock_task_time offset. cfs_rq's clock_pelt_idle is then
> giving the PELT clock when the cfs_rq becomes idle. This gives:
>
> A = rq->clock_pelt_idle - cfs_rq->clock_pelt_idle
>
> And gives the following estimated time:
>
> cfs_rq->last_update_time +
> rq->clock_pelt_idle - cfs_rq->clock_pelt_idle + (A)
> sched_clock_cpu() - rq->enter_idle (B)
>
> The (B) part of the missing time is however an estimation that doesn't
> take into account IRQ and Paravirt time.
>
> [1] https://lore.kernel.org/all/[email protected]/
>
> Signed-off-by: Vincent Donnefort <[email protected]>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index abd1feeec0c2..9cd506dc682c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3694,6 +3694,57 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum
>
> #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> +#ifdef CONFIG_NO_HZ_COMMON
> +static inline void migrate_se_pelt_lag(struct sched_entity *se)
> +{
> + struct cfs_rq *cfs_rq;
> + struct rq *rq;
> + bool is_idle;
> + u64 now;
> +
> + cfs_rq = cfs_rq_of(se);
> + rq = rq_of(cfs_rq);
> +
> + rcu_read_lock();
> + is_idle = is_idle_task(rcu_dereference(rq->curr));
> + rcu_read_unlock();
> +
> + /*
> + * The lag estimation comes with a cost we don't want to pay all the
> + * time. Hence, limiting to the case where the source CPU is idle and
> + * we know we are at the greatest risk to have an outdated clock.
> + */
> + if (!is_idle)
> + return;
> +
> + /*
> + * estimated "now" is:
> + * last_update_time +
> + * PELT scaling (rq->clock_pelt_idle - cfs_rq->clock_pelt_idle) +
> + * rq clock lag (sched_clock_cpu() - rq->enter_idle)
> + *
> + * The PELT scaling contribution is always 0 when !CFS_BANDWIDTH.
> + * (see clock_pelt = clock_task in _update_idle_rq_clock_pelt())
> + */
> +#ifdef CONFIG_CFS_BANDWIDTH
> + now = u64_u32_load(cfs_rq->clock_pelt_idle);
> + /* The clock has been stopped for throttling */
> + if (now == U64_MAX)
> + return;
> +
> + now = u64_u32_load(rq->clock_pelt_idle) - now;
> + now += cfs_rq_last_update_time(cfs_rq);
> +#else
> + now = u64_u32_load(rq->clock_pelt_idle);
> +#endif
> + now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
> +
> + __update_load_avg_blocked_se(now, se);
> +}
> +#else
> +static void migrate_se_pelt_lag(struct sched_entity *se) {}
> +#endif
> +
> /**
> * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
> * @now: current time, as per cfs_rq_clock_pelt()
> @@ -4429,6 +4480,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> */
> if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
> update_min_vruntime(cfs_rq);
> +
> + if (cfs_rq->nr_running == 0)
> + update_idle_cfs_rq_clock_pelt(cfs_rq);
> }
>
> /*
> @@ -6946,6 +7000,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
> */
> static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> {
> + struct sched_entity *se = &p->se;
> +
> /*
> * As blocked tasks retain absolute vruntime the migration needs to
> * deal with this by subtracting the old and adding the new
> @@ -6953,7 +7009,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> * the task on the new runqueue.
> */
> if (READ_ONCE(p->__state) == TASK_WAKING) {
> - struct sched_entity *se = &p->se;
> struct cfs_rq *cfs_rq = cfs_rq_of(se);
>
> se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
> @@ -6965,25 +7020,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> * rq->lock and can modify state directly.
> */
> lockdep_assert_rq_held(task_rq(p));
> - detach_entity_cfs_rq(&p->se);
> + detach_entity_cfs_rq(se);
>
> } else {
> + remove_entity_load_avg(se);
> +
> /*
> - * We are supposed to update the task to "current" time, then
> - * its up to date and ready to go to new CPU/cfs_rq. But we
> - * have difficulty in getting what current time is, so simply
> - * throw away the out-of-date time. This will result in the
> - * wakee task is less decayed, but giving the wakee more load
> - * sounds not bad.
> + * Here, the task's PELT values have been updated according to
> + * the current rq's clock. But if that clock hasn't been
> + * updated in a while, a substantial idle time will be missed,
> + * leading to an inflation after wake-up on the new rq.
> + *
> + * Estimate the missing time from the cfs_rq last_update_time
> + * and update sched_avg to improve the PELT continuity after
> + * migration.
> */
> - remove_entity_load_avg(&p->se);
> + migrate_se_pelt_lag(se);
> }
>
> /* Tell new CPU we are migrated */
> - p->se.avg.last_update_time = 0;
> + se->avg.last_update_time = 0;
>
> /* We have migrated, no longer consider this task hot */
> - p->se.exec_start = 0;
> + se->exec_start = 0;
>
> update_scan_period(p, new_cpu);
> }
> @@ -8149,6 +8208,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> update_tg_load_avg(cfs_rq);
>
> + /* sync clock_pelt_idle with last update */
> + if (cfs_rq->nr_running == 0)
> + update_idle_cfs_rq_clock_pelt(cfs_rq);

I think that if cfs_rq->nr_running == 0 then use cfs rq pelt_idle to update
idle cfs rq.

if (!cfs_rq->nr_running) {
	/* partial update of an idle cfs rq */
	/* compute an estimated "now" as in migrate_se_pelt_lag() */
	decay = update_cfs_rq_load_avg(now, cfs_rq);
} else {
	decay = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq);
}

if (decay) {
	update_tg_load_avg(cfs_rq);
	if (cfs_rq == &rq->cfs)
		decayed = true;
}

Thanks,
Tao
> if (cfs_rq == &rq->cfs)
> decayed = true;
> }
> diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> index 4ff2ed4f8fa1..6b39e07b2919 100644
> --- a/kernel/sched/pelt.h
> +++ b/kernel/sched/pelt.h
> @@ -61,6 +61,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> WRITE_ONCE(avg->util_est.enqueued, enqueued);
> }
>
> +static inline u64 rq_clock_pelt(struct rq *rq)
> +{
> + lockdep_assert_rq_held(rq);
> + assert_clock_updated(rq);
> +
> + return rq->clock_pelt - rq->lost_idle_time;
> +}
> +
> +/* The rq is idle, we can sync to clock_task */
> +static inline void _update_idle_rq_clock_pelt(struct rq *rq)
> +{
> + rq->clock_pelt = rq_clock_task(rq);
> +
> + u64_u32_store(rq->enter_idle, rq_clock(rq));
> + u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
> +}
> +
> /*
> * The clock_pelt scales the time to reflect the effective amount of
> * computation done during the running delta time but then sync back to
> @@ -76,8 +93,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
> {
> if (unlikely(is_idle_task(rq->curr))) {
> - /* The rq is idle, we can sync to clock_task */
> - rq->clock_pelt = rq_clock_task(rq);
> + _update_idle_rq_clock_pelt(rq);
> return;
> }
>
> @@ -130,17 +146,20 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
> */
> if (util_sum >= divider)
> rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> +
> + _update_idle_rq_clock_pelt(rq);
> }
>
> -static inline u64 rq_clock_pelt(struct rq *rq)
> +#ifdef CONFIG_CFS_BANDWIDTH
> +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> {
> - lockdep_assert_rq_held(rq);
> - assert_clock_updated(rq);
> -
> - return rq->clock_pelt - rq->lost_idle_time;
> + if (unlikely(cfs_rq->throttle_count))
> + u64_u32_store(cfs_rq->clock_pelt_idle, U64_MAX);
> + else
> + u64_u32_store(cfs_rq->clock_pelt_idle,
> + rq_clock_pelt(rq_of(cfs_rq)));
> }
>
> -#ifdef CONFIG_CFS_BANDWIDTH
> /* rq->task_clock normalized against any time this cfs_rq has spent throttled */
> static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> {
> @@ -150,6 +169,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
> }
> #else
> +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> {
> return rq_clock_pelt(rq_of(cfs_rq));
> @@ -204,6 +224,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
> static inline void
> update_idle_rq_clock_pelt(struct rq *rq) { }
>
> +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> #endif
>
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index e2cf6e48b165..07014e8cbae2 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -641,6 +641,10 @@ struct cfs_rq {
> int runtime_enabled;
> s64 runtime_remaining;
>
> + u64 clock_pelt_idle;
> +#ifndef CONFIG_64BIT
> + u64 clock_pelt_idle_copy;
> +#endif
> u64 throttled_clock;
> u64 throttled_clock_pelt;
> u64 throttled_clock_pelt_time;
> @@ -1013,6 +1017,12 @@ struct rq {
> u64 clock_task ____cacheline_aligned;
> u64 clock_pelt;
> unsigned long lost_idle_time;
> + u64 clock_pelt_idle;
> + u64 enter_idle;
> +#ifndef CONFIG_64BIT
> + u64 clock_pelt_idle_copy;
> + u64 enter_idle_copy;
> +#endif
>
> atomic_t nr_iowait;
>
> --
> 2.25.1
>

2022-04-29 08:31:23

by Tao Zhou

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

Hi Vincent,

On Thu, Apr 28, 2022 at 03:38:39PM +0200, Vincent Guittot wrote:

> On Wed, 27 Apr 2022 at 19:37, Tao Zhou <[email protected]> wrote:
> >
> > On Wed, Apr 27, 2022 at 03:32:59PM +0100, Vincent Donnefort wrote:
> > > Before being migrated to a new CPU, a task sees its PELT values
> > > synchronized with rq last_update_time. Once done, that same task will also
> > > have its sched_avg last_update_time reset. This means the time between
> > > the migration and the last clock update (B) will not be accounted for in
> > > util_avg and a discontinuity will appear. This issue is amplified by the
> > > PELT clock scaling. If the clock hasn't been updated while the CPU is
> > > idle, clock_pelt will not be aligned with clock_task and that time (A)
> > > will be also lost.
> > >
> > > ---------|----- A -----|-----------|------- B -----|>
> > > clock_pelt clock_task clock now
> > >
> > > This is especially problematic for asymmetric CPU capacity systems which
> > > need stable util_avg signals for task placement and energy estimation.
> > >
> > > Ideally, this problem would be solved by updating the runqueue clocks
> > > before the migration. But that would require taking the runqueue lock
> > > which is quite expensive [1]. Instead estimate the missing time and update
> > > the task util_avg with that value:
> > >
> > > A + B = clock_task - clock_pelt + sched_clock_cpu() - clock
> > >
> > > sched_clock_cpu() is a costly function. Limit the usage to the case where
> > > the source CPU is idle as we know this is when the clock is having the
> > > biggest risk of being outdated.
> > >
> > > Neither clock_task, clock_pelt nor clock can be accessed without the
> > > runqueue lock. We then need to store those values in a timestamp variable
> > > which can be accessed during the migration. rq's enter_idle will give the
> > > wall-clock time when the rq went idle. We have then:
> > >
> > > B = sched_clock_cpu() - rq->enter_idle.
> > >
> > > Then, to catch-up the PELT clock scaling (A), two cases:
> > >
> > > * !CFS_BANDWIDTH: We can simply use clock_task(). This value is stored
> > > in rq's clock_pelt_idle, before the rq enters idle. The estimated time
> > > is then:
> > >
> > > rq->clock_pelt_idle + sched_clock_cpu() - rq->enter_idle.
> > >
> > > * CFS_BANDWIDTH: We can't catch-up with clock_task because of the
> > > throttled_clock_task_time offset. cfs_rq's clock_pelt_idle is then
> > > giving the PELT clock when the cfs_rq becomes idle. This gives:
> > >
> > > A = rq->clock_pelt_idle - cfs_rq->clock_pelt_idle
> > >
> > > And gives the following estimated time:
> > >
> > > cfs_rq->last_update_time +
> > > rq->clock_pelt_idle - cfs_rq->clock_pelt_idle + (A)
> > > sched_clock_cpu() - rq->enter_idle (B)
> > >
> > > The (B) part of the missing time is however an estimation that doesn't
> > > take into account IRQ and Paravirt time.
> > >
> > > [1] https://lore.kernel.org/all/[email protected]/
> > >
> > > Signed-off-by: Vincent Donnefort <[email protected]>
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index abd1feeec0c2..9cd506dc682c 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -3694,6 +3694,57 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum
> > >
> > > #endif /* CONFIG_FAIR_GROUP_SCHED */
> > >
> > > +#ifdef CONFIG_NO_HZ_COMMON
> > > +static inline void migrate_se_pelt_lag(struct sched_entity *se)
> > > +{
> > > + struct cfs_rq *cfs_rq;
> > > + struct rq *rq;
> > > + bool is_idle;
> > > + u64 now;
> > > +
>
> would it make sense to check if pelt value of the task are not fully
> decayed before starting the below : ie after syncing with
> last_update_time of the cfs

The diff below includes this check.

> > > + cfs_rq = cfs_rq_of(se);
> > > + rq = rq_of(cfs_rq);
> > > +
> > > + rcu_read_lock();
> > > + is_idle = is_idle_task(rcu_dereference(rq->curr));
> > > + rcu_read_unlock();
> > > +
> > > + /*
> > > + * The lag estimation comes with a cost we don't want to pay all the
> > > + * time. Hence, limiting to the case where the source CPU is idle and
> > > + * we know we are at the greatest risk to have an outdated clock.
> > > + */
> > > + if (!is_idle)
> > > + return;
> > > +
> > > + /*
> > > + * estimated "now" is:
> > > + * last_update_time +
> > > + * PELT scaling (rq->clock_pelt_idle - cfs_rq->clock_pelt_idle) +
>
> PELT scaling is in fact the time between cfs becoming idle and rq
> becoming idle. Naming it PELT scaling is misleading because even at
> max frequency (ie without pelt scaling) we can have this delta.
>
> > > + * rq clock lag (sched_clock_cpu() - rq->enter_idle)
>
> and this is the time between rq becoming idle and current time
>
> > > + *
> > > + * The PELT scaling contribution is always 0 when !CFS_BANDWIDTH.
> > > + * (see clock_pelt = clock_task in _update_idle_rq_clock_pelt())
>
> The contribution becomes 0 because we use the same clock reference
>
> last_update_time (cfs_clock_pelt when cfs became idle) +
> rq->clock_pelt_idle (rq_clock_pelt when rq became idle) -
> cfs_rq->clock_pelt_idle (rq_clock_pelt when cfs became idle)
>
> when !CFS_BANDWIDTH, cfs_clock_pelt == rq_clock_pelt because there is
> no throttling offset (which can dynamically change)
> so we have:
>
> last_update_time (rq_clock_pelt when cfs became idle) +
> rq->clock_pelt_idle (rq_clock_pelt when rq became idle) -
> cfs_rq->clock_pelt_idle (rq_clock_pelt when cfs became idle)
>
> which is equals to rq->clock_pelt_idle (rq_clock_pelt when rq became idle)
>
> This also means that we only need a snapshot of the
> cfs_rq->throttled_clock_pelt_time when cfs became idle and the
> equation becomes like below for CFS_BANDWIDTH
>
> rq->clock_pelt_idle - snapshot of cfs_rq->throttled_clock_pelt_time
> when entering idle
>
> which remove one u64_u32_load

I've included these as comments in the diff below.

> > > + */
> > > +#ifdef CONFIG_CFS_BANDWIDTH
> > > + now = u64_u32_load(cfs_rq->clock_pelt_idle);
> > > + /* The clock has been stopped for throttling */
> > > + if (now == U64_MAX)
> > > + return;
> > > +
> > > + now = u64_u32_load(rq->clock_pelt_idle) - now;
> > > + now += cfs_rq_last_update_time(cfs_rq);
> > > +#else
> > > + now = u64_u32_load(rq->clock_pelt_idle);
> > > +#endif
> > > + now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
> > > +
> > > + __update_load_avg_blocked_se(now, se);
> > > +}
> > > +#else
> > > +static void migrate_se_pelt_lag(struct sched_entity *se) {}
> > > +#endif
> > > +
> > > /**
> > > * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
> > > * @now: current time, as per cfs_rq_clock_pelt()
> > > @@ -4429,6 +4480,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > > */
> > > if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
> > > update_min_vruntime(cfs_rq);
> > > +
> > > + if (cfs_rq->nr_running == 0)
> > > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > > }
> > >
> > > /*
> > > @@ -6946,6 +7000,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
> > > */
> > > static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > > {
> > > + struct sched_entity *se = &p->se;
> > > +
> > > /*
> > > * As blocked tasks retain absolute vruntime the migration needs to
> > > * deal with this by subtracting the old and adding the new
> > > @@ -6953,7 +7009,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > > * the task on the new runqueue.
> > > */
> > > if (READ_ONCE(p->__state) == TASK_WAKING) {
> > > - struct sched_entity *se = &p->se;
> > > struct cfs_rq *cfs_rq = cfs_rq_of(se);
> > >
> > > se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
> > > @@ -6965,25 +7020,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > > * rq->lock and can modify state directly.
> > > */
> > > lockdep_assert_rq_held(task_rq(p));
> > > - detach_entity_cfs_rq(&p->se);
> > > + detach_entity_cfs_rq(se);
> > >
> > > } else {
> > > + remove_entity_load_avg(se);
> > > +
> > > /*
> > > - * We are supposed to update the task to "current" time, then
> > > - * its up to date and ready to go to new CPU/cfs_rq. But we
> > > - * have difficulty in getting what current time is, so simply
> > > - * throw away the out-of-date time. This will result in the
> > > - * wakee task is less decayed, but giving the wakee more load
> > > - * sounds not bad.
> > > + * Here, the task's PELT values have been updated according to
> > > + * the current rq's clock. But if that clock hasn't been
> > > + * updated in a while, a substantial idle time will be missed,
> > > + * leading to an inflation after wake-up on the new rq.
> > > + *
> > > + * Estimate the missing time from the cfs_rq last_update_time
> > > + * and update sched_avg to improve the PELT continuity after
> > > + * migration.
> > > */
> > > - remove_entity_load_avg(&p->se);
> > > + migrate_se_pelt_lag(se);
> > > }
> > >
> > > /* Tell new CPU we are migrated */
> > > - p->se.avg.last_update_time = 0;
> > > + se->avg.last_update_time = 0;
> > >
> > > /* We have migrated, no longer consider this task hot */
> > > - p->se.exec_start = 0;
> > > + se->exec_start = 0;
> > >
> > > update_scan_period(p, new_cpu);
> > > }
> > > @@ -8149,6 +8208,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> > > if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> > > update_tg_load_avg(cfs_rq);
> > >
> > > + /* sync clock_pelt_idle with last update */
> > > + if (cfs_rq->nr_running == 0)
> > > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> >
> > I think that if cfs_rq->nr_running == 0 then use cfs rq pelt_idle to update
> > idle cfs rq.
>
> update_blocked_averages() updates all cfs rq to be aligned with now so
> we don't need to calculate an estimated now. update_rq_clock(rq) is
> called 1st to update the rq->clock and childs
>
> With only need to save when happened the last update which is done in
> update_rq_clock_pelt(rq) for rq->clock_pelt and with
> update_idle_cfs_rq_clock_pelt(cfs) for the cfs_rq_clock_pelt

I missed this.

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a68482d66535..98c81bdb120a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3357,6 +3357,29 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
return true;
}

+static inline bool task_se_is_decayed(struct sched_entity *se)
+{
+ if (se->avg.load_sum)
+ return false;
+
+ if (se->avg.util_sum)
+ return false;
+
+ if (se->avg.runnable_sum)
+ return false;
+
+ /*
+ * _avg must be null when _sum are null because _avg = _sum / divider
+ * Make sure that rounding and/or propagation of PELT values never
+ * break this.
+ */
+ SCHED_WARN_ON(se->avg.load_avg ||
+ se->avg.util_avg ||
+ se->avg.runnable_avg);
+
+ return true;
+}
+
/**
* update_tg_load_avg - update the tg's load avg
* @cfs_rq: the cfs_rq whose avg changed
@@ -3710,6 +3733,77 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum

#endif /* CONFIG_FAIR_GROUP_SCHED */

+#ifdef CONFIG_NO_HZ_COMMON
+static inline void migrate_se_pelt_lag(struct sched_entity *se)
+{
+ struct cfs_rq *cfs_rq;
+ struct rq *rq;
+ bool is_idle;
+ u64 now;
+
+ /* check if the PELT values of the task are fully decayed */
+ if (task_se_is_decayed(se))
+ return;
+
+ cfs_rq = cfs_rq_of(se);
+ rq = rq_of(cfs_rq);
+
+ rcu_read_lock();
+ is_idle = is_idle_task(rcu_dereference(rq->curr));
+ rcu_read_unlock();
+
+ /*
+ * The lag estimation comes with a cost we don't want to pay all the
+ * time. Hence, limiting to the case where the source CPU is idle and
+ * we know we are at the greatest risk to have an outdated clock.
+ */
+ if (!is_idle)
+ return;
+
+ /*
+ * estimated "now" is:
+ * last_update_time (cfs_clock_pelt when cfs became idle) +
+ * rq->clock_pelt_idle (rq_clock_pelt when rq became idle) -
+ * cfs_rq->clock_pelt_idle (rq_clock_pelt when cfs became idle)
+ *
+ * PELT idle lag is in fact the time between cfs becoming idle and
+ * rq becoming idle.
+ * rq clock lag is the time between rq becoming idle and current time.
+ *
+ * when !CFS_BANDWIDTH, cfs_clock_pelt == rq_clock_pelt because there is
+ * no throttling offset (which can dynamically change)
+ * so we have:
+ * last_update_time (rq_clock_pelt when cfs became idle) +
+ * rq->clock_pelt_idle (rq_clock_pelt when rq became idle) -
+ * cfs_rq->clock_pelt_idle (rq_clock_pelt when cfs became idle)
+ *
+ * which is equals to rq->clock_pelt_idle (rq_clock_pelt when rq became idle)
+ * This also means that we only need a snapshot of the
+ * cfs_rq->throttled_clock_pelt_time when cfs became idle and the
+ * equation becomes like below for CFS_BANDWIDTH
+ * rq->clock_pelt_idle - snapshot of cfs_rq->throttled_clock_pelt_time
+ * when entering idle
+ *
+ */
+#ifdef CONFIG_CFS_BANDWIDTH
+ now = u64_u32_load(cfs_rq->throttled_clock_pelt_time);
+ /* The clock has been stopped for throttling */
+ if (now == U64_MAX)
+ return;
+
+ now = u64_u32_load(rq->clock_pelt_idle) - now;
+#else
+ now = u64_u32_load(rq->clock_pelt_idle);
+#endif
+ now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
+
+ __update_load_avg_blocked_se(now, se);
+}
+#else
+static void migrate_se_pelt_lag(struct sched_entity *se) {}
+#endif
+
+
/**
* update_cfs_rq_load_avg - update the cfs_rq's load/util averages
* @now: current time, as per cfs_rq_clock_pelt()
@@ -4191,6 +4285,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
return true;
}

+static inline bool task_se_is_decayed(struct sched_entity *se)
+{
+ return true;
+}
+
#define UPDATE_TG 0x0
#define SKIP_AGE_LOAD 0x0
#define DO_ATTACH 0x0
@@ -4465,6 +4564,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
*/
if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
update_min_vruntime(cfs_rq);
+
+ if (cfs_rq->nr_running == 0)
+ update_idle_cfs_rq_clock_pelt(cfs_rq);
}

/*
@@ -6982,6 +7084,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
*/
static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
{
+ struct sched_entity *se = &p->se;
+
/*
* As blocked tasks retain absolute vruntime the migration needs to
* deal with this by subtracting the old and adding the new
@@ -6989,7 +7093,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
* the task on the new runqueue.
*/
if (READ_ONCE(p->__state) == TASK_WAKING) {
- struct sched_entity *se = &p->se;
struct cfs_rq *cfs_rq = cfs_rq_of(se);
u64 min_vruntime;

@@ -7014,25 +7117,28 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
* rq->lock and can modify state directly.
*/
lockdep_assert_rq_held(task_rq(p));
- detach_entity_cfs_rq(&p->se);
+ detach_entity_cfs_rq(se);

} else {
+ remove_entity_load_avg(&p->se);
/*
- * We are supposed to update the task to "current" time, then
- * its up to date and ready to go to new CPU/cfs_rq. But we
- * have difficulty in getting what current time is, so simply
- * throw away the out-of-date time. This will result in the
- * wakee task is less decayed, but giving the wakee more load
- * sounds not bad.
+ * Here, the task's PELT values have been updated according to
+ * the current rq's clock. But if that clock hasn't been
+ * updated in a while, a substantial idle time will be missed,
+ * leading to an inflation after wake-up on the new rq.
+ *
+ * Estimate the missing time from the cfs_rq last_update_time
+ * and update sched_avg to improve the PELT continuity after
+ * migration.
*/
- remove_entity_load_avg(&p->se);
+ migrate_se_pelt_lag(se);
}

/* Tell new CPU we are migrated */
- p->se.avg.last_update_time = 0;
+ se->avg.last_update_time = 0;

/* We have migrated, no longer consider this task hot */
- p->se.exec_start = 0;
+ se->exec_start = 0;

update_scan_period(p, new_cpu);
}
@@ -8198,6 +8304,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
update_tg_load_avg(cfs_rq);

+ /* sync clock_pelt_idle with last update */
+ if (cfs_rq->nr_running == 0)
+ update_idle_cfs_rq_clock_pelt(cfs_rq);
+
if (cfs_rq == &rq->cfs)
decayed = true;
}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index c336f5f481bc..0a01fe1b6ff4 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -61,6 +61,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
WRITE_ONCE(avg->util_est.enqueued, enqueued);
}

+static inline u64 rq_clock_pelt(struct rq *rq)
+{
+ lockdep_assert_rq_held(rq);
+ assert_clock_updated(rq);
+
+ return rq->clock_pelt - rq->lost_idle_time;
+}
+
+/* The rq is idle, we can sync to clock_task */
+static inline void _update_idle_rq_clock_pelt(struct rq *rq)
+{
+ rq->clock_pelt = rq_clock_task(rq);
+
+ u64_u32_store(rq->enter_idle, rq_clock(rq));
+ u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
+}
+
/*
* The clock_pelt scales the time to reflect the effective amount of
* computation done during the running delta time but then sync back to
@@ -76,8 +93,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
{
if (unlikely(is_idle_task(rq->curr))) {
- /* The rq is idle, we can sync to clock_task */
- rq->clock_pelt = rq_clock_task(rq);
+ _update_idle_rq_clock_pelt(rq);
return;
}

@@ -130,17 +146,24 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
*/
if (util_sum >= divider)
rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
-}

-static inline u64 rq_clock_pelt(struct rq *rq)
-{
- lockdep_assert_rq_held(rq);
- assert_clock_updated(rq);
-
- return rq->clock_pelt - rq->lost_idle_time;
+ _update_idle_rq_clock_pelt(rq);
}

#ifdef CONFIG_CFS_BANDWIDTH
+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+{
+ if (unlikely(cfs_rq->throttle_count)) {
+ u64_u32_store(cfs_rq->clock_pelt_idle, U64_MAX);
+ u64_u32_store(cfs_rq->throttled_clock_pelt_time, U64_MAX);
+ } else {
+ u64_u32_store(cfs_rq->clock_pelt_idle,
+ rq_clock_pelt(rq_of(cfs_rq)));
+ u64_u32_store(cfs_rq->throttled_clock_pelt_time,
+ cfs_rq->throttled_clock_task_time);
+ }
+}
+
/* rq->task_clock normalized against any time this cfs_rq has spent throttled */
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
@@ -150,6 +173,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
}
#else
+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
return rq_clock_pelt(rq_of(cfs_rq));
@@ -204,6 +228,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
static inline void
update_idle_rq_clock_pelt(struct rq *rq) { }

+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
#endif


diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 8dccb34eb190..3bd77a011676 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -602,6 +602,12 @@ struct cfs_rq {
int runtime_enabled;
s64 runtime_remaining;

+ u64 clock_pelt_idle;
+ u64 throttled_clock_pelt_time;
+#ifndef CONFIG_64BIT
+ u64 clock_pelt_idle_copy;
+ u64 throttled_clock_pelt_time_copy;
+#endif
u64 throttled_clock;
u64 throttled_clock_task;
u64 throttled_clock_task_time;
@@ -974,6 +980,12 @@ struct rq {
u64 clock_task ____cacheline_aligned;
u64 clock_pelt;
unsigned long lost_idle_time;
+ u64 clock_pelt_idle;
+ u64 enter_idle;
+#ifndef CONFIG_64BIT
+ u64 clock_pelt_idle_copy;
+ u64 enter_idle_copy;
+#endif

atomic_t nr_iowait;


Thanks,
Tao
> >
> > if (!cfs_rq->nr_running) {
> > /* A part. calculation of idle cfs rq */
> > calculate now like in migrate_se_pelt_lag().
> > decay = update_cfs_rq_load_avg(now, cfs_rq);
> > } else {
> > decay = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq))
> > }
> >
> > if (decay) {
> > update_tg_load_avg(cfs_rq);
> > if (cfs_rq == &rq->cfs)
> > decayed == ture;
> > }
> >
> > Thanks,
> > Tao
> > > if (cfs_rq == &rq->cfs)
> > > decayed = true;
> > > }
> > > diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> > > index 4ff2ed4f8fa1..6b39e07b2919 100644
> > > --- a/kernel/sched/pelt.h
> > > +++ b/kernel/sched/pelt.h
> > > @@ -61,6 +61,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> > > WRITE_ONCE(avg->util_est.enqueued, enqueued);
> > > }
> > >
> > > +static inline u64 rq_clock_pelt(struct rq *rq)
> > > +{
> > > + lockdep_assert_rq_held(rq);
> > > + assert_clock_updated(rq);
> > > +
> > > + return rq->clock_pelt - rq->lost_idle_time;
> > > +}
> > > +
> > > +/* The rq is idle, we can sync to clock_task */
> > > +static inline void _update_idle_rq_clock_pelt(struct rq *rq)
> > > +{
> > > + rq->clock_pelt = rq_clock_task(rq);
> > > +
> > > + u64_u32_store(rq->enter_idle, rq_clock(rq));
> > > + u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
> > > +}
> > > +
> > > /*
> > > * The clock_pelt scales the time to reflect the effective amount of
> > > * computation done during the running delta time but then sync back to
> > > @@ -76,8 +93,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> > > static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
> > > {
> > > if (unlikely(is_idle_task(rq->curr))) {
> > > - /* The rq is idle, we can sync to clock_task */
> > > - rq->clock_pelt = rq_clock_task(rq);
> > > + _update_idle_rq_clock_pelt(rq);
> > > return;
> > > }
> > >
> > > @@ -130,17 +146,20 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
> > > */
> > > if (util_sum >= divider)
> > > rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> > > +
> > > + _update_idle_rq_clock_pelt(rq);
> > > }
> > >
> > > -static inline u64 rq_clock_pelt(struct rq *rq)
> > > +#ifdef CONFIG_CFS_BANDWIDTH
> > > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > > {
> > > - lockdep_assert_rq_held(rq);
> > > - assert_clock_updated(rq);
> > > -
> > > - return rq->clock_pelt - rq->lost_idle_time;
> > > + if (unlikely(cfs_rq->throttle_count))
> > > + u64_u32_store(cfs_rq->clock_pelt_idle, U64_MAX);
> > > + else
> > > + u64_u32_store(cfs_rq->clock_pelt_idle,
> > > + rq_clock_pelt(rq_of(cfs_rq)));
> > > }
> > >
> > > -#ifdef CONFIG_CFS_BANDWIDTH
> > > /* rq->task_clock normalized against any time this cfs_rq has spent throttled */
> > > static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > > {
> > > @@ -150,6 +169,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > > return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
> > > }
> > > #else
> > > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> > > static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > > {
> > > return rq_clock_pelt(rq_of(cfs_rq));
> > > @@ -204,6 +224,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
> > > static inline void
> > > update_idle_rq_clock_pelt(struct rq *rq) { }
> > >
> > > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> > > #endif
> > >
> > >
> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index e2cf6e48b165..07014e8cbae2 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -641,6 +641,10 @@ struct cfs_rq {
> > > int runtime_enabled;
> > > s64 runtime_remaining;
> > >
> > > + u64 clock_pelt_idle;
> > > +#ifndef CONFIG_64BIT
> > > + u64 clock_pelt_idle_copy;
> > > +#endif
> > > u64 throttled_clock;
> > > u64 throttled_clock_pelt;
> > > u64 throttled_clock_pelt_time;
> > > @@ -1013,6 +1017,12 @@ struct rq {
> > > u64 clock_task ____cacheline_aligned;
> > > u64 clock_pelt;
> > > unsigned long lost_idle_time;
> > > + u64 clock_pelt_idle;
> > > + u64 enter_idle;
> > > +#ifndef CONFIG_64BIT
> > > + u64 clock_pelt_idle_copy;
> > > + u64 enter_idle_copy;
> > > +#endif
> > >
> > > atomic_t nr_iowait;
> > >
> > > --
> > > 2.25.1
> > >

2022-04-29 12:01:50

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

Le vendredi 29 avril 2022 à 01:22:19 (+0800), Tao Zhou a écrit :
> Hi Vincent,
>
> On Thu, Apr 28, 2022 at 03:38:39PM +0200, Vincent Guittot wrote:
>
> > On Wed, 27 Apr 2022 at 19:37, Tao Zhou <[email protected]> wrote:
> > >

[..]

> > > > + /* sync clock_pelt_idle with last update */
> > > > + if (cfs_rq->nr_running == 0)
> > > > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > >
> > > I think that if cfs_rq->nr_running == 0 then use cfs rq pelt_idle to update
> > > idle cfs rq.
> >
> > update_blocked_averages() updates all cfs rq to be aligned with now so
> > we don't need to calculate an estimated now. update_rq_clock(rq) is
> > called 1st to update the rq->clock and childs
> >
> > With only need to save when happened the last update which is done in
> > update_rq_clock_pelt(rq) for rq->clock_pelt and with
> > update_idle_cfs_rq_clock_pelt(cfs) for the cfs_rq_clock_pelt
>
> I missed this.

I ended up with something a bit different:

---
kernel/sched/fair.c | 133 ++++++++++++++++++++++++++++++++++---------
kernel/sched/pelt.h | 66 ++++++++++++++++++---
kernel/sched/sched.h | 10 ++++
3 files changed, 174 insertions(+), 35 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index abd1feeec0c2..63e4cf225292 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3335,27 +3335,12 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
if (cfs_rq->load.weight)
return false;

- if (cfs_rq->avg.load_sum)
- return false;
-
- if (cfs_rq->avg.util_sum)
- return false;
-
- if (cfs_rq->avg.runnable_sum)
+ if (load_avg_is_decayed(&cfs_rq->avg))
return false;

if (child_cfs_rq_on_list(cfs_rq))
return false;

- /*
- * _avg must be null when _sum are null because _avg = _sum / divider
- * Make sure that rounding and/or propagation of PELT values never
- * break this.
- */
- SCHED_WARN_ON(cfs_rq->avg.load_avg ||
- cfs_rq->avg.util_avg ||
- cfs_rq->avg.runnable_avg);
-
return true;
}

@@ -3694,6 +3679,88 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum

#endif /* CONFIG_FAIR_GROUP_SCHED */

+#ifdef CONFIG_NO_HZ_COMMON
+static inline void migrate_se_pelt_lag(struct sched_entity *se)
+{
+ struct cfs_rq *cfs_rq;
+ struct rq *rq;
+ bool is_idle;
+ u64 now, throttled = 0;
+
+ /* utilization is already fully decayed */
+ if (load_avg_is_decayed(&se->avg))
+ return;
+
+ cfs_rq = cfs_rq_of(se);
+ rq = rq_of(cfs_rq);
+
+ rcu_read_lock();
+ is_idle = is_idle_task(rcu_dereference(rq->curr));
+ rcu_read_unlock();
+
+ /*
+ * The lag estimation comes with a cost we don't want to pay all the
+ * time. Hence, limiting to the case where the source CPU is idle and
+ * we know we are at the greatest risk to have an outdated clock.
+ */
+ if (!is_idle)
+ return;
+
+ /*
+ * Estimated "now" is:
+ * last_update_time: last update of the cfs_lock_pelt +
+ * cfs_idle_lag: rq_clock_pelt delta between last cfs update and last rq update +
+ * rq_idle_lag: rq_clock delta between last rq update and now
+ *
+ * with
+ *
+ * last_update_time == cfs_clock_pelt()
+ * == rq_clock_pelt() - cfs->throttled_clock_pelt_time
+ *
+ * cfs_idle_lag: rq_clock_pelt() when rq is idle - rq_clock_pelt() when cfs is idle
+ *
+ * rq_idle_lag : sched_clock_cpu() - rq_clock() when rq is idle
+ *
+ * In fact, rq_clock_pelt() that is used for last_update_time and when
+ * cfs is idle are the same because their last update happens at the
+ * same time.
+ *
+ * We can optimize "now" to be:
+ * rq_clock_pelt when rq is idle - cfs->throttled_clock_pelt_time when cfs is idle +
+ * sched_clock_cpu() - rq_clock() when rq is idle
+ *
+ * when rq is idle
+ * rq_clock_pelt() is saved in rq->clock_pelt_idle
+ * rq_clock() is saved in rq->enter_idle
+ *
+ * when cfs is idle
+ * cfs->throttled_clock_pelt_time is saved in cfs_rq->throttled_pelt_idle
+ *
+ * When !CFS_BANDWIDTH, cfs->throttled_clock_pelt_time is null
+ */
+
+#ifdef CONFIG_CFS_BANDWIDTH
+ throttled = u64_u32_load(cfs_rq->throttled_pelt_idle);
+ /* The clock has been stopped for throttling */
+ if (throttled == U64_MAX)
+ return;
+#endif
+
+ now = u64_u32_load(rq->clock_pelt_idle);
+ now -= throttled;
+
+ /* An update happened while computing lag */
+ if (now < cfs_rq_last_update_time(cfs_rq))
+ return;
+
+ now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
+
+ __update_load_avg_blocked_se(now, se);
+}
+#else
+static void migrate_se_pelt_lag(struct sched_entity *se) {}
+#endif
+
/**
* update_cfs_rq_load_avg - update the cfs_rq's load/util averages
* @now: current time, as per cfs_rq_clock_pelt()
@@ -4429,6 +4496,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
*/
if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
update_min_vruntime(cfs_rq);
+
+ if (cfs_rq->nr_running == 0)
+ update_idle_cfs_rq_clock_pelt(cfs_rq);
}

/*
@@ -6946,6 +7016,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
*/
static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
{
+ struct sched_entity *se = &p->se;
+
/*
* As blocked tasks retain absolute vruntime the migration needs to
* deal with this by subtracting the old and adding the new
@@ -6953,7 +7025,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
* the task on the new runqueue.
*/
if (READ_ONCE(p->__state) == TASK_WAKING) {
- struct sched_entity *se = &p->se;
struct cfs_rq *cfs_rq = cfs_rq_of(se);

se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
@@ -6965,25 +7036,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
* rq->lock and can modify state directly.
*/
lockdep_assert_rq_held(task_rq(p));
- detach_entity_cfs_rq(&p->se);
+ detach_entity_cfs_rq(se);

} else {
+ remove_entity_load_avg(se);
+
/*
- * We are supposed to update the task to "current" time, then
- * its up to date and ready to go to new CPU/cfs_rq. But we
- * have difficulty in getting what current time is, so simply
- * throw away the out-of-date time. This will result in the
- * wakee task is less decayed, but giving the wakee more load
- * sounds not bad.
+ * Here, the task's PELT values have been updated according to
+ * the current rq's clock. But if that clock hasn't been
+ * updated in a while, a substantial idle time will be missed,
+ * leading to an inflation after wake-up on the new rq.
+ *
+ * Estimate the missing time from the cfs_rq last_update_time
+ * and update sched_avg to improve the PELT continuity after
+ * migration.
*/
- remove_entity_load_avg(&p->se);
+ migrate_se_pelt_lag(se);
}

/* Tell new CPU we are migrated */
- p->se.avg.last_update_time = 0;
+ se->avg.last_update_time = 0;

/* We have migrated, no longer consider this task hot */
- p->se.exec_start = 0;
+ se->exec_start = 0;

update_scan_period(p, new_cpu);
}
@@ -8149,6 +8224,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
update_tg_load_avg(cfs_rq);

+ /* sync clock_pelt_idle with last update */
+ if (cfs_rq->nr_running == 0)
+ update_idle_cfs_rq_clock_pelt(cfs_rq);
+
if (cfs_rq == &rq->cfs)
decayed = true;
}
diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
index 4ff2ed4f8fa1..4143c6dc64dc 100644
--- a/kernel/sched/pelt.h
+++ b/kernel/sched/pelt.h
@@ -37,6 +37,29 @@ update_irq_load_avg(struct rq *rq, u64 running)
}
#endif

+static inline bool load_avg_is_decayed(struct sched_avg *sa)
+{
+ if (sa->load_sum)
+ return false;
+
+ if (sa->util_sum)
+ return false;
+
+ if (sa->runnable_sum)
+ return false;
+
+ /*
+ * _avg must be null when _sum are null because _avg = _sum / divider
+ * Make sure that rounding and/or propagation of PELT values never
+ * break this.
+ */
+ SCHED_WARN_ON(sa->load_avg ||
+ sa->util_avg ||
+ sa->runnable_avg);
+
+ return true;
+}
+
#define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)

static inline u32 get_pelt_divider(struct sched_avg *avg)
@@ -61,6 +84,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
WRITE_ONCE(avg->util_est.enqueued, enqueued);
}

+static inline u64 rq_clock_pelt(struct rq *rq)
+{
+ lockdep_assert_rq_held(rq);
+ assert_clock_updated(rq);
+
+ return rq->clock_pelt - rq->lost_idle_time;
+}
+
+/* The rq is idle, we can sync to clock_task */
+static inline void _update_idle_rq_clock_pelt(struct rq *rq)
+{
+ rq->clock_pelt = rq_clock_task(rq);
+
+ u64_u32_store(rq->enter_idle, rq_clock(rq));
+ u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
+}
+
/*
* The clock_pelt scales the time to reflect the effective amount of
* computation done during the running delta time but then sync back to
@@ -76,8 +116,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
{
if (unlikely(is_idle_task(rq->curr))) {
- /* The rq is idle, we can sync to clock_task */
- rq->clock_pelt = rq_clock_task(rq);
+ _update_idle_rq_clock_pelt(rq);
return;
}

@@ -130,17 +169,26 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
*/
if (util_sum >= divider)
rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
+
+ _update_idle_rq_clock_pelt(rq);
}

-static inline u64 rq_clock_pelt(struct rq *rq)
+#ifdef CONFIG_CFS_BANDWIDTH
+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
- lockdep_assert_rq_held(rq);
- assert_clock_updated(rq);
-
- return rq->clock_pelt - rq->lost_idle_time;
+ /*
+ * Make sure that pending update of rq->clock_pelt_idle and
+ * rq->enter_idle are visible during update_blocked_average() before
+ * updating cfs_rq->throttled_pelt_idle.
+ */
+ smp_wmb();
+ if (unlikely(cfs_rq->throttle_count))
+ u64_u32_store(cfs_rq->throttled_pelt_idle, U64_MAX);
+ else
+ u64_u32_store(cfs_rq->throttled_pelt_idle,
+ cfs_rq->throttled_clock_pelt_time);
}

-#ifdef CONFIG_CFS_BANDWIDTH
/* rq->task_clock normalized against any time this cfs_rq has spent throttled */
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
@@ -150,6 +198,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
}
#else
+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
{
return rq_clock_pelt(rq_of(cfs_rq));
@@ -204,6 +253,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
static inline void
update_idle_rq_clock_pelt(struct rq *rq) { }

+static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
#endif


diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e2cf6e48b165..ea9365e1a24e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -641,6 +641,10 @@ struct cfs_rq {
int runtime_enabled;
s64 runtime_remaining;

+ u64 throttled_pelt_idle;
+#ifndef CONFIG_64BIT
+ u64 throttled_pelt_idle_copy;
+#endif
u64 throttled_clock;
u64 throttled_clock_pelt;
u64 throttled_clock_pelt_time;
@@ -1013,6 +1017,12 @@ struct rq {
u64 clock_task ____cacheline_aligned;
u64 clock_pelt;
unsigned long lost_idle_time;
+ u64 clock_pelt_idle;
+ u64 enter_idle;
+#ifndef CONFIG_64BIT
+ u64 clock_pelt_idle_copy;
+ u64 enter_idle_copy;
+#endif

atomic_t nr_iowait;

--
2.17.1

2022-04-29 16:09:36

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

On Wed, 27 Apr 2022 at 19:37, Tao Zhou <[email protected]> wrote:
>
> On Wed, Apr 27, 2022 at 03:32:59PM +0100, Vincent Donnefort wrote:
> > Before being migrated to a new CPU, a task sees its PELT values
> > synchronized with rq last_update_time. Once done, that same task will also
> > have its sched_avg last_update_time reset. This means the time between
> > the migration and the last clock update (B) will not be accounted for in
> > util_avg and a discontinuity will appear. This issue is amplified by the
> > PELT clock scaling. If the clock hasn't been updated while the CPU is
> > idle, clock_pelt will not be aligned with clock_task and that time (A)
> > will be also lost.
> >
> > ---------|----- A -----|-----------|------- B -----|>
> > clock_pelt clock_task clock now
> >
> > This is especially problematic for asymmetric CPU capacity systems which
> > need stable util_avg signals for task placement and energy estimation.
> >
> > Ideally, this problem would be solved by updating the runqueue clocks
> > before the migration. But that would require taking the runqueue lock
> > which is quite expensive [1]. Instead estimate the missing time and update
> > the task util_avg with that value:
> >
> > A + B = clock_task - clock_pelt + sched_clock_cpu() - clock
> >
> > sched_clock_cpu() is a costly function. Limit the usage to the case where
> > the source CPU is idle as we know this is when the clock is having the
> > biggest risk of being outdated.
> >
> > Neither clock_task, clock_pelt nor clock can be accessed without the
> > runqueue lock. We then need to store those values in a timestamp variable
> > which can be accessed during the migration. rq's enter_idle will give the
> > wall-clock time when the rq went idle. We have then:
> >
> > B = sched_clock_cpu() - rq->enter_idle.
> >
> > Then, to catch-up the PELT clock scaling (A), two cases:
> >
> > * !CFS_BANDWIDTH: We can simply use clock_task(). This value is stored
> > in rq's clock_pelt_idle, before the rq enters idle. The estimated time
> > is then:
> >
> > rq->clock_pelt_idle + sched_clock_cpu() - rq->enter_idle.
> >
> > * CFS_BANDWIDTH: We can't catch-up with clock_task because of the
> > throttled_clock_task_time offset. cfs_rq's clock_pelt_idle is then
> > giving the PELT clock when the cfs_rq becomes idle. This gives:
> >
> > A = rq->clock_pelt_idle - cfs_rq->clock_pelt_idle
> >
> > And gives the following estimated time:
> >
> > cfs_rq->last_update_time +
> > rq->clock_pelt_idle - cfs_rq->clock_pelt_idle + (A)
> > sched_clock_cpu() - rq->enter_idle (B)
> >
> > The (B) part of the missing time is however an estimation that doesn't
> > take into account IRQ and Paravirt time.
> >
> > [1] https://lore.kernel.org/all/[email protected]/
> >
> > Signed-off-by: Vincent Donnefort <[email protected]>
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index abd1feeec0c2..9cd506dc682c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3694,6 +3694,57 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum
> >
> > #endif /* CONFIG_FAIR_GROUP_SCHED */
> >
> > +#ifdef CONFIG_NO_HZ_COMMON
> > +static inline void migrate_se_pelt_lag(struct sched_entity *se)
> > +{
> > + struct cfs_rq *cfs_rq;
> > + struct rq *rq;
> > + bool is_idle;
> > + u64 now;
> > +

would it make sense to check if pelt value of the task are not fully
decayed before starting the below : ie after syncing with
last_update_time of the cfs
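
Something along these lines, for instance (sketch only, untested; the helper name is
purely illustrative):

	/* all PELT sums of the entity already decayed to 0, nothing to catch up */
	static inline bool se_pelt_is_decayed(struct sched_avg *sa)
	{
		return !sa->load_sum && !sa->util_sum && !sa->runnable_sum;
	}

and at the top of migrate_se_pelt_lag():

	if (se_pelt_is_decayed(&se->avg))
		return;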

> > + cfs_rq = cfs_rq_of(se);
> > + rq = rq_of(cfs_rq);
> > +
> > + rcu_read_lock();
> > + is_idle = is_idle_task(rcu_dereference(rq->curr));
> > + rcu_read_unlock();
> > +
> > + /*
> > + * The lag estimation comes with a cost we don't want to pay all the
> > + * time. Hence, limiting to the case where the source CPU is idle and
> > + * we know we are at the greatest risk to have an outdated clock.
> > + */
> > + if (!is_idle)
> > + return;
> > +
> > + /*
> > + * estimated "now" is:
> > + * last_update_time +
> > + * PELT scaling (rq->clock_pelt_idle - cfs_rq->clock_pelt_idle) +

PELT scaling is in fact the time between cfs becoming idle and rq
becoming idle. Naming it PELT scaling is misleading because even at
max frequency (ie without pelt scaling) we can have this delta.

> > + * rq clock lag (sched_clock_cpu() - rq->enter_idle)

and this is the time between rq becoming idle and current time
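
To illustrate with made-up numbers: if the cfs_rq went idle when rq_clock_pelt() read
1000us, the rq went idle when it read 1200us, and 300us of wall-clock time have elapsed
since the rq entered idle, the estimated "now" is last_update_time + 200us (cfs-idle to
rq-idle delta) + 300us (rq-idle to now delta).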

> > + *
> > + * The PELT scaling contribution is always 0 when !CFS_BANDWIDTH.
> > + * (see clock_pelt = clock_task in _update_idle_rq_clock_pelt())

The contribution becomes 0 because we use the same clock reference

last_update_time (cfs_clock_pelt when cfs became idle) +
rq->clock_pelt_idle (rq_clock_pelt when rq became idle) -
cfs_rq->clock_pelt_idle (rq_clock_pelt when cfs became idle)

when !CFS_BANDWIDTH, cfs_clock_pelt == rq_clock_pelt because there is
no throttling offset (which can dynamically change)
so we have:

last_update_time (rq_clock_pelt when cfs became idle) +
rq->clock_pelt_idle (rq_clock_pelt when rq became idle) -
cfs_rq->clock_pelt_idle (rq_clock_pelt when cfs became idle)

which is equals to rq->clock_pelt_idle (rq_clock_pelt when rq became idle)

This also means that we only need a snapshot of the
cfs_rq->throttled_clock_pelt_time when cfs became idle and the
equation becomes like below for CFS_BANDWIDTH

rq->clock_pelt_idle - snapshot of cfs_rq->throttled_clock_pelt_time
when entering idle

which remove one u64_u32_load
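
In code, the simplified estimation would then look roughly like the below (sketch only;
the name of the snapshot field is illustrative):

	/* snapshot of cfs_rq->throttled_clock_pelt_time taken when the cfs_rq went idle */
	throttled = u64_u32_load(cfs_rq->throttled_pelt_idle);
	/* the clock has been stopped for throttling */
	if (throttled == U64_MAX)
		return;

	now = u64_u32_load(rq->clock_pelt_idle) - throttled;
	now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);

	__update_load_avg_blocked_se(now, se);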

> > + */
> > +#ifdef CONFIG_CFS_BANDWIDTH
> > + now = u64_u32_load(cfs_rq->clock_pelt_idle);
> > + /* The clock has been stopped for throttling */
> > + if (now == U64_MAX)
> > + return;
> > +
> > + now = u64_u32_load(rq->clock_pelt_idle) - now;
> > + now += cfs_rq_last_update_time(cfs_rq);
> > +#else
> > + now = u64_u32_load(rq->clock_pelt_idle);
> > +#endif
> > + now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
> > +
> > + __update_load_avg_blocked_se(now, se);
> > +}
> > +#else
> > +static void migrate_se_pelt_lag(struct sched_entity *se) {}
> > +#endif
> > +
> > /**
> > * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
> > * @now: current time, as per cfs_rq_clock_pelt()
> > @@ -4429,6 +4480,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > */
> > if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
> > update_min_vruntime(cfs_rq);
> > +
> > + if (cfs_rq->nr_running == 0)
> > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > }
> >
> > /*
> > @@ -6946,6 +7000,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
> > */
> > static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > {
> > + struct sched_entity *se = &p->se;
> > +
> > /*
> > * As blocked tasks retain absolute vruntime the migration needs to
> > * deal with this by subtracting the old and adding the new
> > @@ -6953,7 +7009,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > * the task on the new runqueue.
> > */
> > if (READ_ONCE(p->__state) == TASK_WAKING) {
> > - struct sched_entity *se = &p->se;
> > struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >
> > se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
> > @@ -6965,25 +7020,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > * rq->lock and can modify state directly.
> > */
> > lockdep_assert_rq_held(task_rq(p));
> > - detach_entity_cfs_rq(&p->se);
> > + detach_entity_cfs_rq(se);
> >
> > } else {
> > + remove_entity_load_avg(se);
> > +
> > /*
> > - * We are supposed to update the task to "current" time, then
> > - * its up to date and ready to go to new CPU/cfs_rq. But we
> > - * have difficulty in getting what current time is, so simply
> > - * throw away the out-of-date time. This will result in the
> > - * wakee task is less decayed, but giving the wakee more load
> > - * sounds not bad.
> > + * Here, the task's PELT values have been updated according to
> > + * the current rq's clock. But if that clock hasn't been
> > + * updated in a while, a substantial idle time will be missed,
> > + * leading to an inflation after wake-up on the new rq.
> > + *
> > + * Estimate the missing time from the cfs_rq last_update_time
> > + * and update sched_avg to improve the PELT continuity after
> > + * migration.
> > */
> > - remove_entity_load_avg(&p->se);
> > + migrate_se_pelt_lag(se);
> > }
> >
> > /* Tell new CPU we are migrated */
> > - p->se.avg.last_update_time = 0;
> > + se->avg.last_update_time = 0;
> >
> > /* We have migrated, no longer consider this task hot */
> > - p->se.exec_start = 0;
> > + se->exec_start = 0;
> >
> > update_scan_period(p, new_cpu);
> > }
> > @@ -8149,6 +8208,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> > if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> > update_tg_load_avg(cfs_rq);
> >
> > + /* sync clock_pelt_idle with last update */
> > + if (cfs_rq->nr_running == 0)
> > + update_idle_cfs_rq_clock_pelt(cfs_rq);
>
> I think that if cfs_rq->nr_running == 0 then use cfs rq pelt_idle to update
> idle cfs rq.

update_blocked_averages() updates all cfs rq to be aligned with now so
we don't need to calculate an estimated now. update_rq_clock(rq) is
called 1st to update the rq->clock and childs

With only need to save when happened the last update which is done in
update_rq_clock_pelt(rq) for rq->clock_pelt and with
update_idle_cfs_rq_clock_pelt(cfs) for the cfs_rq_clock_pelt


>
> if (!cfs_rq->nr_running) {
> /* A part. calculation of idle cfs rq */
> calculate now like in migrate_se_pelt_lag().
> decay = update_cfs_rq_load_avg(now, cfs_rq);
> } else {
> decay = update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq))
> }
>
> if (decay) {
> update_tg_load_avg(cfs_rq);
> if (cfs_rq == &rq->cfs)
> decayed == ture;
> }
>
> Thanks,
> Tao
> > if (cfs_rq == &rq->cfs)
> > decayed = true;
> > }
> > diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> > index 4ff2ed4f8fa1..6b39e07b2919 100644
> > --- a/kernel/sched/pelt.h
> > +++ b/kernel/sched/pelt.h
> > @@ -61,6 +61,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> > WRITE_ONCE(avg->util_est.enqueued, enqueued);
> > }
> >
> > +static inline u64 rq_clock_pelt(struct rq *rq)
> > +{
> > + lockdep_assert_rq_held(rq);
> > + assert_clock_updated(rq);
> > +
> > + return rq->clock_pelt - rq->lost_idle_time;
> > +}
> > +
> > +/* The rq is idle, we can sync to clock_task */
> > +static inline void _update_idle_rq_clock_pelt(struct rq *rq)
> > +{
> > + rq->clock_pelt = rq_clock_task(rq);
> > +
> > + u64_u32_store(rq->enter_idle, rq_clock(rq));
> > + u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
> > +}
> > +
> > /*
> > * The clock_pelt scales the time to reflect the effective amount of
> > * computation done during the running delta time but then sync back to
> > @@ -76,8 +93,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> > static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
> > {
> > if (unlikely(is_idle_task(rq->curr))) {
> > - /* The rq is idle, we can sync to clock_task */
> > - rq->clock_pelt = rq_clock_task(rq);
> > + _update_idle_rq_clock_pelt(rq);
> > return;
> > }
> >
> > @@ -130,17 +146,20 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
> > */
> > if (util_sum >= divider)
> > rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> > +
> > + _update_idle_rq_clock_pelt(rq);
> > }
> >
> > -static inline u64 rq_clock_pelt(struct rq *rq)
> > +#ifdef CONFIG_CFS_BANDWIDTH
> > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > {
> > - lockdep_assert_rq_held(rq);
> > - assert_clock_updated(rq);
> > -
> > - return rq->clock_pelt - rq->lost_idle_time;
> > + if (unlikely(cfs_rq->throttle_count))
> > + u64_u32_store(cfs_rq->clock_pelt_idle, U64_MAX);
> > + else
> > + u64_u32_store(cfs_rq->clock_pelt_idle,
> > + rq_clock_pelt(rq_of(cfs_rq)));
> > }
> >
> > -#ifdef CONFIG_CFS_BANDWIDTH
> > /* rq->task_clock normalized against any time this cfs_rq has spent throttled */
> > static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > {
> > @@ -150,6 +169,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
> > }
> > #else
> > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> > static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > {
> > return rq_clock_pelt(rq_of(cfs_rq));
> > @@ -204,6 +224,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
> > static inline void
> > update_idle_rq_clock_pelt(struct rq *rq) { }
> >
> > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> > #endif
> >
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index e2cf6e48b165..07014e8cbae2 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -641,6 +641,10 @@ struct cfs_rq {
> > int runtime_enabled;
> > s64 runtime_remaining;
> >
> > + u64 clock_pelt_idle;
> > +#ifndef CONFIG_64BIT
> > + u64 clock_pelt_idle_copy;
> > +#endif
> > u64 throttled_clock;
> > u64 throttled_clock_pelt;
> > u64 throttled_clock_pelt_time;
> > @@ -1013,6 +1017,12 @@ struct rq {
> > u64 clock_task ____cacheline_aligned;
> > u64 clock_pelt;
> > unsigned long lost_idle_time;
> > + u64 clock_pelt_idle;
> > + u64 enter_idle;
> > +#ifndef CONFIG_64BIT
> > + u64 clock_pelt_idle_copy;
> > + u64 enter_idle_copy;
> > +#endif
> >
> > atomic_t nr_iowait;
> >
> > --
> > 2.25.1
> >

2022-04-29 22:12:27

by Tao Zhou

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

On Fri, Apr 29, 2022 at 10:20:00AM +0200, Vincent Guittot wrote:

> On Friday 29 April 2022 at 01:22:19 (+0800), Tao Zhou wrote:
> > Hi Vincent,
> >
> > On Thu, Apr 28, 2022 at 03:38:39PM +0200, Vincent Guittot wrote:
> >
> > > On Wed, 27 Apr 2022 at 19:37, Tao Zhou <[email protected]> wrote:
> > > >
>
> [..]
>
> > > > > + /* sync clock_pelt_idle with last update */
> > > > > + if (cfs_rq->nr_running == 0)
> > > > > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > > >
> > > > I think that if cfs_rq->nr_running == 0, then the cfs_rq's pelt_idle should be
> > > > used to update the idle cfs_rq.
> > >
> > > update_blocked_averages() updates all cfs_rq to be aligned with now, so
> > > we don't need to calculate an estimated now. update_rq_clock(rq) is
> > > called first to update rq->clock and its children.
> > >
> > > We only need to save when the last update happened, which is done in
> > > update_rq_clock_pelt(rq) for rq->clock_pelt and in
> > > update_idle_cfs_rq_clock_pelt(cfs) for cfs_rq_clock_pelt.
> >
> > I missed this.
>
> I ended up with something a bit different:
>
> ---
> kernel/sched/fair.c | 133 ++++++++++++++++++++++++++++++++++---------
> kernel/sched/pelt.h | 66 ++++++++++++++++++---
> kernel/sched/sched.h | 10 ++++
> 3 files changed, 174 insertions(+), 35 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index abd1feeec0c2..63e4cf225292 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3335,27 +3335,12 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> if (cfs_rq->load.weight)
> return false;
>
> - if (cfs_rq->avg.load_sum)
> - return false;
> -
> - if (cfs_rq->avg.util_sum)
> - return false;
> -
> - if (cfs_rq->avg.runnable_sum)
> + if (load_avg_is_decayed(&cfs_rq->avg))
> return false;
>
> if (child_cfs_rq_on_list(cfs_rq))
> return false;
>
> - /*
> - * _avg must be null when _sum are null because _avg = _sum / divider
> - * Make sure that rounding and/or propagation of PELT values never
> - * break this.
> - */
> - SCHED_WARN_ON(cfs_rq->avg.load_avg ||
> - cfs_rq->avg.util_avg ||
> - cfs_rq->avg.runnable_avg);
> -
> return true;
> }
>
> @@ -3694,6 +3679,88 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum
>
> #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> +#ifdef CONFIG_NO_HZ_COMMON
> +static inline void migrate_se_pelt_lag(struct sched_entity *se)
> +{
> + struct cfs_rq *cfs_rq;
> + struct rq *rq;
> + bool is_idle;
> + u64 now, throttled = 0;
> +
> + /* utilization is already fully decayed */
> + if (load_avg_is_decayed(&se->avg))
> + return;
> +
> + cfs_rq = cfs_rq_of(se);
> + rq = rq_of(cfs_rq);
> +
> + rcu_read_lock();
> + is_idle = is_idle_task(rcu_dereference(rq->curr));
> + rcu_read_unlock();
> +
> + /*
> + * The lag estimation comes with a cost we don't want to pay all the
> + * time. Hence, limiting to the case where the source CPU is idle and
> + * we know we are at the greatest risk to have an outdated clock.
> + */
> + if (!is_idle)
> + return;
> +
> + /*
> + * Estimated "now" is:
> + * last_update_time: last update of the cfs_lock_pelt +
> + * cfs_idle_lag: rq_clock_pelt delta bewteen last cfs update and last rq update +

s/bewteen/between/

> + * rq_idle_lag: rq_clock delta between last rq update and now
> + *
> + * with
> + *
> + * last_update_time == cfs_clock_pelt()
> + * == rq_clock_pelt() - cfs->throttled_clock_pelt_time
> + *
> + * cfs_idle_lag: rq_clock_pelt() when rq is idle - rq_clock_pelt() when cfs is idle
> + *
> + * rq_idle_lag : sched_clock_cpu() - rq_clock() when rq is idle
> + *
> + * In fact, rq_clock_pelt() that is used for last_update_time and when
> + * cfs is idle are the same because their last update happens atthe

s/atthe/at the/

> + * same time.
> + *
> + * We can optimize "now" to be:
> + * rq_clock_pelt when rq is idle - cfs->throttled_clock_pelt_time when cfs is idle +
> + * sched_clock_cpu() - rq_clock() when rq is idle
> + *
> + * when rq is idle
> + * rq_clock_pelt() is saved in rq->clock_pelt_idle
> + * rq_clock() is saved in rq->enter idle
> + *
> + * when cfs is idle
> + * cfs->throttled_clock_pelt_time is saved in cfs_rq->throttled_pelt_idle
> + *
> + * When !CFS_BANDWIDTH, cfs->throttled_clock_pelt_time is null
> + */
> +
> +#ifdef CONFIG_CFS_BANDWIDTH
> + throttled = u64_u32_load(cfs_rq->throttled_pelt_idle);
> + /* The clock has been stopped for throttling */
> + if (throttled == U64_MAX)
> + return;
> +#endif
> +
> + now = u64_u32_load(rq->clock_pelt_idle);
> + now -= throttled;
> +
> + /* An update happened while computing lag */
> + if (now < cfs_rq_last_update_time(cfs_rq))
> + return;
> +
> + now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
> +
> + __update_load_avg_blocked_se(now, se);
> +}
> +#else
> +static void migrate_se_pelt_lag(struct sched_entity *se) {}
> +#endif
> +
> /**
> * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
> * @now: current time, as per cfs_rq_clock_pelt()
> @@ -4429,6 +4496,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> */
> if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
> update_min_vruntime(cfs_rq);
> +
> + if (cfs_rq->nr_running == 0)
> + update_idle_cfs_rq_clock_pelt(cfs_rq);
> }
>
> /*
> @@ -6946,6 +7016,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
> */
> static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> {
> + struct sched_entity *se = &p->se;
> +
> /*
> * As blocked tasks retain absolute vruntime the migration needs to
> * deal with this by subtracting the old and adding the new
> @@ -6953,7 +7025,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> * the task on the new runqueue.
> */
> if (READ_ONCE(p->__state) == TASK_WAKING) {
> - struct sched_entity *se = &p->se;
> struct cfs_rq *cfs_rq = cfs_rq_of(se);
>
> se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
> @@ -6965,25 +7036,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> * rq->lock and can modify state directly.
> */
> lockdep_assert_rq_held(task_rq(p));
> - detach_entity_cfs_rq(&p->se);
> + detach_entity_cfs_rq(se);
>
> } else {
> + remove_entity_load_avg(se);
> +
> /*
> - * We are supposed to update the task to "current" time, then
> - * its up to date and ready to go to new CPU/cfs_rq. But we
> - * have difficulty in getting what current time is, so simply
> - * throw away the out-of-date time. This will result in the
> - * wakee task is less decayed, but giving the wakee more load
> - * sounds not bad.
> + * Here, the task's PELT values have been updated according to
> + * the current rq's clock. But if that clock hasn't been
> + * updated in a while, a substantial idle time will be missed,
> + * leading to an inflation after wake-up on the new rq.
> + *
> + * Estimate the missing time from the cfs_rq last_update_time
> + * and update sched_avg to improve the PELT continuity after
> + * migration.
> */
> - remove_entity_load_avg(&p->se);
> + migrate_se_pelt_lag(se);
> }
>
> /* Tell new CPU we are migrated */
> - p->se.avg.last_update_time = 0;
> + se->avg.last_update_time = 0;
>
> /* We have migrated, no longer consider this task hot */
> - p->se.exec_start = 0;
> + se->exec_start = 0;
>
> update_scan_period(p, new_cpu);
> }
> @@ -8149,6 +8224,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> update_tg_load_avg(cfs_rq);
>
> + /* sync clock_pelt_idle with last update */
> + if (cfs_rq->nr_running == 0)
> + update_idle_cfs_rq_clock_pelt(cfs_rq);
> +
> if (cfs_rq == &rq->cfs)
> decayed = true;
> }
> diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> index 4ff2ed4f8fa1..4143c6dc64dc 100644
> --- a/kernel/sched/pelt.h
> +++ b/kernel/sched/pelt.h
> @@ -37,6 +37,29 @@ update_irq_load_avg(struct rq *rq, u64 running)
> }
> #endif
>
> +static inline bool load_avg_is_decayed(struct sched_avg *sa)
> +{
> + if (sa->load_sum)
> + return false;
> +
> + if (sa->util_sum)
> + return false;
> +
> + if (sa->runnable_sum)
> + return false;
> +
> + /*
> + * _avg must be null when _sum are null because _avg = _sum / divider
> + * Make sure that rounding and/or propagation of PELT values never
> + * break this.
> + */
> + SCHED_WARN_ON(sa->load_avg ||
> + sa->util_avg ||
> + sa->runnable_avg);
> +
> + return true;
> +}
> +
> #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)
>
> static inline u32 get_pelt_divider(struct sched_avg *avg)
> @@ -61,6 +84,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> WRITE_ONCE(avg->util_est.enqueued, enqueued);
> }
>
> +static inline u64 rq_clock_pelt(struct rq *rq)
> +{
> + lockdep_assert_rq_held(rq);
> + assert_clock_updated(rq);
> +
> + return rq->clock_pelt - rq->lost_idle_time;
> +}
> +
> +/* The rq is idle, we can sync to clock_task */
> +static inline void _update_idle_rq_clock_pelt(struct rq *rq)
> +{
> + rq->clock_pelt = rq_clock_task(rq);
> +
> + u64_u32_store(rq->enter_idle, rq_clock(rq));
> + u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
> +}
> +
> /*
> * The clock_pelt scales the time to reflect the effective amount of
> * computation done during the running delta time but then sync back to
> @@ -76,8 +116,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
> {
> if (unlikely(is_idle_task(rq->curr))) {
> - /* The rq is idle, we can sync to clock_task */
> - rq->clock_pelt = rq_clock_task(rq);
> + _update_idle_rq_clock_pelt(rq);
> return;
> }
>
> @@ -130,17 +169,26 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
> */
> if (util_sum >= divider)
> rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> +
> + _update_idle_rq_clock_pelt(rq);
> }
>
> -static inline u64 rq_clock_pelt(struct rq *rq)
> +#ifdef CONFIG_CFS_BANDWIDTH
> +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> {
> - lockdep_assert_rq_held(rq);
> - assert_clock_updated(rq);
> -
> - return rq->clock_pelt - rq->lost_idle_time;
> + /*
> + * Make sure that pending update of rq->clock_pelt_idle and
> + * rq->enter_idle are visible during update_blocked_average() before
> + * updating cfs_rq->throttled_pelt_idle.
> + */
> + smp_wmb();

There is no smp_rmb() on the read side; I think it belongs in
migrate_se_pelt_lag(), before this line:
'now = u64_u32_load(rq->clock_pelt_idle);'
The load of cfs_rq->throttled_pelt_idle must complete before the loads
of rq->clock_pelt_idle and rq->enter_idle, to pair with the write side
in update_idle_cfs_rq_clock_pelt().
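
To illustrate, the ordering on the write side is roughly as below (just a
sketch reusing the names from your patch, not tested):

	/* _update_idle_rq_clock_pelt(), when the rq goes idle */
	u64_u32_store(rq->enter_idle, rq_clock(rq));
	u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));

	/* update_idle_cfs_rq_clock_pelt(), e.g. from __update_blocked_fair() */
	smp_wmb();	/* order the rq snapshots before the cfs_rq snapshot */
	u64_u32_store(cfs_rq->throttled_pelt_idle,
		      cfs_rq->throttled_clock_pelt_time);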

> + if (unlikely(cfs_rq->throttle_count))
> + u64_u32_store(cfs_rq->throttled_pelt_idle, U64_MAX);
> + else
> + u64_u32_store(cfs_rq->throttled_pelt_idle,
> + cfs_rq->throttled_clock_pelt_time);
> }
>
> -#ifdef CONFIG_CFS_BANDWIDTH
> /* rq->task_clock normalized against any time this cfs_rq has spent throttled */
> static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> {
> @@ -150,6 +198,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
> }
> #else
> +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> {
> return rq_clock_pelt(rq_of(cfs_rq));
> @@ -204,6 +253,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
> static inline void
> update_idle_rq_clock_pelt(struct rq *rq) { }
>
> +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> #endif
>
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index e2cf6e48b165..ea9365e1a24e 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -641,6 +641,10 @@ struct cfs_rq {
> int runtime_enabled;
> s64 runtime_remaining;
>
> + u64 throttled_pelt_idle;
> +#ifndef CONFIG_64BIT
> + u64 throttled_pelt_idle_copy;
> +#endif
> u64 throttled_clock;
> u64 throttled_clock_pelt;
> u64 throttled_clock_pelt_time;
> @@ -1013,6 +1017,12 @@ struct rq {
> u64 clock_task ____cacheline_aligned;
> u64 clock_pelt;
> unsigned long lost_idle_time;
> + u64 clock_pelt_idle;
> + u64 enter_idle;
> +#ifndef CONFIG_64BIT
> + u64 clock_pelt_idle_copy;
> + u64 enter_idle_copy;
> +#endif
>
> atomic_t nr_iowait;
>
> --
> 2.17.1

2022-05-01 22:15:25

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v7 2/7] sched/fair: Decay task PELT values during wakeup migration

On Fri, 29 Apr 2022 at 14:51, Tao Zhou <[email protected]> wrote:
>
> On Fri, Apr 29, 2022 at 10:20:00AM +0200, Vincent Guittot wrote:
>
> > On Friday 29 April 2022 at 01:22:19 (+0800), Tao Zhou wrote:
> > > Hi Vincent,
> > >
> > > On Thu, Apr 28, 2022 at 03:38:39PM +0200, Vincent Guittot wrote:
> > >
> > > > On Wed, 27 Apr 2022 at 19:37, Tao Zhou <[email protected]> wrote:
> > > > >
> >
> > [..]
> >
> > > > > > + /* sync clock_pelt_idle with last update */
> > > > > > + if (cfs_rq->nr_running == 0)
> > > > > > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > > > >
> > > > > I think that if cfs_rq->nr_running == 0, then the cfs_rq's pelt_idle should be
> > > > > used to update the idle cfs_rq.
> > > >
> > > > update_blocked_averages() updates all cfs_rq to be aligned with now, so
> > > > we don't need to calculate an estimated now. update_rq_clock(rq) is
> > > > called first to update rq->clock and its children.
> > > >
> > > > We only need to save when the last update happened, which is done in
> > > > update_rq_clock_pelt(rq) for rq->clock_pelt and in
> > > > update_idle_cfs_rq_clock_pelt(cfs) for cfs_rq_clock_pelt.
> > >
> > > I missed this.
> >
> > I ended up with something a bit different:
> >
> > ---
> > kernel/sched/fair.c | 133 ++++++++++++++++++++++++++++++++++---------
> > kernel/sched/pelt.h | 66 ++++++++++++++++++---
> > kernel/sched/sched.h | 10 ++++
> > 3 files changed, 174 insertions(+), 35 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index abd1feeec0c2..63e4cf225292 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3335,27 +3335,12 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> > if (cfs_rq->load.weight)
> > return false;
> >
> > - if (cfs_rq->avg.load_sum)
> > - return false;
> > -
> > - if (cfs_rq->avg.util_sum)
> > - return false;
> > -
> > - if (cfs_rq->avg.runnable_sum)
> > + if (load_avg_is_decayed(&cfs_rq->avg))
> > return false;
> >
> > if (child_cfs_rq_on_list(cfs_rq))
> > return false;
> >
> > - /*
> > - * _avg must be null when _sum are null because _avg = _sum / divider
> > - * Make sure that rounding and/or propagation of PELT values never
> > - * break this.
> > - */
> > - SCHED_WARN_ON(cfs_rq->avg.load_avg ||
> > - cfs_rq->avg.util_avg ||
> > - cfs_rq->avg.runnable_avg);
> > -
> > return true;
> > }
> >
> > @@ -3694,6 +3679,88 @@ static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum
> >
> > #endif /* CONFIG_FAIR_GROUP_SCHED */
> >
> > +#ifdef CONFIG_NO_HZ_COMMON
> > +static inline void migrate_se_pelt_lag(struct sched_entity *se)
> > +{
> > + struct cfs_rq *cfs_rq;
> > + struct rq *rq;
> > + bool is_idle;
> > + u64 now, throttled = 0;
> > +
> > + /* utilization is already fully decayed */
> > + if (load_avg_is_decayed(&se->avg))
> > + return;
> > +
> > + cfs_rq = cfs_rq_of(se);
> > + rq = rq_of(cfs_rq);
> > +
> > + rcu_read_lock();
> > + is_idle = is_idle_task(rcu_dereference(rq->curr));
> > + rcu_read_unlock();
> > +
> > + /*
> > + * The lag estimation comes with a cost we don't want to pay all the
> > + * time. Hence, limiting to the case where the source CPU is idle and
> > + * we know we are at the greatest risk to have an outdated clock.
> > + */
> > + if (!is_idle)
> > + return;
> > +
> > + /*
> > + * Estimated "now" is:
> > + * last_update_time: last update of the cfs_lock_pelt +
> > + * cfs_idle_lag: rq_clock_pelt delta bewteen last cfs update and last rq update +
>
> s/bewteen/between/
>
> > + * rq_idle_lag: rq_clock delta between last rq update and now
> > + *
> > + * with
> > + *
> > + * last_update_time == cfs_clock_pelt()
> > + * == rq_clock_pelt() - cfs->throttled_clock_pelt_time
> > + *
> > + * cfs_idle_lag: rq_clock_pelt() when rq is idle - rq_clock_pelt() when cfs is idle
> > + *
> > + * rq_idle_lag : sched_clock_cpu() - rq_clock() when rq is idle
> > + *
> > + * In fact, rq_clock_pelt() that is used for last_update_time and when
> > + * cfs is idle are the same because their last update happens atthe
>
> s/atthe/at the/
>
> > + * same time.
> > + *
> > + * We can optimize "now" to be:
> > + * rq_clock_pelt when rq is idle - cfs->throttled_clock_pelt_time when cfs is idle +
> > + * sched_clock_cpu() - rq_clock() when rq is idle
> > + *
> > + * when rq is idle
> > + * rq_clock_pelt() is saved in rq->clock_pelt_idle
> > + * rq_clock() is saved in rq->enter idle
> > + *
> > + * when cfs is idle
> > + * cfs->throttled_clock_pelt_time is saved in cfs_rq->throttled_pelt_idle
> > + *
> > + * When !CFS_BANDWIDTH, cfs->throttled_clock_pelt_time is null
> > + */
> > +
> > +#ifdef CONFIG_CFS_BANDWIDTH
> > + throttled = u64_u32_load(cfs_rq->throttled_pelt_idle);
> > + /* The clock has been stopped for throttling */
> > + if (throttled == U64_MAX)
> > + return;
> > +#endif
> > +
> > + now = u64_u32_load(rq->clock_pelt_idle);
> > + now -= throttled;
> > +
> > + /* An update happened while computing lag */
> > + if (now < cfs_rq_last_update_time(cfs_rq))
> > + return;
> > +
> > + now += sched_clock_cpu(cpu_of(rq)) - u64_u32_load(rq->enter_idle);
> > +
> > + __update_load_avg_blocked_se(now, se);
> > +}
> > +#else
> > +static void migrate_se_pelt_lag(struct sched_entity *se) {}
> > +#endif
> > +
> > /**
> > * update_cfs_rq_load_avg - update the cfs_rq's load/util averages
> > * @now: current time, as per cfs_rq_clock_pelt()
> > @@ -4429,6 +4496,9 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > */
> > if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
> > update_min_vruntime(cfs_rq);
> > +
> > + if (cfs_rq->nr_running == 0)
> > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > }
> >
> > /*
> > @@ -6946,6 +7016,8 @@ static void detach_entity_cfs_rq(struct sched_entity *se);
> > */
> > static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > {
> > + struct sched_entity *se = &p->se;
> > +
> > /*
> > * As blocked tasks retain absolute vruntime the migration needs to
> > * deal with this by subtracting the old and adding the new
> > @@ -6953,7 +7025,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > * the task on the new runqueue.
> > */
> > if (READ_ONCE(p->__state) == TASK_WAKING) {
> > - struct sched_entity *se = &p->se;
> > struct cfs_rq *cfs_rq = cfs_rq_of(se);
> >
> > se->vruntime -= u64_u32_load(cfs_rq->min_vruntime);
> > @@ -6965,25 +7036,29 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
> > * rq->lock and can modify state directly.
> > */
> > lockdep_assert_rq_held(task_rq(p));
> > - detach_entity_cfs_rq(&p->se);
> > + detach_entity_cfs_rq(se);
> >
> > } else {
> > + remove_entity_load_avg(se);
> > +
> > /*
> > - * We are supposed to update the task to "current" time, then
> > - * its up to date and ready to go to new CPU/cfs_rq. But we
> > - * have difficulty in getting what current time is, so simply
> > - * throw away the out-of-date time. This will result in the
> > - * wakee task is less decayed, but giving the wakee more load
> > - * sounds not bad.
> > + * Here, the task's PELT values have been updated according to
> > + * the current rq's clock. But if that clock hasn't been
> > + * updated in a while, a substantial idle time will be missed,
> > + * leading to an inflation after wake-up on the new rq.
> > + *
> > + * Estimate the missing time from the cfs_rq last_update_time
> > + * and update sched_avg to improve the PELT continuity after
> > + * migration.
> > */
> > - remove_entity_load_avg(&p->se);
> > + migrate_se_pelt_lag(se);
> > }
> >
> > /* Tell new CPU we are migrated */
> > - p->se.avg.last_update_time = 0;
> > + se->avg.last_update_time = 0;
> >
> > /* We have migrated, no longer consider this task hot */
> > - p->se.exec_start = 0;
> > + se->exec_start = 0;
> >
> > update_scan_period(p, new_cpu);
> > }
> > @@ -8149,6 +8224,10 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
> > if (update_cfs_rq_load_avg(cfs_rq_clock_pelt(cfs_rq), cfs_rq)) {
> > update_tg_load_avg(cfs_rq);
> >
> > + /* sync clock_pelt_idle with last update */
> > + if (cfs_rq->nr_running == 0)
> > + update_idle_cfs_rq_clock_pelt(cfs_rq);
> > +
> > if (cfs_rq == &rq->cfs)
> > decayed = true;
> > }
> > diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
> > index 4ff2ed4f8fa1..4143c6dc64dc 100644
> > --- a/kernel/sched/pelt.h
> > +++ b/kernel/sched/pelt.h
> > @@ -37,6 +37,29 @@ update_irq_load_avg(struct rq *rq, u64 running)
> > }
> > #endif
> >
> > +static inline bool load_avg_is_decayed(struct sched_avg *sa)
> > +{
> > + if (sa->load_sum)
> > + return false;
> > +
> > + if (sa->util_sum)
> > + return false;
> > +
> > + if (sa->runnable_sum)
> > + return false;
> > +
> > + /*
> > + * _avg must be null when _sum are null because _avg = _sum / divider
> > + * Make sure that rounding and/or propagation of PELT values never
> > + * break this.
> > + */
> > + SCHED_WARN_ON(sa->load_avg ||
> > + sa->util_avg ||
> > + sa->runnable_avg);
> > +
> > + return true;
> > +}
> > +
> > #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)
> >
> > static inline u32 get_pelt_divider(struct sched_avg *avg)
> > @@ -61,6 +84,23 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> > WRITE_ONCE(avg->util_est.enqueued, enqueued);
> > }
> >
> > +static inline u64 rq_clock_pelt(struct rq *rq)
> > +{
> > + lockdep_assert_rq_held(rq);
> > + assert_clock_updated(rq);
> > +
> > + return rq->clock_pelt - rq->lost_idle_time;
> > +}
> > +
> > +/* The rq is idle, we can sync to clock_task */
> > +static inline void _update_idle_rq_clock_pelt(struct rq *rq)
> > +{
> > + rq->clock_pelt = rq_clock_task(rq);
> > +
> > + u64_u32_store(rq->enter_idle, rq_clock(rq));
> > + u64_u32_store(rq->clock_pelt_idle, rq_clock_pelt(rq));
> > +}
> > +
> > /*
> > * The clock_pelt scales the time to reflect the effective amount of
> > * computation done during the running delta time but then sync back to
> > @@ -76,8 +116,7 @@ static inline void cfs_se_util_change(struct sched_avg *avg)
> > static inline void update_rq_clock_pelt(struct rq *rq, s64 delta)
> > {
> > if (unlikely(is_idle_task(rq->curr))) {
> > - /* The rq is idle, we can sync to clock_task */
> > - rq->clock_pelt = rq_clock_task(rq);
> > + _update_idle_rq_clock_pelt(rq);
> > return;
> > }
> >
> > @@ -130,17 +169,26 @@ static inline void update_idle_rq_clock_pelt(struct rq *rq)
> > */
> > if (util_sum >= divider)
> > rq->lost_idle_time += rq_clock_task(rq) - rq->clock_pelt;
> > +
> > + _update_idle_rq_clock_pelt(rq);
> > }
> >
> > -static inline u64 rq_clock_pelt(struct rq *rq)
> > +#ifdef CONFIG_CFS_BANDWIDTH
> > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > {
> > - lockdep_assert_rq_held(rq);
> > - assert_clock_updated(rq);
> > -
> > - return rq->clock_pelt - rq->lost_idle_time;
> > + /*
> > + * Make sure that pending update of rq->clock_pelt_idle and
> > + * rq->enter_idle are visible during update_blocked_average() before
> > + * updating cfs_rq->throttled_pelt_idle.
> > + */
> > + smp_wmb();
>
> There is no smp_rmb() on the read side; I think it belongs in
> migrate_se_pelt_lag(), before this line:
> 'now = u64_u32_load(rq->clock_pelt_idle);'
> The load of cfs_rq->throttled_pelt_idle must complete before the loads
> of rq->clock_pelt_idle and rq->enter_idle, to pair with the write side
> in update_idle_cfs_rq_clock_pelt().

Yes
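
i.e. something like the below on the read side (untested sketch, field
names as in the patch above):

	throttled = u64_u32_load(cfs_rq->throttled_pelt_idle);
	/* The clock has been stopped for throttling */
	if (throttled == U64_MAX)
		return;

	/* Pairs with the smp_wmb() in update_idle_cfs_rq_clock_pelt() */
	smp_rmb();

	now = u64_u32_load(rq->clock_pelt_idle);
	now -= throttled;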

>
> > + if (unlikely(cfs_rq->throttle_count))
> > + u64_u32_store(cfs_rq->throttled_pelt_idle, U64_MAX);
> > + else
> > + u64_u32_store(cfs_rq->throttled_pelt_idle,
> > + cfs_rq->throttled_clock_pelt_time);
> > }
> >
> > -#ifdef CONFIG_CFS_BANDWIDTH
> > /* rq->task_clock normalized against any time this cfs_rq has spent throttled */
> > static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > {
> > @@ -150,6 +198,7 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
> > }
> > #else
> > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> > static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
> > {
> > return rq_clock_pelt(rq_of(cfs_rq));
> > @@ -204,6 +253,7 @@ update_rq_clock_pelt(struct rq *rq, s64 delta) { }
> > static inline void
> > update_idle_rq_clock_pelt(struct rq *rq) { }
> >
> > +static inline void update_idle_cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { }
> > #endif
> >
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index e2cf6e48b165..ea9365e1a24e 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -641,6 +641,10 @@ struct cfs_rq {
> > int runtime_enabled;
> > s64 runtime_remaining;
> >
> > + u64 throttled_pelt_idle;
> > +#ifndef CONFIG_64BIT
> > + u64 throttled_pelt_idle_copy;
> > +#endif
> > u64 throttled_clock;
> > u64 throttled_clock_pelt;
> > u64 throttled_clock_pelt_time;
> > @@ -1013,6 +1017,12 @@ struct rq {
> > u64 clock_task ____cacheline_aligned;
> > u64 clock_pelt;
> > unsigned long lost_idle_time;
> > + u64 clock_pelt_idle;
> > + u64 enter_idle;
> > +#ifndef CONFIG_64BIT
> > + u64 clock_pelt_idle_copy;
> > + u64 enter_idle_copy;
> > +#endif
> >
> > atomic_t nr_iowait;
> >
> > --
> > 2.17.1