The child has the same decay_count as the parent. If it is not zero,
it is added to the parent's cfs_rq->removed_load via:

wake_up_new_task()->set_task_cpu()->migrate_task_rq_fair().

The child's load is just garbage after being copied from the parent;
the child has not been on a cfs_rq yet, so it must not be added to
cfs_rq::removed_load in migrate_task_rq_fair().

This patch moves the sched_entity::avg::decay_count initialization
to __sched_fork(), so migrate_task_rq_fair() no longer changes
removed_load.
Signed-off-by: Kirill Tkhai <[email protected]>
---
kernel/sched/core.c | 1 +
kernel/sched/fair.c | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bb398c0..2894b69 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1832,6 +1832,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
p->se.prev_sum_exec_runtime = 0;
p->se.nr_migrations = 0;
p->se.vruntime = 0;
+ p->se.avg.decay_count = 0;
 	INIT_LIST_HEAD(&p->se.group_node);

 #ifdef CONFIG_SCHEDSTATS
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index df2cdf7..5f3b5a7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -676,7 +676,6 @@ void init_task_runnable_average(struct task_struct *p)
{
 	u32 slice;

-	p->se.avg.decay_count = 0;
slice = sched_slice(task_cfs_rq(p), &p->se) >> 10;
p->se.avg.runnable_avg_sum = slice;
p->se.avg.runnable_avg_period = slice;
Kirill Tkhai <[email protected]> writes:
Reviewed-by: Ben Segall <[email protected]>