2022-07-13 04:20:20

by Chengming Zhou

Subject: [PATCH v2 05/10] sched/fair: reset sched_avg last_update_time before set_task_rq()

set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
task's sched_avg when it migrates, which is not needed for an already
detached task.

task_change_group_fair() detaches the task's sched_avg from the prev cfs_rq
first, so reset sched_avg last_update_time before set_task_rq() to avoid that.

Signed-off-by: Chengming Zhou <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8992ce5e73d2..171bc22bc142 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11637,12 +11637,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
static void task_change_group_fair(struct task_struct *p)
{
detach_task_cfs_rq(p);
- set_task_rq(p, task_cpu(p));

#ifdef CONFIG_SMP
/* Tell se's cfs_rq has been changed -- migrated */
p->se.avg.last_update_time = 0;
#endif
+ set_task_rq(p, task_cpu(p));
attach_task_cfs_rq(p);
}

--
2.36.1


2022-07-14 12:34:29

by Dietmar Eggemann

Subject: Re: [PATCH v2 05/10] sched/fair: reset sched_avg last_update_time before set_task_rq()

On 13/07/2022 06:04, Chengming Zhou wrote:
> set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
> task's sched_avg when it migrates, which is not needed for an already
> detached task.
>
> task_change_group_fair() detaches the task's sched_avg from the prev cfs_rq
> first, so reset sched_avg last_update_time before set_task_rq() to avoid that.
>
> Signed-off-by: Chengming Zhou <[email protected]>

Reviewed-by: Dietmar Eggemann <[email protected]>

> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8992ce5e73d2..171bc22bc142 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11637,12 +11637,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
> static void task_change_group_fair(struct task_struct *p)
> {
> detach_task_cfs_rq(p);
> - set_task_rq(p, task_cpu(p));
>
> #ifdef CONFIG_SMP
> /* Tell se's cfs_rq has been changed -- migrated */
> p->se.avg.last_update_time = 0;
> #endif
> + set_task_rq(p, task_cpu(p));
> attach_task_cfs_rq(p);
> }
>

2022-07-19 09:19:05

by Vincent Guittot

Subject: Re: [PATCH v2 05/10] sched/fair: reset sched_avg last_update_time before set_task_rq()

On Wed, 13 Jul 2022 at 06:05, Chengming Zhou
<[email protected]> wrote:
>
> set_task_rq() -> set_task_rq_fair() will try to synchronize the blocked
> task's sched_avg when it migrates, which is not needed for an already
> detached task.
>
> task_change_group_fair() detaches the task's sched_avg from the prev cfs_rq
> first, so reset sched_avg last_update_time before set_task_rq() to avoid that.
>
> Signed-off-by: Chengming Zhou <[email protected]>

Reviewed-by: Vincent Guittot <[email protected]>

> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8992ce5e73d2..171bc22bc142 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11637,12 +11637,12 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
> static void task_change_group_fair(struct task_struct *p)
> {
> detach_task_cfs_rq(p);
> - set_task_rq(p, task_cpu(p));
>
> #ifdef CONFIG_SMP
> /* Tell se's cfs_rq has been changed -- migrated */
> p->se.avg.last_update_time = 0;
> #endif
> + set_task_rq(p, task_cpu(p));
> attach_task_cfs_rq(p);
> }
>
> --
> 2.36.1
>