2021-09-30 07:09:39

by Li RongQing

Subject: [PATCH] sched/fair: Drop the redundant setting of recent_used_cpu

p->recent_used_cpu has already been set to prev before this check, so
setting it again inside the branch is redundant.

Signed-off-by: Li RongQing <[email protected]>
---
kernel/sched/fair.c | 8 +-------
1 files changed, 1 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7b9fe8c..ec42eaa 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6437,14 +6437,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
cpus_share_cache(recent_used_cpu, target) &&
(available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
- asym_fits_capacity(task_util, recent_used_cpu)) {
- /*
- * Replace recent_used_cpu with prev as it is a potential
- * candidate for the next wake:
- */
- p->recent_used_cpu = prev;
+ asym_fits_capacity(task_util, recent_used_cpu))
return recent_used_cpu;
- }

/*
* For asymmetric CPU capacity systems, our domain of interest is
--
1.7.1


2021-09-30 12:09:44

by Dietmar Eggemann

Subject: Re: [PATCH] sched/fair: Drop the redundant setting of recent_used_cpu

On 30/09/2021 08:59, Li RongQing wrote:
> p->recent_used_cpu has already been set to prev before this check, so
> setting it again inside the branch is redundant.
>
> Signed-off-by: Li RongQing <[email protected]>
> ---
> kernel/sched/fair.c | 8 +-------
> 1 files changed, 1 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7b9fe8c..ec42eaa 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6437,14 +6437,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> cpus_share_cache(recent_used_cpu, target) &&
> (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
> cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
> - asym_fits_capacity(task_util, recent_used_cpu)) {
> - /*
> - * Replace recent_used_cpu with prev as it is a potential
> - * candidate for the next wake:
> - */
> - p->recent_used_cpu = prev;
> + asym_fits_capacity(task_util, recent_used_cpu))
> return recent_used_cpu;
> - }
>
> /*
> * For asymmetric CPU capacity systems, our domain of interest is
>

Looks like this has already been fixed in:

https://lore.kernel.org/r/[email protected]