2014-10-28 03:55:42

by Yao Dongdong

Subject: [PATCH resend] sched: Check if we got a shallowest_idle_cpu before searching for least_loaded_cpu

An idle CPU is always a better choice than a non-idle one, so there is no need
to keep searching for the least_loaded_cpu once an idle CPU has been found.

Signed-off-by: Yao Dongdong <[email protected]>
Reviewed-by: Srikar Dronamraju <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b069bf..2445a23 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4446,7 +4446,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
latest_idle_timestamp = rq->idle_stamp;
shallowest_idle_cpu = i;
}
- } else {
+ } else if (shallowest_idle_cpu == -1) {
load = weighted_cpuload(i);
if (load < min_load || (load == min_load && i == this_cpu)) {
min_load = load;
--
1.8.3.4
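For context, the logic this one-line change affects can be modeled as a standalone sketch. The program below is a simplified, hypothetical re-implementation of the find_idlest_cpu() loop, not the kernel code itself: `struct cpu_stat` and its fields stand in for the per-CPU runqueue and cpuidle state the real function reads, and the numbers are illustrative. The key point is the patched `else if (shallowest_idle_cpu == -1)` branch, which skips the load bookkeeping entirely once any idle CPU has been seen.

```c
#include <assert.h>
#include <limits.h>

/* Hypothetical model of the find_idlest_cpu() selection loop,
 * simplified from kernel/sched/fair.c. Field names are illustrative,
 * not kernel APIs. */
struct cpu_stat {
	int idle;                 /* 1 if the CPU is idle */
	unsigned long exit_latency;   /* idle-state exit latency (lower = shallower) */
	unsigned long idle_stamp;     /* when the CPU went idle (later = fresher) */
	unsigned long load;           /* weighted load if busy */
};

static int find_idlest_cpu(const struct cpu_stat *cpu, int ncpus, int this_cpu)
{
	unsigned long min_exit_latency = ULONG_MAX;
	unsigned long latest_idle_timestamp = 0;
	unsigned long min_load = ULONG_MAX;
	int least_loaded_cpu = this_cpu;
	int shallowest_idle_cpu = -1;
	int i;

	for (i = 0; i < ncpus; i++) {
		if (cpu[i].idle) {
			/* Prefer the idle CPU in the shallowest idle state;
			 * break ties by the most recent idle timestamp. */
			if (cpu[i].exit_latency < min_exit_latency) {
				min_exit_latency = cpu[i].exit_latency;
				latest_idle_timestamp = cpu[i].idle_stamp;
				shallowest_idle_cpu = i;
			} else if (cpu[i].exit_latency == min_exit_latency &&
				   cpu[i].idle_stamp > latest_idle_timestamp) {
				latest_idle_timestamp = cpu[i].idle_stamp;
				shallowest_idle_cpu = i;
			}
		} else if (shallowest_idle_cpu == -1) {
			/* The patched condition: only track the least loaded
			 * busy CPU while no idle CPU has been found yet. */
			if (cpu[i].load < min_load ||
			    (cpu[i].load == min_load && i == this_cpu)) {
				min_load = cpu[i].load;
				least_loaded_cpu = i;
			}
		}
	}

	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
}
```

The result is unchanged either way, since an idle CPU always wins at the end; the patch merely avoids the pointless weighted_cpuload() work on busy CPUs scanned after the first idle one.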


Subject: [tip:sched/core] sched: Check if we got a shallowest_idle_cpu before searching for least_loaded_cpu

Commit-ID: 9f96742a13135e6c609cc99a3a458402af3c8f31
Gitweb: http://git.kernel.org/tip/9f96742a13135e6c609cc99a3a458402af3c8f31
Author: Yao Dongdong <[email protected]>
AuthorDate: Tue, 28 Oct 2014 04:08:06 +0000
Committer: Ingo Molnar <[email protected]>
CommitDate: Tue, 4 Nov 2014 07:17:51 +0100

sched: Check if we got a shallowest_idle_cpu before searching for least_loaded_cpu

An idle CPU is always a better choice than a non-idle one, so there is no need
to keep searching for the least_loaded_cpu once an idle CPU has been found.

Signed-off-by: Yao Dongdong <[email protected]>
Reviewed-by: Srikar Dronamraju <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ec32c26d..d03d76d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4641,7 +4641,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
latest_idle_timestamp = rq->idle_stamp;
shallowest_idle_cpu = i;
}
- } else {
+ } else if (shallowest_idle_cpu == -1) {
load = weighted_cpuload(i);
if (load < min_load || (load == min_load && i == this_cpu)) {
min_load = load;