2013-06-14 06:18:32

by Lei Wen

Subject: [PATCH 0/2] small fix for fix_small_imbalance

Here are two patches which correct the scale usage in fix_small_imbalance,
and add a comment on when fix_small_imbalance would cause a load change.

Lei Wen (2):
sched: reduce calculation effort in fix_small_imbalance
sched: scale the busy and this queue's per-task load before compare

kernel/sched/fair.c | 37 ++++++++++++++++++++++---------------
1 file changed, 22 insertions(+), 15 deletions(-)

--
1.7.10.4


2013-06-14 06:15:50

by Lei Wen

Subject: [PATCH 2/2] sched: scale the busy and this queue's per-task load before compare

Since max_load and this_load are values that have already been scaled,
it is not reasonable to take the minimum of a scaled and a non-scaled
value, as in the example below:
min(sds->busiest_load_per_task, sds->max_load);

Also add a comment on under what condition there would be a cpu power
gain in moving the load.
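
To make the unit mismatch concrete, here is a small standalone sketch
(not kernel code; SCHED_POWER_SCALE matches the kernel constant, but the
numbers and the helper name are made up) showing how the min() result
changes once the per-task load is scaled into the same domain as
max_load:

#include <stdio.h>

#define SCHED_POWER_SCALE 1024UL	/* unit max_load/this_load are expressed in */

/*
 * Scale a raw per-task load into the SCHED_POWER_SCALE domain that
 * max_load/this_load already live in, given the group's cpu power.
 */
static unsigned long scale_per_task_load(unsigned long load_per_task,
					 unsigned long sgp_power)
{
	return load_per_task * SCHED_POWER_SCALE / sgp_power;
}

int main(void)
{
	/* hypothetical figures: the busiest group has twice the default power */
	unsigned long busiest_load_per_task = 512;
	unsigned long busiest_power = 2 * SCHED_POWER_SCALE;	/* 2048 */
	unsigned long max_load = 400;				/* already scaled */

	unsigned long scaled = scale_per_task_load(busiest_load_per_task,
						   busiest_power);	/* 256 */

	/* mixed-unit min() picks 400; comparing in the same unit picks 256 */
	printf("min(raw, max_load)    = %lu\n",
	       busiest_load_per_task < max_load ? busiest_load_per_task : max_load);
	printf("min(scaled, max_load) = %lu\n",
	       scaled < max_load ? scaled : max_load);
	return 0;
}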

Signed-off-by: Lei Wen <[email protected]>
---
kernel/sched/fair.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 28052fa..77a149c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4692,7 +4692,7 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
{
unsigned long tmp, pwr_now = 0, pwr_move = 0;
unsigned int imbn = 2;
- unsigned long scaled_busy_load_per_task;
+ unsigned long scaled_busy_load_per_task, scaled_this_load_per_task;

if (sds->this_nr_running) {
sds->this_load_per_task /= sds->this_nr_running;
@@ -4714,6 +4714,9 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
return;
}

+ scaled_this_load_per_task = sds->this_load_per_task
+ * SCHED_POWER_SCALE;
+ scaled_this_load_per_task /= sds->this->sgp->power;
/*
* OK, we don't have enough imbalance to justify moving tasks,
* however we may be able to increase total CPU power used by
@@ -4721,28 +4724,35 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
*/

pwr_now += sds->busiest->sgp->power *
- min(sds->busiest_load_per_task, sds->max_load);
+ min(scaled_busy_load_per_task, sds->max_load);
pwr_now += sds->this->sgp->power *
- min(sds->this_load_per_task, sds->this_load);
+ min(scaled_this_load_per_task, sds->this_load);
pwr_now /= SCHED_POWER_SCALE;

/* Amount of load we'd subtract */
if (sds->max_load > scaled_busy_load_per_task) {
pwr_move += sds->busiest->sgp->power *
- min(sds->busiest_load_per_task,
+ min(scaled_busy_load_per_task,
sds->max_load - scaled_busy_load_per_task);
- tmp = (sds->busiest_load_per_task * SCHED_POWER_SCALE) /
- sds->this->sgp->power;
+ tmp = scaled_busy_load_per_task;
} else
- tmp = (sds->max_load * sds->busiest->sgp->power) /
- sds->this->sgp->power;
+ tmp = sds->max_load;

+ /* Scale to this queue from busiest queue */
+ tmp = (tmp * sds->busiest->sgp->power) /
+ sds->this->sgp->power;
/* Amount of load we'd add */
pwr_move += sds->this->sgp->power *
- min(sds->this_load_per_task, sds->this_load + tmp);
+ min(scaled_this_load_per_task, sds->this_load + tmp);
pwr_move /= SCHED_POWER_SCALE;

/* Move if we gain throughput */
+ /*
+ * The only possibility for the statement below to be true is:
+ * sds->max_load is larger than scaled_busy_load_per_task, while
+ * scaled_this_load_per_task is larger than sds->this_load plus the
+ * rescaled scaled_busy_load_per_task moved into this queue.
+ */
if (pwr_move > pwr_now)
env->imbalance = sds->busiest_load_per_task;
}
--
1.7.10.4

2013-06-14 06:19:03

by Lei Wen

Subject: [PATCH 1/2] sched: reduce calculation effort in fix_small_imbalance

Actually, all of the expressions below could be replaced by
scaled_busy_load_per_task:
(sds->busiest_load_per_task * SCHED_POWER_SCALE) /
sds->busiest->sgp->power;
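
For reference (ignoring integer truncation, and using the same field
names as the patch), the old condition guarding the "Amount of load
we'd add" branch,

sds->max_load * sds->busiest->sgp->power <
	sds->busiest_load_per_task * SCHED_POWER_SCALE

is, after dividing both sides by sds->busiest->sgp->power, the same
test as

sds->max_load < (sds->busiest_load_per_task * SCHED_POWER_SCALE) /
	sds->busiest->sgp->power

i.e. sds->max_load < scaled_busy_load_per_task, which is why both the
subtract path and the add path can key off the single precomputed
scaled_busy_load_per_task.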

Signed-off-by: Lei Wen <[email protected]>
---
kernel/sched/fair.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..28052fa 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4727,20 +4727,17 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
pwr_now /= SCHED_POWER_SCALE;

/* Amount of load we'd subtract */
- tmp = (sds->busiest_load_per_task * SCHED_POWER_SCALE) /
- sds->busiest->sgp->power;
- if (sds->max_load > tmp)
+ if (sds->max_load > scaled_busy_load_per_task) {
pwr_move += sds->busiest->sgp->power *
- min(sds->busiest_load_per_task, sds->max_load - tmp);
-
- /* Amount of load we'd add */
- if (sds->max_load * sds->busiest->sgp->power <
- sds->busiest_load_per_task * SCHED_POWER_SCALE)
- tmp = (sds->max_load * sds->busiest->sgp->power) /
- sds->this->sgp->power;
- else
+ min(sds->busiest_load_per_task,
+ sds->max_load - scaled_busy_load_per_task);
tmp = (sds->busiest_load_per_task * SCHED_POWER_SCALE) /
sds->this->sgp->power;
+ } else
+ tmp = (sds->max_load * sds->busiest->sgp->power) /
+ sds->this->sgp->power;
+
+ /* Amount of load we'd add */
pwr_move += sds->this->sgp->power *
min(sds->this_load_per_task, sds->this_load + tmp);
pwr_move /= SCHED_POWER_SCALE;
--
1.7.10.4