2009-04-15 02:53:18

by Miao Xie

Subject: [PATCH] sched: fix off-by-one bug in balance_tasks()

If the load that needs to be moved equals half the weight of a task, I
think it is unnecessary to move that task; otherwise the task will just
be bounced back and forth.

Signed-off-by: Miao Xie <[email protected]>
---
kernel/sched.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 5724508..44926c8 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3085,7 +3085,7 @@ next:
if (!p || loops++ > sysctl_sched_nr_migrate)
goto out;

- if ((p->se.load.weight >> 1) > rem_load_move ||
+ if ((p->se.load.weight >> 1) >= rem_load_move ||
!can_migrate_task(p, busiest, this_cpu, sd, idle, &pinned)) {
p = iterator->next(iterator->arg);
goto next;
--
1.6.0.3
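
A minimal user-space sketch of the check being changed, assuming a nice-0
task weight of 1024 (NICE_0_LOAD in kernels of this era); the values and the
program are illustrative only, not kernel code.

#include <stdio.h>

int main(void)
{
	unsigned long weight = 1024;       /* stands in for p->se.load.weight */
	unsigned long rem_load_move = 512; /* exactly half the task's weight  */

	/* Current check: skip the task only if half its weight exceeds the
	 * remaining load to move -- here it does not, so the task migrates. */
	printf("'>'  : %s\n", ((weight >> 1) >  rem_load_move) ? "skip" : "migrate");

	/* Proposed check: also skip when the two are equal -- here they are,
	 * so the task stays put. */
	printf("'>=' : %s\n", ((weight >> 1) >= rem_load_move) ? "skip" : "migrate");
	return 0;
}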


2009-04-15 11:42:19

by Peter Zijlstra

Subject: Re: [PATCH] sched: fix off-by-one bug in balance_tasks()

On Wed, 2009-04-15 at 10:49 +0800, Miao Xie wrote:
> If the load that needs to be moved equals half the weight of a task, I
> think it is unnecessary to move that task; otherwise the task will just
> be bounced back and forth.

That's actually desirable. Consider the statically infeasible scenario of
3 tasks on 2 CPUs. There you'd want the tasks to bounce around a bit in
order to provide fairness.
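
A back-of-the-envelope illustration of that case, again assuming each task
carries the nice-0 weight of 1024 (an assumption, not stated in the thread):
with 3 equal tasks on 2 CPUs the residual imbalance works out to exactly half
a task's weight, which is precisely the boundary the patch would change.
Illustrative only, not kernel code.

#include <stdio.h>

int main(void)
{
	unsigned long w = 1024;                         /* weight of each task      */
	unsigned long busiest = 2 * w;                  /* CPU running two tasks    */
	unsigned long other   = 1 * w;                  /* CPU running one task     */
	unsigned long target  = (busiest + other) / 2;  /* ideal per-CPU load: 1536 */
	unsigned long rem_load_move = busiest - target; /* 512 == w / 2             */

	/*
	 * (w >> 1) >  rem_load_move is false -> the task may migrate, and the
	 * resulting "bouncing" gives each of the three tasks roughly 2/3 of a CPU.
	 * (w >> 1) >= rem_load_move is true  -> the task never migrates, so the
	 * two tasks sharing a CPU get 50% each while the third gets 100%.
	 */
	printf("rem_load_move=%lu half_weight=%lu: '>' says %s, '>=' says %s\n",
	       rem_load_move, w >> 1,
	       ((w >> 1) >  rem_load_move) ? "skip" : "migrate",
	       ((w >> 1) >= rem_load_move) ? "skip" : "migrate");
	return 0;
}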

> Signed-off-by: Miao Xie <[email protected]>
> ---
> kernel/sched.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 5724508..44926c8 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -3085,7 +3085,7 @@ next:
> if (!p || loops++ > sysctl_sched_nr_migrate)
> goto out;
>
> - if ((p->se.load.weight >> 1) > rem_load_move ||
> + if ((p->se.load.weight >> 1) >= rem_load_move ||
> !can_migrate_task(p, busiest, this_cpu, sd, idle, &pinned)) {
> p = iterator->next(iterator->arg);
> goto next;

2009-04-16 01:27:18

by Miao Xie

Subject: Re: [PATCH] sched: fix off-by-one bug in balance_tasks()

on 2009-4-15 19:41 Peter Zijlstra wrote:
> On Wed, 2009-04-15 at 10:49 +0800, Miao Xie wrote:
>> If the load that needs to be moved equals half the weight of a task, I
>> think it is unnecessary to move that task; otherwise the task will just
>> be bounced back and forth.
>
> That's actually desirable. Consider the 3 tasks on 2 cpus statically
> infeasible scenario. There you'd want the tasks to bounce around a bit
> in order to provide fairness.

I see. Thanks for your explanation.

Miao Xie
>
>> Signed-off-by: Miao Xie <[email protected]>
>> ---
>> kernel/sched.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/kernel/sched.c b/kernel/sched.c
>> index 5724508..44926c8 100644
>> --- a/kernel/sched.c
>> +++ b/kernel/sched.c
>> @@ -3085,7 +3085,7 @@ next:
>> if (!p || loops++ > sysctl_sched_nr_migrate)
>> goto out;
>>
>> - if ((p->se.load.weight >> 1) > rem_load_move ||
>> + if ((p->se.load.weight >> 1) >= rem_load_move ||
>> !can_migrate_task(p, busiest, this_cpu, sd, idle, &pinned)) {
>> p = iterator->next(iterator->arg);
>> goto next;
>