2004-10-20 19:37:36

by John Hawkes

Subject: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

A large number of processes that are pinned to a single CPU results in
every other CPU's load_balance() seeing this overloaded CPU as "busiest",
yet move_tasks() never finds a task to pull-migrate. This condition
occurs during module unload, but can also occur as a denial-of-service
using sys_sched_setaffinity(). Several hundred CPUs performing this
fruitless load_balance() will livelock on the busiest CPU's runqueue
lock. A smaller number of CPUs will livelock if the pinned task count
gets high. This simple patch remedies the more common first problem:
after a move_tasks() failure to migrate anything, the balance_interval
increments. Using a simple increment, vs. the more dramatic doubling of
the balance_interval, is conservative and yet also effective.
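
For reference, a minimal stand-alone reproducer of the scenario described
above might look like the sketch below. It is an illustration only (not part
of the patch) and assumes the glibc sched_setaffinity()/cpu_set_t interface;
the process count is arbitrary.

/*
 * Reproducer sketch (illustration only, not part of the patch): fork a few
 * hundred busy-loop children and pin each one to CPU 0 with
 * sched_setaffinity(), roughly the scenario described above.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int i, nprocs = (argc > 1) ? atoi(argv[1]) : 200;
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);		/* everything lands on CPU 0 */

	for (i = 0; i < nprocs; i++) {
		if (fork() == 0) {
			if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
				perror("sched_setaffinity");
				_exit(1);
			}
			for (;;)	/* small compute-bound loop */
				;
		}
	}
	pause();			/* parent just waits */
	return 0;
}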

John Hawkes


Signed-off-by: John Hawkes <[email protected]>




Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c 2004-10-19 15:04:11.000000000 -0700
+++ linux/kernel/sched.c 2004-10-19 15:09:50.000000000 -0700
@@ -2123,11 +2123,19 @@
*/
sd->nr_balance_failed = sd->cache_nice_tries;
}
- } else
- sd->nr_balance_failed = 0;

- /* We were unbalanced, so reset the balancing interval */
- sd->balance_interval = sd->min_interval;
+ /*
+ * We were unbalanced, but unsuccessful in move_tasks(),
+ * so bump the balance_interval to lessen the lock contention.
+ */
+ if (sd->balance_interval < sd->max_interval)
+ sd->balance_interval++;
+ } else {
+ sd->nr_balance_failed = 0;
+
+ /* We were unbalanced, so reset the balancing interval */
+ sd->balance_interval = sd->min_interval;
+ }

return nr_moved;


2004-10-20 20:08:00

by Ingo Molnar

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks


* John Hawkes <[email protected]> wrote:

> A large number of processes that are pinned to a single CPU results in
> every other CPU's load_balance() seeing this overloaded CPU as "busiest",
> yet move_tasks() never finds a task to pull-migrate. This condition
> occurs during module unload, but can also occur as a denial-of-service
> using sys_sched_setaffinity(). Several hundred CPUs performing this
> fruitless load_balance() will livelock on the busiest CPU's runqueue
> lock. A smaller number of CPUs will livelock if the pinned task count
> gets high. This simple patch remedies the more common first problem:
> after a move_tasks() failure to migrate anything, the balance_interval
> increments. Using a simple increment, vs. the more dramatic doubling of
> the balance_interval, is conservative and yet also effective.
>
> John Hawkes
>
>
> Signed-off-by: John Hawkes <[email protected]>

looks good to me.

Acked-by: Ingo Molnar <[email protected]>

Ingo

2004-10-22 13:15:23

by Nick Piggin

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

John Hawkes wrote:
> A large number of processes that are pinned to a single CPU results in
> every other CPU's load_balance() seeing this overloaded CPU as "busiest",
> yet move_tasks() never finds a task to pull-migrate. This condition
> occurs during module unload, but can also occur as a denial-of-service
> using sys_sched_setaffinity(). Several hundred CPUs performing this
> fruitless load_balance() will livelock on the busiest CPU's runqueue
> lock. A smaller number of CPUs will livelock if the pinned task count
> gets high. This simple patch remedies the more common first problem:
> after a move_tasks() failure to migrate anything, the balance_interval
> increments. Using a simple increment, vs. the more dramatic doubling of
> the balance_interval, is conservative and yet also effective.
>
> John Hawkes
>

John and I also discussed a slightly more sophisticated algorithm
for this that would try to be a bit smarter about the reason for
balance failures (described in the changelog).

John, this works quite well here; would you have any objections to
using it instead? Does it cure your problems there?


Attachments:
sched-lb-pinned.patch (4.80 kB)

2004-10-22 19:24:31

by John Hawkes

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

Nick, your patch doesn't solve the problem on my 128p.
This variation, however, does work. It's a patch against 2.6.9.
The difference is in move_tasks().






linux-2.6-npiggin/kernel/sched.c | 34 +++++++++++++++++++++++-----------
1 files changed, 23 insertions(+), 11 deletions(-)

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c 2004-10-22 09:11:12.000000000 -0700
+++ linux/kernel/sched.c 2004-10-22 11:45:10.000000000 -0700
@@ -1770,7 +1770,7 @@
*/
static inline
int can_migrate_task(task_t *p, runqueue_t *rq, int this_cpu,
- struct sched_domain *sd, enum idle_type idle)
+ struct sched_domain *sd, enum idle_type idle, int *pinned)
{
/*
* We do not migrate tasks that are:
@@ -1780,8 +1780,10 @@
*/
if (task_running(rq, p))
return 0;
- if (!cpu_isset(this_cpu, p->cpus_allowed))
+ if (!cpu_isset(this_cpu, p->cpus_allowed)) {
+ (*pinned)++;
return 0;
+ }

/* Aggressive migration if we've failed balancing */
if (idle == NEWLY_IDLE ||
@@ -1802,11 +1804,11 @@
*/
static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,
unsigned long max_nr_move, struct sched_domain *sd,
- enum idle_type idle)
+ enum idle_type idle, int *all_pinned)
{
prio_array_t *array, *dst_array;
struct list_head *head, *curr;
- int idx, pulled = 0;
+ int idx, examined = 0, pulled = 0, pinned = 0;
task_t *tmp;

if (max_nr_move <= 0 || busiest->nr_running <= 1)
@@ -1850,7 +1852,8 @@

curr = curr->prev;

- if (!can_migrate_task(tmp, busiest, this_cpu, sd, idle)) {
+ examined++;
+ if (!can_migrate_task(tmp, busiest, this_cpu, sd, idle, &pinned)) {
if (curr != head)
goto skip_queue;
idx++;
@@ -1876,6 +1879,8 @@
goto skip_bitmap;
}
out:
+ if (unlikely(examined && examined == pinned))
+ *all_pinned = 1;
return pulled;
}

@@ -2056,7 +2061,7 @@
struct sched_group *group;
runqueue_t *busiest;
unsigned long imbalance;
- int nr_moved;
+ int nr_moved, all_pinned = 0;

spin_lock(&this_rq->lock);
schedstat_inc(sd, lb_cnt[idle]);
@@ -2095,11 +2100,16 @@
*/
double_lock_balance(this_rq, busiest);
nr_moved = move_tasks(this_rq, this_cpu, busiest,
- imbalance, sd, idle);
+ imbalance, sd, idle,
+ &all_pinned);
spin_unlock(&busiest->lock);
}
- spin_unlock(&this_rq->lock);
+ /* All tasks on this runqueue were pinned by CPU affinity */
+ if (unlikely(all_pinned))
+ goto out_balanced;

+ spin_unlock(&this_rq->lock);
+
if (!nr_moved) {
schedstat_inc(sd, lb_failed[idle]);
sd->nr_balance_failed++;
@@ -2154,7 +2164,7 @@
struct sched_group *group;
runqueue_t *busiest = NULL;
unsigned long imbalance;
- int nr_moved = 0;
+ int nr_moved = 0, all_pinned;

schedstat_inc(sd, lb_cnt[NEWLY_IDLE]);
group = find_busiest_group(sd, this_cpu, &imbalance, NEWLY_IDLE);
@@ -2174,7 +2184,7 @@

schedstat_add(sd, lb_imbalance[NEWLY_IDLE], imbalance);
nr_moved = move_tasks(this_rq, this_cpu, busiest,
- imbalance, sd, NEWLY_IDLE);
+ imbalance, sd, NEWLY_IDLE, &all_pinned);
if (!nr_moved)
schedstat_inc(sd, lb_failed[NEWLY_IDLE]);

@@ -2236,6 +2246,7 @@
cpumask_t tmp;
runqueue_t *rq;
int push_cpu = 0;
+ int all_pinned;

if (group == busy_group)
goto next_group;
@@ -2261,7 +2272,8 @@
if (unlikely(busiest == rq))
goto next_group;
double_lock_balance(busiest, rq);
- if (move_tasks(rq, push_cpu, busiest, 1, sd, IDLE)) {
+ if (move_tasks(rq, push_cpu, busiest, 1,
+ sd, IDLE, &all_pinned)) {
schedstat_inc(busiest, alb_lost);
schedstat_inc(rq, alb_gained);
} else {

2004-10-22 19:44:30

by John Hawkes

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

> John and I also discussed a slightly more sophisticated algorithm
> for this that would try to be a bit smarter about the reason for
> balance failures (described in the changelog).
>
> John, this works quite well here, would you have any objections
> using it instead? Does it cure your problems there?

No, your variation doesn't solve the problem. This variation of your
patch does, however, solve the problem. The difference is in
move_tasks():

Signed-off-by: John Hawkes <[email protected]>


linux-2.6-npiggin/kernel/sched.c | 34 +++++++++++++++++++++++-----------
1 files changed, 23 insertions(+), 11 deletions(-)

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c 2004-10-22 09:11:12.000000000 -0700
+++ linux/kernel/sched.c 2004-10-22 11:45:10.000000000 -0700
@@ -1770,7 +1770,7 @@
*/
static inline
int can_migrate_task(task_t *p, runqueue_t *rq, int this_cpu,
- struct sched_domain *sd, enum idle_type idle)
+ struct sched_domain *sd, enum idle_type idle, int *pinned)
{
/*
* We do not migrate tasks that are:
@@ -1780,8 +1780,10 @@
*/
if (task_running(rq, p))
return 0;
- if (!cpu_isset(this_cpu, p->cpus_allowed))
+ if (!cpu_isset(this_cpu, p->cpus_allowed)) {
+ (*pinned)++;
return 0;
+ }

/* Aggressive migration if we've failed balancing */
if (idle == NEWLY_IDLE ||
@@ -1802,11 +1804,11 @@
*/
static int move_tasks(runqueue_t *this_rq, int this_cpu, runqueue_t *busiest,
unsigned long max_nr_move, struct sched_domain *sd,
- enum idle_type idle)
+ enum idle_type idle, int *all_pinned)
{
prio_array_t *array, *dst_array;
struct list_head *head, *curr;
- int idx, pulled = 0;
+ int idx, examined = 0, pulled = 0, pinned = 0;
task_t *tmp;

if (max_nr_move <= 0 || busiest->nr_running <= 1)
@@ -1850,7 +1852,8 @@

curr = curr->prev;

- if (!can_migrate_task(tmp, busiest, this_cpu, sd, idle)) {
+ examined++;
+ if (!can_migrate_task(tmp, busiest, this_cpu, sd, idle, &pinned)) {
if (curr != head)
goto skip_queue;
idx++;
@@ -1876,6 +1879,8 @@
goto skip_bitmap;
}
out:
+ if (unlikely(examined && examined == pinned))
+ *all_pinned = 1;
return pulled;
}

@@ -2056,7 +2061,7 @@
struct sched_group *group;
runqueue_t *busiest;
unsigned long imbalance;
- int nr_moved;
+ int nr_moved, all_pinned = 0;

spin_lock(&this_rq->lock);
schedstat_inc(sd, lb_cnt[idle]);
@@ -2095,11 +2100,16 @@
*/
double_lock_balance(this_rq, busiest);
nr_moved = move_tasks(this_rq, this_cpu, busiest,
- imbalance, sd, idle);
+ imbalance, sd, idle,
+ &all_pinned);
spin_unlock(&busiest->lock);
}
- spin_unlock(&this_rq->lock);
+ /* All tasks on this runqueue were pinned by CPU affinity */
+ if (unlikely(all_pinned))
+ goto out_balanced;

+ spin_unlock(&this_rq->lock);
+
if (!nr_moved) {
schedstat_inc(sd, lb_failed[idle]);
sd->nr_balance_failed++;
@@ -2154,7 +2164,7 @@
struct sched_group *group;
runqueue_t *busiest = NULL;
unsigned long imbalance;
- int nr_moved = 0;
+ int nr_moved = 0, all_pinned;

schedstat_inc(sd, lb_cnt[NEWLY_IDLE]);
group = find_busiest_group(sd, this_cpu, &imbalance, NEWLY_IDLE);
@@ -2174,7 +2184,7 @@

schedstat_add(sd, lb_imbalance[NEWLY_IDLE], imbalance);
nr_moved = move_tasks(this_rq, this_cpu, busiest,
- imbalance, sd, NEWLY_IDLE);
+ imbalance, sd, NEWLY_IDLE, &all_pinned);
if (!nr_moved)
schedstat_inc(sd, lb_failed[NEWLY_IDLE]);

@@ -2236,6 +2246,7 @@
cpumask_t tmp;
runqueue_t *rq;
int push_cpu = 0;
+ int all_pinned;

if (group == busy_group)
goto next_group;
@@ -2261,7 +2272,8 @@
if (unlikely(busiest == rq))
goto next_group;
double_lock_balance(busiest, rq);
- if (move_tasks(rq, push_cpu, busiest, 1, sd, IDLE)) {
+ if (move_tasks(rq, push_cpu, busiest, 1,
+ sd, IDLE, &all_pinned)) {
schedstat_inc(busiest, alb_lost);
schedstat_inc(rq, alb_gained);
} else {

2004-10-23 04:26:51

by Nick Piggin

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

John Hawkes wrote:
> Nick, your patch doesn't solve the problem on my 128p.
> This variation, however, does work. It's a patch against 2.6.9.
> The difference is in move_tasks().
>

OK good. I'd be happy with you sending that to Andrew if you're
happy with it.

Thanks,
Nick

2004-10-23 04:45:11

by Nick Piggin

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

John Hawkes wrote:
> From: "John Hawkes" <[email protected]>
>
>>No, your variation doesn't solve the problem. This variation of your
>>patch does, however, solve the problem. The difference is in
>>move_tasks():
>
>
> Actually, there is another related problem that arises in
> active_load_balance() with a runqueue that holds hundreds of pinned processes.
> I'm seeing a migration_thread perpetually consuming 70% of its CPU.
>

That's what I was worried about, but in the most recent patch you
sent, the all_pinned path should skip the active load balance
completely... basically it shouldn't be running at all, and if it
is, then it's a bug, I think?
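
To make the claim concrete, here is a rough, stand-alone model (userspace C,
not kernel code) of the control flow in question: when move_tasks() reports
that every candidate task was pinned, the patched load_balance() takes the
out_balanced exit before nr_balance_failed is bumped, so the
active_load_balance()/migration_thread kick never fires from that path. The
failure threshold and the interval back-off below are paraphrased from the
2.6.9-era code and should be treated as assumptions, not an exact copy.

/* Stand-alone model of the discussed load_balance() paths (not kernel code). */
#include <stdio.h>

struct sd_model {
	int nr_balance_failed;
	int cache_nice_tries;
	int balance_interval;
	int max_interval;
};

/* Returns 1 if this call would have kicked the migration thread. */
static int load_balance_model(struct sd_model *sd, int nr_moved, int all_pinned)
{
	if (all_pinned)
		goto out_balanced;

	if (!nr_moved) {
		sd->nr_balance_failed++;
		if (sd->nr_balance_failed > sd->cache_nice_tries + 2) {
			/* here the real code sets busiest->active_balance and
			 * wakes busiest->migration_thread */
			sd->nr_balance_failed = sd->cache_nice_tries;
			return 1;
		}
	} else
		sd->nr_balance_failed = 0;
	return 0;

out_balanced:
	/* treated as balanced: back off instead of escalating */
	if (sd->balance_interval < sd->max_interval)
		sd->balance_interval *= 2;
	return 0;
}

int main(void)
{
	struct sd_model sd = { 0, 2, 8, 256 };
	int i, kicks = 0;

	/* 100 fruitless attempts against a runqueue full of pinned tasks */
	for (i = 0; i < 100; i++)
		kicks += load_balance_model(&sd, /* nr_moved */ 0, /* all_pinned */ 1);

	printf("migration_thread kicks: %d, balance_interval: %d\n",
	       kicks, sd.balance_interval);
	return 0;
}

With all_pinned honored, the loop above reports zero kicks; dropping the
all_pinned check makes it kick repeatedly, which is roughly the behaviour
John describes below.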

2004-10-25 16:36:12

by John Hawkes

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

From: "Nick Piggin" <[email protected]>
> > From: "John Hawkes" <[email protected]>
> > Actually, there is another related problem that arises in
> > active_load_balance() with a runqueue that holds hundreds of pinned processes.
> > I'm seeing a migration_thread perpetually consuming 70% of its CPU.
>
> That's what I was worried about, but in your most recent
> patch you just sent, the all_pinned path should skip over
> the active load balance completely... basically it shouldn't
> be running at all, and if it is then it is a bug I think?

To reiterate: this is probably reproducible on smaller SMP systems, too.
Just do a 'runon' (using sys_sched_setaffinity) of ~200 (or more) small
computebound processes on a single CPU.

My patch -- which has load_balance() skip the (busiest->active_balance = 1)
trigger that starts up active_load_balance() -- does seem to reduce the
frequency of bursts of long-running activity of the migration thread, but
those bursts of activity are still there, with migration_thread consuming
75-95% of its CPU for several seconds (as observed by 'top'). I have not yet
determined what's happening. It might be an artifact of how long it takes to
do those 'runon' startups of the computebound processes.

John Hawkes

2004-10-26 01:25:44

by Nick Piggin

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

John Hawkes wrote:

>
> To reiterate: this is probably reproducible on smaller SMP systems, too.
> Just do a 'runon' (using sys_sched_setaffinity) of ~200 (or more) small
> computebound processes on a single CPU.
>

Yeah I tried that with a handful... I'll try again with 200.

> My patch -- which has load_balance() skip the (busiest->active_balance = 1)
> trigger that starts up active_load_balance() -- does seem to reduce the
> frequency of bursts of long-running activity of the migration thread, but
> those bursts of activity are still there, with migration_thread consuming
> 75-95% of its CPU for several seconds (as observed by 'top'). I have not yet
> determined what's happening. It might be an artifact of how long it takes to
> do those 'runon' startups of the computebound processes.
>

Hmm... it would be hard to believe that it is all to do with
migrating tasks... but it's possible, I guess.

I'll also look into it if I can reproduce the problem here.

2004-10-30 00:28:05

by Matthew Dobson

Subject: Re: [PATCH, 2.6.9] improved load_balance() tolerance for pinned tasks

On Mon, 2004-10-25 at 09:02, John Hawkes wrote:
> From: "Nick Piggin" <[email protected]>
> > > From: "John Hawkes" <[email protected]>
> > > Actually, there is another related problem that arises in
> > > active_load_balance() with a runqueue that holds hundreds of pinned processes.
> > > I'm seeing a migration_thread perpetually consuming 70% of its CPU.
> >
> > That's what I was worried about, but in your most recent
> > patch you just sent, the all_pinned path should skip over
> > the active load balance completely... basically it shouldn't
> > be running at all, and if it is then it is a bug I think?
>
> To reiterate: this is probably reproducible on smaller SMP systems, too.
> Just do a 'runon' (using sys_sched_setaffinity) of ~200 (or more) small
> computebound processes on a single CPU.
>
> My patch -- which has load_balance() skip the (busiest->active_balance = 1)
> trigger that starts up active_load_balance() -- does seem to reduce the
> frequency of bursts of long-running activity of the migration thread, but
> those bursts of activity are still there, with migration_thread consuming
> 75-95% of its CPU for several seconds (as observed by 'top'). I have not yet
> determined what's happening. It might be an artifact of how long it takes to
> do those 'runon' startups of the computebound processes.

You may want to try these tests again with linux-2.6.10-rc1-mm2. It has
two patches that fix some broken behavior in active_load_balance(). The
2.6.9 version of active_load_balance() was not considering a great many
CPUs as potential recipients of tasks, due to some small logic problems
in the code.

Cheers!

-Matt