2015-06-01 19:39:02

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 05/29/2015 05:03 PM, Josef Bacik wrote:
> On 05/28/2015 07:05 AM, Peter Zijlstra wrote:
>>
>> So maybe you want something like the below; that cures the thing Morten
>> raised, and we continue looking for sd, even after we found affine_sd.
>>
>> It also avoids the pointless idle_cpu() check Mike raised by making
>> select_idle_sibling() return -1 if it doesn't find anything.
>>
>> Then it continues doing the full balance IFF sd was set, which is keyed
>> off of sd->flags.
>>
>> And note (as Mike already said), BALANCE_WAKE does _NOT_ look for idle
>> CPUs, it looks for the least loaded CPU. And it's damn expensive.
>>
>>
>> Rewriting this entire thing is somewhere on the todo list :/
>>
>
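A rough sketch of the flow Peter describes above (reconstructed from his description, not his literal patch; the function names are the ones he mentions):

	if (sd_flag & SD_BALANCE_WAKE) {
		int new_cpu = select_idle_sibling(p, prev_cpu);

		if (new_cpu != -1)
			return new_cpu;	/* found an idle CPU, done */
		/*
		 * select_idle_sibling() now returns -1 when nothing is
		 * idle; only then, and only if some domain had sd_flag
		 * set, does the full-balance walk below run.  That walk
		 * (find_idlest_group()/find_idlest_cpu()) hunts for the
		 * least loaded CPU, not an idle one, which is what makes
		 * BALANCE_WAKE so expensive.
		 */
	}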

Ok I got this patch to give me the same performance as all our other
crap, just need to apply this incremental


diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b71eb2b..e11cfec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4761,13 +4761,10 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 
 		if (tmp->flags & sd_flag)
 			sd = tmp;
-		else if (!want_affine || (want_affine && affine_sd))
-			break;
 	}
 
 	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync)) {
 		prev_cpu = cpu;
-		sd = NULL; /* WAKE_AFFINE trumps BALANCE_WAKE */
 	}
 
 	if (sd_flag & SD_BALANCE_WAKE) {

And everything works fine. Does that seem reasonable? Thanks,

Josef


2015-06-01 20:42:48

by Peter Zijlstra

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On Mon, 2015-06-01 at 15:38 -0400, Josef Bacik wrote:

> Ok I got this patch to give me the same performance as all our other
> crap, just need to apply this incremental
>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b71eb2b..e11cfec 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4761,13 +4761,10 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
> 
>  		if (tmp->flags & sd_flag)
>  			sd = tmp;
> -		else if (!want_affine || (want_affine && affine_sd))
> -			break;
>  	}

That bit worries me a bit, because that causes us to have a weird
definition for what sd is.

Without WAKE_AFFINE, sd is the biggest domain with BALANCE_WAKE (or any
other sd_flag) set.

But with WAKE_AFFINE, it's the first domain that satisfies the wake
affine constraint of covering both the previous and waking cpu. It
basically reduces sd to affine_sd.
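For readers without the tree handy, a sketch of the walk in question (pieced together from the diffs in this thread, not quoted verbatim):

	for_each_domain(cpu, tmp) {
		/*
		 * The first domain spanning both the waking cpu and
		 * prev_cpu becomes the wake-affine candidate.
		 */
		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp)))
			affine_sd = tmp;

		if (tmp->flags & sd_flag)
			sd = tmp;	/* keeps widening upward... */
		else if (!want_affine || (want_affine && affine_sd))
			break;		/* ...unless this break fires */
	}

Without the break, sd ends up as the widest domain with sd_flag set; with it, the walk stops as soon as affine_sd exists, which is the reduction described above.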

2015-06-01 21:04:58

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/01/2015 04:42 PM, Peter Zijlstra wrote:
> On Mon, 2015-06-01 at 15:38 -0400, Josef Bacik wrote:
>
>> Ok I got this patch to give me the same performance as all our other
>> crap, just need to apply this incremental
>>
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index b71eb2b..e11cfec 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4761,13 +4761,10 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>>
>>  		if (tmp->flags & sd_flag)
>>  			sd = tmp;
>> -		else if (!want_affine || (want_affine && affine_sd))
>> -			break;
>>  	}
>
> That bit worries me a bit, because that causes us to have a weird
> definition for what sd is.
>
> Without WAKE_AFFINE, sd is the biggest domain with BALANCE_WAKE (or any
> other sd_flag) set.
>
> But with WAKE_AFFINE, it's the first domain that satisfies the wake
> affine constraint of covering both the previous and waking cpu. It
> basically reduces sd to affine_sd.
>

I'm just giving you what made the performance good for us; I'm relying
on you to make it sane ;). With that bit, if !want_affine we break out
as soon as we hit an sd that doesn't match sd_flag, which seems less
than helpful. I was going to change it to

else if (want_affine && affine_sd)

but the thing is we don't do anything with affine_sd unless the bit
below matches

if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync))

so it seems like we're leaving ourselves without sd set in a few cases
where we'd actually want it set, so I just nuked the whole thing and
carried on. I'm all ears for other ideas, I'm just letting you know
what my results are since you are the guys who actually understand this
stuff. Thanks,

Josef

2015-06-01 22:16:02

by Rik van Riel

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/01/2015 03:38 PM, Josef Bacik wrote:

> Ok I got this patch to give me the same performance as all our other
> crap, just need to apply this incremental
>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b71eb2b..e11cfec 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4761,13 +4761,10 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
> 
>  		if (tmp->flags & sd_flag)
>  			sd = tmp;
> -		else if (!want_affine || (want_affine && affine_sd))
> -			break;
>  	}
> 
>  	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync)) {
>  		prev_cpu = cpu;
> -		sd = NULL; /* WAKE_AFFINE trumps BALANCE_WAKE */

Given Peter's worries about wake_affine and affine_sd,
should the above be sd = affine_sd, in case select_idle_sibling
cannot find an idle sibling?

That way we can attempt to at least find an idle cpu inside
the affine_sd.

Of course, there may be subtleties here I am overlooking...

>  	}
> 
>  	if (sd_flag & SD_BALANCE_WAKE) {


--
All rights reversed

2015-06-02 17:14:52

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/01/2015 04:42 PM, Peter Zijlstra wrote:
> On Mon, 2015-06-01 at 15:38 -0400, Josef Bacik wrote:
>
>> Ok I got this patch to give me the same performance as all our other
>> crap, just need to apply this incremental
>>
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index b71eb2b..e11cfec 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4761,13 +4761,10 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>>
>>  		if (tmp->flags & sd_flag)
>>  			sd = tmp;
>> -		else if (!want_affine || (want_affine && affine_sd))
>> -			break;
>>  	}
>
> That bit worries me a bit, because that causes us to have a weird
> definition for what sd is.
>
> Without WAKE_AFFINE, sd is the biggest domain with BALANCE_WAKE (or any
> other sd_flag) set.
>
>> But with WAKE_AFFINE, it's the first domain that satisfies the wake
> affine constraint of covering both the previous and waking cpu. It
> basically reduces sd to affine_sd.
>

Ok I took Rik's idea and came out with this


diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b71eb2b..75073d3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4761,13 +4761,14 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 
 		if (tmp->flags & sd_flag)
 			sd = tmp;
-		else if (!want_affine || (want_affine && affine_sd))
+		else if (want_affine && affine_sd) {
+			sd = affine_sd;
 			break;
+		}
 	}
 
 	if (affine_sd && cpu != prev_cpu && wake_affine(affine_sd, p, sync)) {
 		prev_cpu = cpu;
-		sd = NULL; /* WAKE_AFFINE trumps BALANCE_WAKE */
 	}
 
 	if (sd_flag & SD_BALANCE_WAKE) {

But now that I re-read your response, I think this is more of what you
were worried about, not less.

Basically it comes down to this: if sd isn't set, we get shit performance.
I realize that this magic to find an idle cpu when sd is set is pretty
heavy-handed, but it's obviously helpful in our case.
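For reference, a condensed sketch of that search, modeled on the find_idlest_group()/find_idlest_cpu() walk of that era (illustrative, not verbatim kernel code):

	while (sd) {
		struct sched_group *group;
		struct sched_domain *tmp;
		int weight;

		if (!(sd->flags & sd_flag)) {
			sd = sd->child;
			continue;
		}

		/* scan every group at this level for the least loaded one */
		group = find_idlest_group(sd, p, cpu, sd_flag);
		if (!group) {
			sd = sd->child;
			continue;
		}

		/* then scan every cpu in the chosen group */
		new_cpu = find_idlest_cpu(group, p, cpu);
		if (new_cpu == cpu) {
			sd = sd->child;
			continue;
		}

		/* hop to the new cpu and keep descending from there */
		cpu = new_cpu;
		weight = sd->span_weight;
		sd = NULL;
		for_each_domain(cpu, tmp) {
			if (weight <= tmp->span_weight)
				break;
			if (tmp->flags & sd_flag)
				sd = tmp;
		}
	}

Every level touches every group and every candidate cpu, which is where the expense comes from; the question below is when that cost is worth paying.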

So let me ask this question. When do we want to do the heavy-handed
search, and when do we not? With WAKE_AFFINE, what is our ultimate goal
vs the other SDs? If we don't have an sd that matches our sd_flags,
what should we be doing: should we just go with whatever cpu we're on
and carry on? Thanks,

Josef

2015-06-03 14:12:56

by Rik van Riel

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/02/2015 01:12 PM, Josef Bacik wrote:

> But now that I re-read your response, I think this is more of what you
> were worried about, not less.
>
> Basically it comes down to this: if sd isn't set, we get shit performance.
> I realize that this magic to find an idle cpu when sd is set is pretty
> heavy-handed, but it's obviously helpful in our case.

Ingo and Peter appear to be worried that searching
through too many idle CPUs leads to bad performance.

Your test results seem to indicate that finding an
idle CPU really really helps performance.

There is a policy vs mechanism thing here. Ingo and Peter
are worried about the overhead in the mechanism of finding
an idle CPU. Your measurements show that the policy of
finding an idle CPU is the correct one.

Can we make the policy change now, and optimize the
mechanism later?

> So let me ask this question. When do we want to do the heavy-handed
> search, and when do we not? With WAKE_AFFINE, what is our ultimate goal
> vs the other SDs? If we don't have an sd that matches our sd_flags,
> what should we be doing: should we just go with whatever cpu we're on
> and carry on? Thanks,
>
> Josef

2015-06-03 14:24:23

by Peter Zijlstra

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:

> There is a policy vs mechanism thing here. Ingo and Peter
> are worried about the overhead in the mechanism of finding
> an idle CPU. Your measurements show that the policy of
> finding an idle CPU is the correct one.

For his workload; I'm sure I can find a workload where it hurts.

In fact, I'm fairly sure Mike knows one from the top of his head, seeing
how he's the one playing about trying to shrink that idle search :-)

2015-06-03 14:50:38

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/03/2015 10:24 AM, Peter Zijlstra wrote:
> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
>
>> There is a policy vs mechanism thing here. Ingo and Peter
>> are worried about the overhead in the mechanism of finding
>> an idle CPU. Your measurements show that the policy of
>> finding an idle CPU is the correct one.
>
> For his workload; I'm sure I can find a workload where it hurts.
>
> In fact, I'm fairly sure Mike knows one from the top of his head, seeing
> how he's the one playing about trying to shrink that idle search :-)
>

So the perf bench sched microbenchmarks are a pretty good analog for our
workload. I run

perf bench sched messaging -g 100 -l 10000
perf bench sched pipe

5 times and average the results to get an answer; the messaging one is
really the closest analog and the one I look at. I get like 56 seconds of
runtime on plain 4.0 and 47 seconds patched; that's how I check my little
experiments before doing the full real workload.

I don't want to tune the scheduler just for our workload, but the
microbenchmarks we have are also showing the same performance
improvements. I would be super interested in workloads where this patch
doesn't help so we could integrate that workload into perf sched bench
to make us more confident in making policy changes in the scheduler. So
Mike if you have something specific in mind please elaborate and I'm
happy to do the legwork to get it into perf bench and to test things
until we're happy.

In the meantime I really want to get this fixed for us; I do not want to
carry some weird old patch around until we rebase again next year and
then do this whole dance again. What would be the way
forward for getting this fixed now? Do I need to hide it behind a
sysctl or config option? Thanks,

Josef

2015-06-03 15:30:53

by Mike Galbraith

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On Wed, 2015-06-03 at 16:24 +0200, Peter Zijlstra wrote:
> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
>
> > There is a policy vs mechanism thing here. Ingo and Peter
> > are worried about the overhead in the mechanism of finding
> > an idle CPU. Your measurements show that the policy of
> > finding an idle CPU is the correct one.
>
> For his workload; I'm sure I can find a workload where it hurts.
>
> In fact, I'm fairly sure Mike knows one from the top of his head, seeing
> how he's the one playing about trying to shrink that idle search :-)

Like anything where scheduling latency doesn't heavily dominate. Even
if searching were free, bounces aren't, even for the very light.

-Mike

2015-06-03 15:58:10

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/03/2015 11:30 AM, Mike Galbraith wrote:
> On Wed, 2015-06-03 at 16:24 +0200, Peter Zijlstra wrote:
>> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
>>
>>> There is a policy vs mechanism thing here. Ingo and Peter
>>> are worried about the overhead in the mechanism of finding
>>> an idle CPU. Your measurements show that the policy of
>>> finding an idle CPU is the correct one.
>>
>> For his workload; I'm sure I can find a workload where it hurts.
>>
>> In fact, I'm fairly sure Mike knows one from the top of his head, seeing
>> how he's the one playing about trying to shrink that idle search :-)
>
> Like anything where scheduling latency doesn't heavily dominate. Even
> if searching were free, bounces aren't, even for the very light.
>

If scheduling latency doesn't hurt then making the search shouldn't
matter should it? I get that migrations aren't free, but it seems like
they can't hurt that much. This application is huge, it's our
webserver, we're doing like 400 requests per second on these things, and
hands down moving stuff to idle cpus is beating the pants off of staying
on the same cpu. Is there a specific workload I could build a test for
that you think this approach would hurt? Thanks,

Josef

2015-06-03 16:53:13

by Mike Galbraith

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On Wed, 2015-06-03 at 11:57 -0400, Josef Bacik wrote:
> On 06/03/2015 11:30 AM, Mike Galbraith wrote:
> > On Wed, 2015-06-03 at 16:24 +0200, Peter Zijlstra wrote:
> >> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
> >>
> >>> There is a policy vs mechanism thing here. Ingo and Peter
> >>> are worried about the overhead in the mechanism of finding
> >>> an idle CPU. Your measurements show that the policy of
> >>> finding an idle CPU is the correct one.
> >>
> >> For his workload; I'm sure I can find a workload where it hurts.
> >>
> >> In fact, I'm fairly sure Mike knows one from the top of his head, seeing
> >> how he's the one playing about trying to shrink that idle search :-)
> >
> > Like anything where scheduling latency doesn't heavily dominate. Even
> > if searching were free, bounces aren't, even for the very light.
> >
>
> If scheduling latency doesn't hurt then making the search shouldn't
> matter should it? I get that migrations aren't free, but it seems like
> they can't hurt that much.

Nah, they don't hurt much :)

commit e0a79f529d5ba2507486d498b25da40911d95cf6
Author: Mike Galbraith <[email protected]>
Date: Mon Jan 28 12:19:25 2013 +0100

sched: Fix select_idle_sibling() bouncing cow syndrome

If the previous CPU is cache affine and idle, select it.

The current implementation simply traverses the sd_llc domain,
taking the first idle CPU encountered, which walks buddy pairs
hand in hand over the package, inflicting excruciating pain.

1 tbench pair (worst case) in a 10 core + SMT package:

pre    15.22 MB/sec 1 procs
post  252.01 MB/sec 1 procs


> This application is huge, it's our
> webserver, we're doing like 400 requests per second on these things, and
> hands down moving stuff to idle cpus is beating the pants off of staying
> on the same cpu. Is there a specific workload I could build a test for
> that you think this approach would hurt? Thanks,

Search cost hurts fast movers, as does dragging even a small footprint
all over the place, as you can see above.

-Mike

2015-06-03 17:17:35

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/03/2015 12:53 PM, Mike Galbraith wrote:
> On Wed, 2015-06-03 at 11:57 -0400, Josef Bacik wrote:
>> On 06/03/2015 11:30 AM, Mike Galbraith wrote:
>>> On Wed, 2015-06-03 at 16:24 +0200, Peter Zijlstra wrote:
>>>> On Wed, 2015-06-03 at 10:12 -0400, Rik van Riel wrote:
>>>>
>>>>> There is a policy vs mechanism thing here. Ingo and Peter
>>>>> are worried about the overhead in the mechanism of finding
>>>>> an idle CPU. Your measurements show that the policy of
>>>>> finding an idle CPU is the correct one.
>>>>
>>>> For his workload; I'm sure I can find a workload where it hurts.
>>>>
>>>> In fact, I'm fairly sure Mike knows one from the top of his head, seeing
>>>> how he's the one playing about trying to shrink that idle search :-)
>>>
>>> Like anything where scheduling latency doesn't heavily dominate. Even
>>> if searching were free, bounces aren't, even for the very light.
>>>
>>
>> If scheduling latency doesn't hurt then making the search shouldn't
>> matter should it? I get that migrations aren't free, but it seems like
>> they can't hurt that much.
>
> Nah, they don't hurt much :)
>
> commit e0a79f529d5ba2507486d498b25da40911d95cf6
> Author: Mike Galbraith <[email protected]>
> Date: Mon Jan 28 12:19:25 2013 +0100
>
> sched: Fix select_idle_sibling() bouncing cow syndrome
>
> If the previous CPU is cache affine and idle, select it.
>
> The current implementation simply traverses the sd_llc domain,
> taking the first idle CPU encountered, which walks buddy pairs
> hand in hand over the package, inflicting excruciating pain.
>
> 1 tbench pair (worst case) in a 10 core + SMT package:
>
> pre    15.22 MB/sec 1 procs
> post  252.01 MB/sec 1 procs
>
>
>> This application is huge, it's our
>> webserver, we're doing like 400 requests per second on these things, and
>> hands down moving stuff to idle cpus is beating the pants off of staying
>> on the same cpu. Is there a specific workload I could build a test for
>> that you think this approach would hurt? Thanks,
>
> Search cost hurts fast movers, as does dragging even a small footprint
> all over the place, as you can see above.
>

Eesh ok, do you happen to remember how you ran tbench so I can add it to
my tests here? In addition to fixing this problem we're also interested
in tracking performance of new kernels so we don't have to do this "what
the hell went wrong in the last 6 releases" dance every year, so I'm
throwing every performance thing we find useful in our test
infrastructure. Thanks,

Josef

2015-06-03 17:43:42

by Mike Galbraith

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On Wed, 2015-06-03 at 13:16 -0400, Josef Bacik wrote:

> Eesh ok, do you happen to remember how you ran tbench so I can add it to
> my tests here? In addition to fixing this problem we're also interested
> in tracking performance of new kernels so we don't have to do this "what
> the hell went wrong in the last 6 releases" dance every year, so I'm
> throwing every performance thing we find useful in our test
> infrastructure. Thanks,
>
> Josef

Start a tbench server, then tbench -t 30 1 localhost. You're unlikely
to find anything as painful as that bouncing cow bug was, but you won't
have to look hard at all to find bounce pain.

There are also other loads like your server where waking to an idle cpu
dominates all else, pgbench is one of those. In that case, you've got a
1:N waker/wakee relationship, and what matters above ALL else is when
the mother of all work (the single server thread) wants a CPU, it had
better get it NOW, else the load stalls. Likewise, 'mom' being
preempted hurts truckloads. Perhaps your server has a similar thing
going on, keeping wakees the hell away from the waker rules all.
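To make that shape concrete, here is a minimal userspace sketch of a 1:N waker/wakee load (purely illustrative, not pgbench code): every item funnels through the single dispatcher, so any delay in the dispatcher getting a CPU stalls all N workers at once.

	#include <pthread.h>
	#include <stdio.h>

	#define NWORKERS 8

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t work_ready = PTHREAD_COND_INITIALIZER;
	static long pending;	/* items queued by the dispatcher */
	static int done;

	static void *worker(void *arg)
	{
		(void)arg;
		pthread_mutex_lock(&lock);
		while (!done) {
			while (pending == 0 && !done)
				pthread_cond_wait(&work_ready, &lock);
			if (pending > 0)
				pending--;	/* "process" one item */
		}
		pthread_mutex_unlock(&lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t tids[NWORKERS];
		long n;
		int i;

		for (i = 0; i < NWORKERS; i++)
			pthread_create(&tids[i], NULL, worker, NULL);

		/* 'mom': the single waker every wakee depends on */
		for (n = 0; n < 1000000; n++) {
			pthread_mutex_lock(&lock);
			pending++;
			pthread_cond_signal(&work_ready);  /* wake one of N */
			pthread_mutex_unlock(&lock);
		}

		pthread_mutex_lock(&lock);
		done = 1;
		pthread_cond_broadcast(&work_ready);
		pthread_mutex_unlock(&lock);

		for (i = 0; i < NWORKERS; i++)
			pthread_join(tids[i], NULL);

		printf("dispatched to %d workers\n", NWORKERS);
		return 0;
	}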

-Mike

2015-06-03 20:34:43

by Josef Bacik

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On 06/03/2015 01:43 PM, Mike Galbraith wrote:
> On Wed, 2015-06-03 at 13:16 -0400, Josef Bacik wrote:
>
>> Eesh ok, do you happen to remember how you ran tbench so I can add it to
>> my tests here? In addition to fixing this problem we're also interested
>> in tracking performance of new kernels so we don't have to do this "what
>> the hell went wrong in the last 6 releases" dance every year, so I'm
>> throwing every performance thing we find useful in our test
>> infrastructure. Thanks,
>>
>> Josef
>
> Start a tbench server, then tbench -t 30 1 localhost. You're unlikely
> to find anything as painful as that bouncing cow bug was, but you won't
> have to look hard at all to find bounce pain.
>
> There are also other loads like your server where waking to an idle cpu
> dominates all else, pgbench is one of those. In that case, you've got a
> 1:N waker/wakee relationship, and what matters above ALL else is when
> the mother of all work (the single server thread) wants a CPU, it had
> better get it NOW, else the load stalls. Likewise, 'mom' being
> preempted hurts truckloads. Perhaps your server has a similar thing
> going on, keeping wakees the hell away from the waker rules all.
>

Yeah, our server has two waker threads (one per numa node) and then N
wakee threads. I'll run tbench and pgbench with the new
patches and see if there's a degradation. Thanks,

Josef

2015-06-04 04:52:40

by Mike Galbraith

Subject: Re: [PATCH RESEND] sched: prefer an idle cpu vs an idle sibling for BALANCE_WAKE

On Wed, 2015-06-03 at 16:34 -0400, Josef Bacik wrote:
> On 06/03/2015 01:43 PM, Mike Galbraith wrote:

> > There are also other loads like your server where waking to an idle cpu
> > dominates all else, pgbench is one of those. In that case, you've got a
> > 1:N waker/wakee relationship, and what matters above ALL else is when
> > the mother of all work (the single server thread) wants a CPU, it had
> > better get it NOW, else the load stalls. Likewise, 'mom' being
> > preempted hurts truckloads. Perhaps your server has a similar thing
> > going on, keeping wakees the hell away from the waker rules all.
> >
>
> Yeah, our server has two waker threads (one per numa node) and then N
> wakee threads. I'll run tbench and pgbench with the new
> patches and see if there's a degradation. Thanks,

If you look at wake_wide(), it could perhaps be used to select the wider
search only for the right flavor of load component when BALANCE_WAKE is
set. That would let the cache lovers in your box continue to perform
while improving the 1:N component. That wider search still needs to
become cheaper though, the low-hanging fruit being to stop searching when
you find load = 0... but you may meet the energy efficient folks, who
iirc want to make it even more expensive.
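For context, a sketch of the sort of gating being suggested, loosely modeled on the wake_wide() heuristic of that era (the factor and exact comparison are illustrative, not the kernel's literal code):

	static int wake_wide_sketch(struct task_struct *p)
	{
		int factor = this_cpu_read(sd_llc_size);

		/*
		 * A 1:N waker flips between many different wakees, so its
		 * wakee_flips count runs high, while a stable 1:1 buddy
		 * pair stays low.  Only the fan-out case would earn the
		 * wide idle search.
		 */
		if (p->wakee_flips > factor &&
		    current->wakee_flips > factor * p->wakee_flips)
			return 1;	/* 1:N pattern: search wide */

		return 0;		/* cache-affine wakeup: stay close */
	}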

wake_wide() inadvertently helped another sore spot btw - a gaggle of
pretty light tasks being awakened from an interrupt source tended to
cluster around that source, preventing such loads from being all they
can be in a very similar manner. Xen (shudder;) showed that nicely in
older kernels, due to the way its weird dom0 gizmo works.

-Mike