2008-08-18 06:52:49

by Ingo Molnar

Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus


* Zhang, Yanmin <[email protected]> wrote:

> Ingo,
>
> My Linux mailbox is locked, so I am sending from another mailbox.
>
> I tested the 'scale sysctl_sched_shares_ratelimit with nr_cpus' patch
> on a 16-core Tigerton with volanoMark. CONFIG_GROUP_SCHED=y. Compared
> with pure 2.6.27-rc3, the patched kernel shows about a 15% improvement,
> and volanoMark seems to run more smoothly with the patched kernel.

cool. It's already upstream (post-rc3 commit):

55cd534: sched: scale sysctl_sched_shares_ratelimit with nr_cpus

Ingo
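
With that commit in place, the scaled value can be checked at runtime. A
minimal sketch, assuming the knob is exposed on the running config (on
kernels of this era it typically appears as kernel.sched_shares_ratelimit;
the exact path and whether it needs CONFIG_SCHED_DEBUG=y should be
verified on the box):

	# read the shares-update ratelimit after boot-time scaling;
	# the path is an assumption -- verify it exists on your kernel
	cat /proc/sys/kernel/sched_shares_ratelimit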


2008-08-18 06:54:58

by Zhang, Yanmin

Subject: RE: scale sysctl_sched_shares_ratelimit with nr_cpus

>>-----Original Message-----
>>From: Ingo Molnar [mailto:[email protected]]
>>Sent: Monday, August 18, 2008 2:52 PM
>>To: Zhang, Yanmin
>>Cc: [email protected]; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <[email protected]> wrote:
>>
>>> Ingo,
>>>
>>> My Linux mailbox is locked, so I am sending from another mailbox.
>>>
>>> I tested the 'scale sysctl_sched_shares_ratelimit with nr_cpus' patch
>>> on a 16-core Tigerton with volanoMark. CONFIG_GROUP_SCHED=y. Compared
>>> with pure 2.6.27-rc3, the patched kernel shows about a 15% improvement,
>>> and volanoMark seems to run more smoothly with the patched kernel.
>>
>>cool. It's already upstream (post-rc3 commit):
[YM] But compared with 2.6.26, the volanoMark result still shows about a
60% regression.

>>
>> 55cd534: sched: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>> Ingo

2008-08-18 07:02:14

by Ingo Molnar

Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus


* Zhang, Yanmin <[email protected]> wrote:

> >>cool. It's already upstream (post-rc3 commit):
> [YM] But compared with 2.6.26, the volanoMark result still shows about
> a 60% regression.

Does a scheduler trace show anything about why that drop happens? Do
something like this to trace the scheduler:

assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:

echo 1 > /debug/tracing/tracing_cpumask
echo sched_switch > /debug/tracing/current_tracer
cat /debug/tracing/trace_pipe > trace.txt

( regarding tracing_cpumask: trace only 1 CPU to make sure volanoMark is
not disturbed too much by tracing on many CPUs. )

Ingo
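
Spelled out end to end, a minimal sketch of that session (assuming the
2.6.27-era ftrace layout; the tracer is selected by writing to
current_tracer, while available_tracers only lists what is compiled in):

	# mount debugfs if it is not already at /debug
	mkdir -p /debug
	mount -t debugfs nodev /debug

	# trace only CPU 0 so the benchmark is barely disturbed
	echo 1 > /debug/tracing/tracing_cpumask

	# select the sched_switch tracer
	echo sched_switch > /debug/tracing/current_tracer

	# stream trace records while the workload runs
	cat /debug/tracing/trace_pipe > trace.txt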

2008-08-18 08:26:24

by Zhang, Yanmin

Subject: RE: scale sysctl_sched_shares_ratelimit with nr_cpus



>>-----Original Message-----
>>From: Ingo Molnar [mailto:[email protected]]
>>Sent: Monday, August 18, 2008 3:02 PM
>>To: Zhang, Yanmin
>>Cc: [email protected]; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <[email protected]> wrote:
>>
>>> >>cool. It's already upstream (post-rc3 commit):
>>> [YM] But compared with 2.6.26, the volanoMark result still shows
>>> about a 60% regression.
>>
>>Does a scheduler trace show anything about why that drop happens? Do
>>something like this to trace the scheduler:
>>
>>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
>>
>> echo 1 > /debug/tracing/tracing_cpumask
>> echo sched_switch > /debug/tracing/current_tracer
>> cat /debug/tracing/trace_pipe > trace.txt
[YM] Thanks for the good pointer. I collected the data and didn't find
anything abnormal except the waker pid.

Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
The waker pid above is 13871 while the current pid is 13665. I found
lots of such mismatched entries.

Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R


BTW, I analyzed the schedstat data and found that wake_affine and
load_balance_newidle seem abnormal: 2.6.27-rc has more task pulls.
I set CONFIG_GROUP_SCHED=n for the above testing.

>>
>>( regarding tracing_cpumask: trace only 1 CPU to make sure volanoMark is
>> not disturbed too much by tracing on many CPUs. )
>>
>> Ingo
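
One way to quantify those extra pulls is to snapshot /proc/schedstat
around the run and compare the per-domain lb_* counters between kernels.
A minimal sketch, assuming CONFIG_SCHEDSTATS=y (the field layout for the
schedstat version in use is described in
Documentation/scheduler/sched-stats.txt):

	# counters are cumulative, so diff a before/after snapshot
	cat /proc/schedstat > schedstat.before
	# ... run the volanoMark workload here ...
	cat /proc/schedstat > schedstat.after
	diff schedstat.before schedstat.after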

2008-08-18 08:42:23

by Ingo Molnar

Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus


* Zhang, Yanmin <[email protected]> wrote:

> >>Does a scheduler trace show anything about why that drop happens? Do
> >>something like this to trace the scheduler:
> >>
> >>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
> >>
> >> echo 1 > /debug/tracing/tracing_cpumask
> >> echo sched_switch > /debug/tracing/current_tracer
> >> cat /debug/tracing/trace_pipe > trace.txt
> [YM] Thanks for the good pointer. I collected the data and didn't find
> anything abnormal except the waker pid.
>
> Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
> Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
> Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
> Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
> Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
> Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
> Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
> Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
> Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
> Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
> Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
> The waker pid above is 13871 while the current pid is 13665. I found
> lots of such mismatched entries.
>
> Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
> Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
> Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
> Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
> Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
> Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
> Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
> Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R
>
>
> BTW, I analyzed the schedstat data and found that wake_affine and
> load_balance_newidle seem abnormal: 2.6.27-rc has more task pulls.
> I set CONFIG_GROUP_SCHED=n for the above testing.

hm, does this mean there's too much idle time during the test run,
because we don't load-balance aggressively enough?

Ingo

2008-08-18 08:46:06

by Zhang, Yanmin

Subject: RE: scale sysctl_sched_shares_ratelimit with nr_cpus

>>-----Original Message-----
>>From: Ingo Molnar [mailto:[email protected]]
>>Sent: Monday, August 18, 2008 4:42 PM
>>To: Zhang, Yanmin
>>Cc: [email protected]; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <[email protected]> wrote:
>>
>>> >>Does a scheduler trace show anything about why that drop happens? Do
>>> >>something like this to trace the scheduler:
>>> >>
>>> >>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
>>> >>
>>> >> echo 1 > /debug/tracing/tracing_cpumask
>>> >> echo sched_switch > /debug/tracing/current_tracer
>>> >> cat /debug/tracing/trace_pipe > trace.txt
>>> [YM] Thanks for the good pointer. I collected the data and didn't find
>>> anything abnormal except the waker pid.
>>>
>>> Receiver-197-13665 [00] 1369.966423: 13665:120:R + 13607:120:S
>>> Receiver-197-13665 [00] 1369.966440: 13665:120:R + 13611:120:S
>>> Receiver-197-13665 [00] 1369.966458: 13665:120:R + 13615:120:S
>>> Receiver-197-13665 [00] 1369.966463: 13665:120:R + 13619:120:S
>>> Receiver-197-13665 [00] 1369.966466: 13665:120:R + 13623:120:S
>>> Receiver-197-13665 [00] 1369.966469: 13665:120:R + 13627:120:S
>>> Receiver-197-13665 [00] 1369.966475: 13665:120:R + 13631:120:S
>>> Receiver-197-13665 [00] 1369.966480: 13665:120:R + 13635:120:S
>>> Receiver-197-13665 [00] 1369.966485: 13665:120:R + 13639:120:S
>>> Receiver-197-13665 [00] 1369.966495: 13665:120:R + 13643:120:S
>>> Receiver-197-13665 [00] 1369.966507: 13871:120:R + 13647:120:S
>>> The waker pid above is 13871 while the current pid is 13665. I found
>>> lots of such mismatched entries.
>>>
>>> Receiver-197-13665 [00] 1369.966513: 13465:120:R + 13651:120:S
>>> Receiver-197-13665 [00] 1369.966516: 13665:120:R + 13655:120:S
>>> Receiver-197-13665 [00] 1369.966521: 13665:120:R + 13659:120:S
>>> Receiver-197-13665 [00] 1369.966530: 13665:120:R + 13667:120:S
>>> Receiver-197-13665 [00] 1369.966544: 13883:120:R + 13663:120:S
>>> Receiver-197-13665 [00] 1369.966549: 13665:120:R ==> 13667:120:R
>>> Sender-140-13667 [00] 1369.966573: 13351:120:R + 13668:120:S
>>> Sender-140-13667 [00] 1369.966578: 13667:120:R ==> 13659:120:R
>>>
>>>
>>> BTW, I analyzed the schedstat data and found that wake_affine and
>>> load_balance_newidle seem abnormal: 2.6.27-rc has more task pulls.
>>> I set CONFIG_GROUP_SCHED=n for the above testing.
>>
>>hm, does this mean there's too much idle time during the test run,
>>because we don't load-balance aggressively enough?
[YM] With 2.6.26, CPU idle is about 6%; with 2.6.27-rc, idle is about
0~1%. It seems volanoMark prefers some idle time. I diffed the scheduler
source code and couldn't find why load balancing pulls more tasks
successfully in 2.6.27-rc.
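
For the idle-time comparison, per-CPU idle percentages during the run can
be sampled with mpstat (a sketch; mpstat ships with the sysstat package):

	# sample all CPUs once per second for 60 seconds; the %idle
	# column shows how much idle time each CPU accumulates
	mpstat -P ALL 1 60 > idle.txt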