Hi all,
some results based on massive_intr.c by Satoru can be found at
http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
Run several times like:
$ massive_intr 5 2 >> results-kernel-5.2
$ massive_intr 300 300 >> results-kernel-300.300
To calculate average and standard deviation:
$ original-awk -f awkscript results-file
awkscript file included.
(for Debian users: apt-get install original-awk)
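The awkscript itself is only attached, not shown inline; a minimal stand-in, assuming the per-process counts to be averaged sit in the second column of the results file (the attached script may compute it differently), would be something like:
$ original-awk '{ s += $2; q += $2*$2; n++ }
      END { m = s/n; printf "average %.3f stddev %.3f\n", m, sqrt(q/n - m*m) }' results-file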
Here's the data, values and facts:
kernel run as average stddev
====== ====== ======= ======
2.6.22-rc4-ck1 5 2 34 0
2.6.22-rc4-ck1 5 2 22 0
2.6.22-rc4-ck1 5 2 24.6 0.219
2.6.22-rc4-ck1 5 2 31.4 0.219
2.6.22-rc4-ck1 5 2 40 0
2.6.22-rc4-cfs-v16 5 2 36 0
2.6.22-rc4-cfs-v16 5 2 30 0
2.6.22-rc4-cfs-v16 5 2 27.6 0.219
2.6.22-rc4-cfs-v16 5 2 29.6 0.219
2.6.22-rc4-cfs-v16 5 2 42 0
2.6.22-rc4-cfs-v16 300 300 126.427 0.289
2.6.22-rc4-cfs-v16 300 300 125.35 0.275
2.6.22-rc4-cfs-v16 300 300 127.797 0.028
2.6.22-rc4-cfs-v16 300 300 125.367 0.028
2.6.22-rc4-cfs-v16 300 300 125.213 0.024
2.6.22-rc4-ck1 300 300 125.413 0.028
2.6.22-rc4-ck1 300 300 125.34 0.027
2.6.22-rc4-ck1 300 300 124.69 0.027
2.6.22-rc4-ck1 300 300 125.093 0.017
2.6.22-rc4-ck1 300 300 125.597 0.028
* "run as" it's the parameters passed to the program massive_intr.
All the files and data can be found on
http://www.debianpt.org/~elmig/pool/kernel/20070611/
Just one note: the first time this test was run I got these values:
- cfs-v16: 44, 23, 19, 16, 42;
- 2.6.21-debian: 29, 25, 22, 16, 32;
- ck1: 37, 37, 37, 37, 37.
The machine was a Sempron64 3.0 GHz.
I know that other people who read lkml also tested the same way; it would be nice if they also posted their data.
--
Com os melhores cumprimentos/Best regards,
Miguel Figueiredo
http://www.DebianPT.org
2007/6/12, Miguel Figueiredo <[email protected]>:
> Hi all,
>
> some results based on massive_intr.c by Satoru, can be found on
> http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
Hi,
Thanks for this reminder. I am going to provide similar results if they are wanted and useful for any developer.
However, I took massive_intr.c for a slightly different testing approach. The sole focus was the responsiveness (usable responsiveness) of the desktop from the point of view of a user.
I used massive_intr.c to bring the system to a state of lacking usability (an overloaded system).
The setup was as follows: mergedfb with 2 monitors.
The left desktop was displaying music video playback using kaffeine/xine at 100% size.
The right desktop had Firefox with 4 tabs open: 2 were text, 2 were graphical.
I ran massive_intr.c for 60 secs with increasing nproc (10, 20, 30, 40, 50, 60), waiting for effects.
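In shell terms this was roughly the following (the pause between runs is only illustrative; massive_intr takes <nproc> <runtime in seconds>):
$ for n in 10 20 30 40 50 60; do ./massive_intr $n 60; sleep 10; done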
Below is a small table of the results.
2.6.21.1-cfs-v16
nproc , usability result
10 , serious frame drops, Firefox hardly recognizes clicked links, but still usable
20 - 30 , usability loss (somehow under cfs Firefox never finished user requests like displaying web pages or opening new pages; no feedback anymore, sudden changes on the desktop)
40 , sound skipping
2.6.21.1-ck2
nproc , usability result
10 - 20 , fine
30 - 50 , video frame drops, stair effect in playback
60 , unusable delay in responding to user actions (clicking links, switching tabs)
70 , sound skipping
I know that this is still highly subjective, but I tried to describe the test procedure to make it as repeatable and simple as possible. Furthermore, I tried to scale the user experience into numbers. I won't even attempt to interpret those results, but from a user's point of view the answer is crystal clear: it still remains -ck. I performed this test 3 times and the results are clearly the same.
As a side note: CK/SD seemed to be far more scalable in general. I started designing a somewhat more complicated test scenario involving more applications and opening new applications within a 2-minute timeframe. I did this using -ck.
Once I booted into -cfs-v16 I realized that this test case was useless, because the test simply could not be performed: -cfs-v16 did not scale nearly as well as -ck did.
I'll keep testing both schedulers.
Input, questions and feedback regarding the testing method are highly appreciated.
The output of massive_intr can be found here:
http://www.yoper.com/scheduler-test/
kind regards
--
Tobias Gerschner
Member of Board of Yoper Linux Ltd. NZ
Knowing is not enough; we must apply. Willing is not enough; we must do.
* Tobias Gerschner <[email protected]> wrote:
> I did run massive_intr.c for 60 secs with increasing nproc (
> 10,20,30,40,50,60) waiting for effects.
>
> Below a small table of the results
>
> 2.6.21.1-cfs-v16
>
> nproc , usability result
>
> 10 , serious frame drops , Firefox hardly recognizes clicked links,
> but still usable
> 20 - 30, usability loss ( somehow under cfs firefox never finished
> user requests like displaying web pages or opening new pages , no
> feedback anymore, sudden changes on the desktop )
ouch! The expected load-testing result under CFS should be something
like this:
http://bhhdoa.org.au/pipermail/ck/2007-June/007817.html
could you send me the output of /proc/sched_debug? (while say a
"massive_intr 20" is running?)
Roughly what hardware do you have? (could you send me your lspci -v
output and dmesg output?)
Ingo
2007/6/12, Ingo Molnar <[email protected]>:
>
> * Tobias Gerschner <[email protected]> wrote:
>
> > I did run massive_intr.c for 60 secs with increasing nproc (
> > 10,20,30,40,50,60) waiting for effects.
> >
> > Below a small table of the results
> >
> > 2.6.21.1-cfs-v16
> >
> > nproc , usability result
> >
> > 10 , serious frame drops , Firefox hardly recognizes clicked links,
> > but still usable
> > 20 - 30, usability loss ( somehow under cfs firefox never finished
> > user requests like displaying web pages or opening new pages , no
> > feedback anymore, sudden changes on the desktop )
>
> ouch! The expected load-testing result under CFS should be something
> like this:
>
> http://bhhdoa.org.au/pipermail/ck/2007-June/007817.html
>
> could you send me the output of /proc/sched_debug? (while say a
> "massive_intr 20" is running?)
>
> Roughly what hardware do you have? (could you send me your lspci -v
> output and dmesg output?)
>
> Ingo
>
Hi,
it's a Peacock Freeliner XP II, a close to 5-year-old laptop with an Athlon XP 2600+ and 1 GB of RAM / no swap enabled.
The other information will be sent as soon as I am back at work.
regards
--
Tobias Gerschner
Member of Board of Yoper Linux Ltd. NZ
Knowing is not enough; we must apply. Willing is not enough; we must do.
* Tobias Gerschner <[email protected]> wrote:
> it's a peacock freeliner xp II. Close to 5 year old Laptop with an
> Athlon XP 2600+ using 1 GB of RAM / no swap enabled.
>
> The other information will be sent as soon as I am back at work .
thanks! Here's another thing that would be worth testing:
could you pin the CPU's frequency to the highest setting (via whatever method you choose, for example by selecting the 'performance' cpufreq governor)? Plus, could you do a test with the following additional kernel boot parameter: idle=poll. [Both changes would exclude TSC-related artifacts.] (This would be for testing only, to narrow down the regression.)
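One common way to do the first, assuming the cpufreq sysfs interface is available (run as root):
# select the 'performance' governor on every CPU so the frequency stays pinned at maximum
$ for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > $g; done
idle=poll then simply gets appended to the kernel line in the bootloader configuration.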
Ingo
* Ingo Molnar <[email protected]> wrote:
> * Tobias Gerschner <[email protected]> wrote:
>
> > I did run massive_intr.c for 60 secs with increasing nproc (
> > 10,20,30,40,50,60) waiting for effects.
> >
> > Below a small table of the results
> >
> > 2.6.21.1-cfs-v16
> >
> > nproc , usability result
> >
> > 10 , serious frame drops , Firefox hardly recognizes clicked links,
> > but still usable
> > 20 - 30, usability loss ( somehow under cfs firefox never finished
> > user requests like displaying web pages or opening new pages , no
> > feedback anymore, sudden changes on the desktop )
>
> ouch! The expected load-testing result under CFS should be something
> like this:
>
> http://bhhdoa.org.au/pipermail/ck/2007-June/007817.html
i have just tried the same workload with cfs and with sd048 from -ck, and cannot reproduce this. To make sure it's not some other change in -ck, could you try the pure SD patches on top of 2.6.21.1 too:
http://www.kernel.org/pub/linux/kernel/people/ck/patches/2.6/2.6.21/2.6.21-ck2/patches/2.6.21-sd-0.48.patch
http://www.kernel.org/pub/linux/kernel/people/ck/patches/2.6/2.6.21/2.6.21-ck2/patches/sched-sd-0.48-interactive_tunable.patch
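(they should apply with the usual -p1; assuming an unpacked vanilla 2.6.21.1 tree, something like:
$ cd linux-2.6.21.1
$ patch -p1 < 2.6.21-sd-0.48.patch
$ patch -p1 < sched-sd-0.48-interactive_tunable.patch
)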
Ingo
On Tuesday 12 June 2007 18:57, Ingo Molnar wrote:
> * Ingo Molnar <[email protected]> wrote:
> > * Tobias Gerschner <[email protected]> wrote:
> > > I did run massive_intr.c for 60 secs with increasing nproc (
> > > 10,20,30,40,50,60) waiting for effects.
> > >
> > > Below a small table of the results
Nice results. Thanks for taking the time to post them!
> > >
> > > 2.6.21.1-cfs-v16
> > >
> > > nproc , usability result
> > >
> > > 10 , serious frame drops , Firefox hardly recognizes clicked links,
> > > but still usable
> > > 20 - 30, usability loss ( somehow under cfs firefox never finished
> > > user requests like displaying web pages or opening new pages , no
> > > feedback anymore, sudden changes on the desktop )
> >
> > ouch! The expected load-testing result under CFS should be something
> > like this:
> >
> > http://bhhdoa.org.au/pipermail/ck/2007-June/007817.html
>
> i have just tried the same workload with cfs and with sd048 from -ck,
> and cannot reproduce this. To make sure it's not some other change in
> -ck, could you try the pure SD patches ontop of 2.6.21.1 too:
I'm pleased you think the rest of my patches may help there, but only the SD patches affect scheduling, unless you set a different scheduling policy or there is a VM issue.
List:
ck2-version.patch
2.6.21-sd-0.48.patch
sched-sd-0.48-interactive_tunable.patch
sched-range.patch
sched-iso-5.4.patch
track_mutexes-1.patch
sched-idleprio-2.3.patch
sched-limit_policy_changes.patch
sched-ck-add-above-background-load-function.patch
cfq-ioprio_inherit_rt_class.patch
cfq-iso_idleprio_ionice.patch
mm-swap_prefetch-35.patch
mm-convert_swappiness_to_mapped.patch
mm-lots_watermark.diff
mm-kswapd_inherit_prio-1.patch
mm-prio_dependant_scan-2.patch
mm-background_scan-2.patch
mm-filesize_dependant_lru_cache_add.patch
mm-idleprio_prio.patch
kconfig-expose_vmsplit_option.patch
hz-default_1000.patch
hz-no_default_250.patch
hz-raise_max-2.patch
ck-desktop-tune.patch
mm-swap-prefetch-35-38.patch
apart from ck-desktop-tune.patch, which in total is this:
-int rr_interval __read_mostly = 8;
+int rr_interval __read_mostly = 6;
which is already a tunable that's part of SD.
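(the same change can also be made at runtime without the patch, assuming the sysctl is exposed as /proc/sys/kernel/rr_interval:
$ echo 6 > /proc/sys/kernel/rr_interval   # as root
)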
So unless there's a VM issue (which does not appear to be the case) I can't see how any of these will change Tobias' extensive testing results.
Thanks.
--
-ck
* Con Kolivas <[email protected]> wrote:
> So unless there's a vm issue (which does not appear to be the case) I
> can't see how any of these will change Tobias' extensive testing
> results.
yep - i've retested with -ck2 and cannot reproduce his results. So i'm waiting for his feedback to see why this workload is behaving like that on his box, and not more like what other testers have found:
http://bhhdoa.org.au/pipermail/ck/2007-June/007817.html
in any case, we'll figure this out :-)
Ingo
* Tobias Gerschner <[email protected]> wrote:
> The output of massive_intr can be found here :
> http://www.yoper.com/scheduler-test/
here's the spread of the massive_intr results (the average 'jitter' of the second column of the results - lower values indicate more stable / more fair massive_intr results, in percentage):
CFS SD
----------------------
10: 0.02 0.55
20: 0.21 0.78
30: 0.26 0.95
40: 0.27 1.46
50: 0.37 1.24
60: 0.37 0.92
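(for reference, one plausible way to compute such a percentage spread from each results file, assuming the per-process counts are in the second column; the file name is only an example:
$ original-awk '{ s += $2; q += $2*$2; n++ }
      END { m = s/n; printf "%.2f\n", 100 * sqrt(q/n - m*m) / m }' cfs-nproc10.txt
)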
but of course i'm primarily interested in the observed difference in interactivity :-)
Ingo
On Tuesday 12 June 2007 Miguel Figueiredo wrote:
> Hi all,
>
> some results based on massive_intr.c by Satoru, can be found on
> http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
Hi Miguel, Ingo, Con!
I have been a week without internet access. I have been testing 2.6.21.3 +
sws2 2.2.10 and cfs-v15 and ck2 on both my T42 and my Amarok machine T23
[1].
From subjective feedback I can't tell a difference anymore. But I usually
do not play 3D games or use Beryl / Compiz (except for showing it off).
I have not yet tested with the NEED_RESCHED CFS workaround disabled, as Ingo asked me in private mail. This is a work-around to fix the audio skipping problems on my Amarok machine. I will compile a CFS v16 and test with this work-around disabled.
I had one issue with CFS v15 at least, but it's not easy to reproduce. I usually test Amarok playback while a kernel package make (kernel compile) is running, opening a lot of KDE apps (Konqueror mainly) and moving the Amarok window like mad. Afterwards I close the windows of those KDE apps again. For Konqueror I can use the menu entry "Close all instances", since I have 20+ of them open. When I do this, all windows will be closed and I occasionally get some audio skips there. AFAIR it didn't happen with SD in my tests with it.
One issue on SD I had was some audio skips *directly* after resume (with
suspend2). I did not see these with CFS-v15.
Both are issues that are not easy to track down or test reproducibly for me.
I am willing to run other tests on my Amarok machine. Since I can't tell a subjective difference anymore, this likely involves running some benchmarks. The test would be whether sound playback is 100% OK while running the benchmark.
To make it comparative it would be nice if a certain test procedure were done on several machines by several people. What would be good test scenarios?
I am interested in desktop loads, and would like stuff like this:
1) Generating high load on the X server.
2) Starting loads of applications and stopping them again quickly. Maybe it's possible to simulate my manually starting KDE apps by clicking around in the start bar wildly and stopping them again (to track down this CFS issue); a rough shell sketch follows below this list.
3) Starting one or more computationally expensive tasks like kernel compiles.
4) Putting load onto the VM layer / block layer. I am not sure about this. We are testing CPU schedulers, not IO schedulers, but different VM scenarios are still important IMHO.
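A purely illustrative sketch of 2), using xterm as a stand-in for the KDE apps and with arbitrary counts and delays:
$ pids=""
$ for i in $(seq 1 10); do xterm -e sleep 60 & pids="$pids $!"; sleep 1; done
$ kill $pids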
These are especially interesting IMHO when combined. I tried to simulate this with my manual point-and-click testing (kernel compile + starting apps + moving the Amarok window like mad).
My main concerns during those tests are:
1) Is audio playback 100% okay? Instead of only hearing whether there are glitches, is there possibly a way to *measure* them? This could be extended to video playback, though I would expect glitches at a certain load... (well, at some point eventually audio playback will stop too)
2) Is the mouse pointer always responsive? (If there is something I do not like, it is a frozen mouse pointer, as I had a lot with earlier CFS versions prior to the above workaround.) Maybe this can be measured too? Is that what the massive_intr testing does?
3) How is interactivity? How long does it take until an application reacts to a mouse click... but then how on earth do you measure this?
I guess there are other important criteria to look at - especially the behavior on a server, for example: how does the scheduler behave on a webserver with many clients issuing differently sized requests?
Any suggestions?
Now what would be a testing procedure that is affordable for testers and as relevant to scheduler testing as it could be? My problem with using benchmarks is that I first have to spend lots of time figuring out how they work and how to produce a reasonable setup. If someone - preferably Linux kernel scheduler experts - invented a testing procedure I could follow step by step and then post the results, it would be easier for me to collect some useful information for you kernel developers.
Without any more extensive testing, and aside from the above-mentioned difficult-to-reproduce artifacts, in my subjective perception SD (as in 2.6.21-ck2) and CFS-v15 are on par now. So based on that, other criteria - code size, design criteria, maintainership, responsiveness to bug reports, further benchmarks and real-world usage tests - would need to be used to decide which scheduler should go into the kernel.
As I have read several times, SD is smaller in terms of code size. OTOH it seems to me that Ingo Molnar has more time and ability to support maintainership, but still, from what I have seen, Con did his best here too and fixed all known issues with SD.
[1] http://martin-steigerwald.de/amarok-machine/
Regards,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On behalf of
> Miguel Figueiredo
> Sent: 11 June 2007 20:30
>
> Hi all,
>
> some results based on massive_intr.c by Satoru, can be found
> on http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
>
>
> I know that other people, who read lkml, also tested the same
> way, it would be nice if they also post their data.
>
For the pleasure of comparing CPU schedulers, both CFS v16 and full CK2 patched pre-built kernels for Fedora 7, using the latest build 3194, are now available at http://linux-dev.qc.ec.gc.ca
There is also a not yet fully tuned YUM repository configuration file available at http://linux-dev.qc.ec.gc.ca/kernel/fedora/alt-sched.repo
That said, I suggest you download the RPMs directly from the web page and install both manually using rpm -ivh --force.
Have fun.
- vin
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On behalf of
> Fortier,Vincent [Montreal]
> Sent: 12 June 2007 21:36
> To: Miguel Figueiredo; linux kernel mailing list; [email protected]
> Cc: Con Kolivas; Ingo Molnar
> Subject: RE: call for more SD versus CFS comparisons (was: Re:
> [ck] Mainline plans)
>
> > -----Original Message-----
> > From: [email protected]
> > [mailto:[email protected]] On behalf of Miguel
> > Figueiredo Sent: 11 June 2007 20:30
> >
> > Hi all,
> >
> > some results based on massive_intr.c by Satoru, can be found on
> > http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
> >
> >
> > I know that other people, who read lkml, also tested the
> same way, it
> > would be nice if they also post their data.
> >
>
> For the pleasure of comparing CPU schedulers, both CFS v16 &
> full CK2 patched pre-built kernels for Fedora 7 using latest
> build 3194 are now available at http://linux-dev.qc.ec.gc.ca
>
> There is also a not yet fully tuned YUM repository
> configuration file available at
> http://linux-dev.qc.ec.gc.ca/kernel/fedora/alt-sched.repo
> Although I suggest you download the rpm's directly from the
> web page & install both manually using rpm -ivh --force
>
> Have fun.
>
Also, now that I think of it, here are my results:
Test:
./massive_intr 20 6000
Hardware:
- Athlon 64 4200+
- 2GB ram
- nVidia 6600GT using latest Beta driver NVIDIA-Linux-x86_64-100.14.06-pkg2.run
- SATA drive
- On-board audio (00:04.0 Multimedia audio controller: nVidia Corporation CK804 AC'97 Audio Controller (rev a2))
Kernels:
CFS v16 2.6.21 FC7 build 3194
CK2 2.6.21 FC7 build 3194
Here is what the top output looked like for both cases:
top - 13:36:22 up 38 min, 4 users, load average: 20.41, 13.43, 7.49
Tasks: 172 total, 21 running, 150 sleeping, 0 stopped, 1 zombie
Cpu0 : 99.7%us, 0.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 94.7%us, 4.6%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
Mem: 2058860k total, 1217132k used, 841728k free, 56456k buffers
Swap: 1959888k total, 0k used, 1959888k free, 781636k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6099 megalout 20 0 7860 224 136 R 10 0.0 0:26.12 massive_intr
6102 megalout 20 0 7860 224 136 R 10 0.0 0:26.27 massive_intr
6104 megalout 20 0 7860 224 136 R 10 0.0 0:26.51 massive_intr
6105 megalout 20 0 7860 224 136 R 10 0.0 0:26.32 massive_intr
6107 megalout 20 0 7860 224 136 R 10 0.0 0:26.62 massive_intr
6112 megalout 20 0 7860 224 136 R 10 0.0 0:26.32 massive_intr
6116 megalout 20 0 7860 224 136 R 10 0.0 0:26.26 massive_intr
6109 megalout 20 0 7860 224 136 R 10 0.0 0:26.32 massive_intr
6115 megalout 20 0 7860 224 136 R 10 0.0 0:26.50 massive_intr
6118 megalout 20 0 7860 224 136 R 9 0.0 0:26.37 massive_intr
6100 megalout 20 0 7860 224 136 R 9 0.0 0:26.51 massive_intr
6103 megalout 20 0 7860 224 136 R 9 0.0 0:26.24 massive_intr
6106 megalout 20 0 7860 224 136 R 9 0.0 0:26.32 massive_intr
6108 megalout 20 0 7860 224 136 R 9 0.0 0:26.32 massive_intr
6110 megalout 20 0 7860 224 136 R 9 0.0 0:26.54 massive_intr
6111 megalout 20 0 7860 224 136 R 9 0.0 0:26.37 massive_intr
6113 megalout 20 0 7860 224 136 R 9 0.0 0:26.56 massive_intr
6114 megalout 20 0 7860 224 136 R 9 0.0 0:26.30 massive_intr
6117 megalout 20 0 7860 224 136 R 9 0.0 0:26.34 massive_intr
6101 megalout 20 0 7860 224 136 R 9 0.0 0:26.31 massive_intr
4681 root 20 0 196m 67m 12m S 8 3.4 4:06.00 X
6036 megalout 20 0 694m 53m 32m S 2 2.6 0:07.82 amarokapp
4904 megalout 20 0 271m 9.9m 6948 S 1 0.5 0:18.16 gkrellm
5054 megalout 20 0 160m 45m 20m S 1 2.2 0:10.90 firefox-bin
5201 megalout 20 0 160m 9304 5480 S 1 0.5 0:01.42 emerald
5210 megalout 20 0 240m 12m 6284 S 1 0.6 0:33.29 beryl
CFS v16:
---------
Beryl interactivity way too unresponsive:
- window decoration highlight taking around 5-10 secs when switching window focus
- window movement either impossible, or the animation laggy enough to not be seen at all
- cube rotation still possible using mouse scroll, although really, really, really laggy
Amarok MP3 music:
- No audio skips at all. Playing really well!
CK2 patchset:
-------------
Beryl interactivity almost totally unresponsive... In fact mouse movement was near-impossible to control.
- window movement totally impossible
- cube rotation still possible using mouse scroll, although there was no animation at all
Amarok MP3 music:
- Same as CFS, no skips at all. Playing really well!
I'll quote Martin on the ck mailing list:
> According to Ingo most of the interactivity issues should be fixed by now.
> Still I do not see how that translates to "CFS was generally better".
Well, here you go:
Having followed RSDL since version 0.28, I'd say Con did a really great job, but for me CFS v16 now, on my hardware, offers better interactivity using the Beryl window manager.
- vin
On 13-06-2007 03:54, Fortier,Vincent [Montreal] wrote:
...
> Kernels:
> CFS v16 2.6.21 FC7 build 3194
> CK2 2.6.21 FC7 build 3194
...
> CFS v16:
> ---------
> beryl interractivity way too unresponsive..
> - window decoration highlight taking around 5-10 secs to switch between windows focus
> - window movement either impossible or animation laggy enough to not being seen at all.
> - Cube rotation still possible using mouse scroll although really really really laggy
>
> Amarok MP3 music:
> - No audio skips at all. Playing really well!
>
>
> CK2 patchset:
> -------------
> Beryl interractivity almost totally unresponsive... In fact mouse movement was near-impossible to control.
> - window movement totally impossible
> - Cube rotation still possible using mouse scroll although there was no animation at all
>
> Amarok MP3 music:
> - Same as CFS, no skips at all. Playing really well!
...
Very nice testing! But maybe it could be even more interesting after adding the "vanilla" kernel here too, i.e. 2.6.21 FC7 build 3194 without CFS/CK2?
Regards,
Jarek P.
Hi,
>> I did run massive_intr.c for 60 secs with increasing nproc (
>> 10,20,30,40,50,60) waiting for effects.
>>
>> Below a small table of the results
>>
>> 2.6.21.1-cfs-v16
>>
>> nproc , usability result
>>
>> 10 , serious frame drops , Firefox hardly recognizes clicked links,
>> but still usable
>> 20 - 30, usability loss ( somehow under cfs firefox never finished
>> user requests like displaying web pages or opening new pages , no
>> feedback anymore, sudden changes on the desktop )
>
>ouch! The expected load-testing result under CFS should be something
>like this:
>
> http://bhhdoa.org.au/pipermail/ck/2007-June/007817.html
>
>could you send me the output of /proc/sched_debug? (while say a
>"massive_intr 20" is running?)
>
>Roughly what hardware do you have? (could you send me your lspci -v
>output and dmesg output?)
>
> Ingo
>
Hi,
After some serious but fun testing on my machine for hours, Ingo got CFS to behave on par with SD. It was my understanding that the changes were mainly adjusting tunables rather than changing code. But that is not for me to explain.
From my point of view it was impressive to see the determination Ingo had to make sure he delivers the best he can :). And I learned a lot about how to provide usable / readable test results.
So thumbs up for CFS and SD. All IMO: the current advantage of SD over CFS is that SD does not need tuning. So there is certainly room for improvement for CFS.
Over the weekend I'll prepare some test cases and documentation for them, to test CFS and SD more specifically so that a broader public can provide the same sort of (comparable) data. The test we used only covered one usage case. This is certainly not enough to measure the performance of such a key component.
I am looking forward to the next version of CFS and I will certainly test it thoroughly.
Kind regards to all responsiveness junkies
--
Tobias Gerschner
Member of Board of Yoper Linux Ltd. NZ
Knowing is not enough; we must apply. Willing is not enough; we must do.
Martin Steigerwald wrote:
> On Tuesday 12 June 2007 Miguel Figueiredo wrote:
>> Hi all,
>>
>> some results based on massive_intr.c by Satoru, can be found on
>> http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
>
> Hi Miquel, Ingo, Con!
>
[...]
> Any suggestions?
I read somewhere on the list that X itself does lots of hocus pocus that affects the behavior of programs running inside X (I even read about X's own scheduling - can someone confirm/deny it? - and about evil behavior in drivers).
If we are testing a fair/responsive scheduler, isn't it better to test it outside X?
IMHO, X itself is too complex and may obscure our tests of fairness/interactivity.
Does anyone know any good tests for interactivity?
[...]
--
Com os melhores cumprimentos/Best regards,
Miguel Figueiredo
http://www.DebianPT.org