the -J2 O(1) scheduler patch is available:
http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.3-pre1-J2.patch
http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-J2.patch
Changes since -J0:
- Ed Tomlinson: optimize wake_up_forked_process() further.
- the -J0 patch had a broken version of the task migration code - it did
not include all the necessary changes for task migration to work at
all. This broke boxes with 3 or more CPUs: the setting of the
task-migration IPI vector was missing. -J2 test-booted on an 8-way
system just fine.
- micro-optimize wakeup: high-frequency wakeups do not call the average
calculation code.
- finishing touches on interactiveness:
1) default-niceness processes can only reach 90% of the full priority
range. This protects normal processes from nice +10 CPU hogs, and
protects nice -20 interactive tasks (audio playback, emergency
shells, etc.) from normal processes.
2) updates to priority inheritance for forked children: child processes
get 80% of the parent's priority. [It was 66% in -J0.] The difference
is visible under high compilation load: xterms under GNOME/KDE start
up much faster, because such startups create two new processes, so
the second process gets the penalty twice. With 80%, the penalty is
just enough for the shell to stay out of the 'CPU-bound hell' of
compilation jobs.
3) the 0...39 'user priority' range is now split up into three areas:
A) 'unconditionally interactive tasks' in the lower 25% range.
B) 'CPU-bound tasks' in the high 25% range.
C) 'conditionally interactive tasks' in the middle 50% range.
tasks in category C) are interactive if they are 10% below their
default priority (i.e. if they sleep more than they run).
the new interactiveness changes made my systems even smoother than they
were under 2.5.3-pre1. None of the interactiveness logic changes adds
overhead to the fast path. (The changes are either compile-time, or live
in some slow path.) Each of the above three changes was measured to
improve interactivity in compilation and other workloads.
Comments, reports and suggestions welcome. Testing of interactiveness
would be especially useful - comparing the -J2 patch against other
kernels (stock or patched, 2.4 or 2.5, older O(1) scheduler patches,
etc.).
Ingo
On Fri, Jan 18, 2002 at 07:18:10PM +0100, Ingo Molnar wrote:
>
> the -J2 O(1) scheduler patch is available:
>
> http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.3-pre1-J2.patch
> http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-J2.patch
<snips>
Just a few comments on this. Compiled on 2.4.17 with gcc-2.95.4, with
only the O(1)-J2 patch applied.
Regarding the earlier thread about KDE, image thumbnails and xmms: yes,
the machine is now usable whilst the images are being read and rendered,
and xmms doesn't falter, but about 2 seconds after KDE finishes, xmms
falters twice (briefly - it catches up very quickly).
Also, when X starts up, some of my startup settings seem to load too
quickly - I use uwm, and a couple of xterms are started without the
terminal options being set (it seems): instead of getting "user@host:~/"
on the left, I get each character on a separate line. Colours are also
not fully set - mutt starts, and only then does the xterm bgcolour get
applied.
Neither of these have ever happened without the O(1) patch. Please let
me know if I can be of any further assistance.
Matthew
Ingo,
Tested -J2 with a SuSE 6.4 Linux 2.4.17 kernel on my "big" Linux box - a
Tyan Tiger MP, 2x Athlon MP 1600+ with 1.25 GB RAM and a Mylex
AcceleRAID 170 doing RAID 5. The default parameters for timeslice etc.
in the -J2 sched.h were used.
Excellent interactivity maintained with make -j bzImage issued - load
average was about 100. Mozilla 0.9.7 was very usable while this was
going on.
Kernel compile times:

With no other significant processes (CPU-wise) running:

                       O(1) J2        default
  make bzImage         3 min 45 sec   3 min 36 sec
  make -j2 bzImage     1 min 59 sec   1 min 57 sec
  make -j16 bzImage    2 min 02 sec   2 min 01 sec
  make -j bzImage      2 min 13 sec   2 min 09 sec
  peak -j load         106            210

With 2 setiathome processes running at nice 19:

                       O(1) J2        default
  make bzImage         4 min 10 sec   3 min 42 sec
  make -j2 bzImage     2 min 56 sec   2 min 17 sec
  make -j16 bzImage    2 min 22 sec   2 min 03 sec
  make -j bzImage      2 min 24 sec   2 min 11 sec
  peak -j load         100            206
A few things come to my attention with these numbers. It seems as if
heavily niced processes are still getting too much CPU time. Also, there
may be work still needed on the parent-child stuff. Watching the
graphical output of xosview and KDE System Guard during make -j2 with
the seti processes running showed that CPU 0 was only being used by the
compile about 50% of the time on average - sometimes it was totally used
by the niced SETI process, and at other times totally by the compile.
The fraction of CPU 0 used by the non-niced compile processes would
bounce wildly between 0 and 100% (with SETI always making up the
difference). CPU 1 would always remain fully loaded with the compile.
Dividing the two corresponding times for the make -j2 case
((1+59/60)/(2+56/60)) gives a CPU usage of 0.68 of the available total
for compilation.
Using this 0.68 as a proportion, the SETI process(es) were getting 0.32,
or 32%, of the total processor time on the machine. Assigning 0.5 to
CPU 1 (which was fully loaded) and subtracting from 0.68 gives 0.18.
Multiplying 0.18 by two to get the fraction on CPU 0 gives 0.36, or 36%,
which means CPU 0 was spending 36% of its time on the compile and 64% on
SETI. This is in line with the visual behavior seen watching xosview and
KDE System Guard.
Do you think this is correct behavior, or is more tuning or adjustment
of the balancing algorithm in order? Maybe the same issue that is
causing this is also behind the peak make -j load of only about 100 with
O(1), versus about 200 with the normal scheduler.
The interactivity is absolutely stunning. The GUI remains extremely
usable even during the make -j runs, which bring interactivity to its
knees with the normal scheduler.
Absolutely *no* stability issues whatsoever. Rock-solid.
Hope this provides some useful info.
Jim Owens
On Sat, Jan 19, 2002 at 10:19:29PM +0000, Matthew Sackman wrote:
> On Fri, Jan 18, 2002 at 07:18:10PM +0100, Ingo Molnar wrote:
> >
> > the -J2 O(1) scheduler patch is available:
> >
> > http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.3-pre1-J2.patch
> > http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-J2.patch
> <snips>
One other thing that I've noticed, switching virtual workspaces will reliably
cause xmms to stutter. If you switch rapidly then it is exacerbated.
Matthew
On Sun, Jan 20, 2002 at 11:01:04PM +0000, Matthew Sackman wrote:
> > > http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-J2.patch
>
> One other thing that I've noticed, switching virtual workspaces will
> reliably cause xmms to stutter. If you switch rapidly then it is
> exacerbated.
And without sched O(1) it isn't?
This doesn't have to be a scheduler problem - it could be a problem
somewhere else (low latency?).
For me it doesn't skip even under heavy disk (IDE), VM and CPU load
while switching 1280x1024 workspaces really fast and for a long time.
[Athlon 850, Matrox G450, XF4.1, Window Maker]
(I use it with rmap11c + mini-ll + ah-IDE ... = -jl-11-mini + 18pre3)
--
Martin Mačok                  http://underground.cz/
[email protected]    http://Xtrmntr.org/ORBman/
On Mon, Jan 21, 2002 at 01:02:31AM +0100, Martin Mačok wrote:
> On Sun, Jan 20, 2002 at 11:01:04PM +0000, Matthew Sackman wrote:
> > > > http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-J2.patch
> >
> > One other thing that I've noticed, switching virtual workspaces will
> > reliably cause xmms to stutter. If you switch rapidly then it is
> > exacerbated.
>
> And without sched O(1) it isn't ?
>
> This doesn't have to be a scheduler problem - it could be a problem
> somewhere else (low latency?).
>
> For me it doesn't skip even under heavy disk (IDE), VM and CPU load
> while switching 1280x1024 workspaces really fast and for a long time.
> [Athlon 850, Matrox G450, XF4.1, Window Maker]
>
> (I use it with rmap11c + mini-ll + ah-IDE ... = -jl-11-mini + 18pre3)
I've never had it without. The files xmms is playing are on an NFS
mount. Previously I've only used 2.4.x kernels with no patches; the O(1)
J2 is the only patch I've got applied. (PIII 500, 192MB, XF4.1, uwm,
100BaseTX, NFS 3, X running with Xinerama on dual head - any more
details?)
Matthew