I performed the following tests running both 2.4.0-test12-pre7 and
2.2.18-pre26. All kernel builds were done in console mode (no X).
All numbers are seconds required to make bzImage. Times were
obtained using the date command before and after make bzImage in
a script. Each test was performed three times.
  1     2     3    ave.
449   443   440   444     make bzImage for 2.4.0t12p7 running 2.2.18p26
460   458   454   457.3   make bzImage for 2.4.0t12p7 running 2.4.0t12p7
310   310   307   309     make bzImage for 2.2.18p26 running 2.2.18p26
318   319   317   318     make bzImage for 2.2.18p26 running 2.4.0t12p7
2.2.18p26 is shorthand for 2.2.18-pre26.
2.4.0t12p7 is shorthand for 2.4.0-test12-pre7.
2.2.18-pre26 was patched with reiserfs-3.5.28.
2.2.18-pre26 was compiled with gcc 2.91.66 (kgcc).
2.4.0-test12-pre7 was patched with reiserfs-3.6.22.
2.4.0-test12-pre7 was compiled with gcc 2.95.3.
The .config files were unchanged during the tests.
A make clean was performed before each test.
The test machine was not connected to a network during the tests.
Test machine: single-processor P-III (450 MHz), 192MB, IDE disk (ST317221A).
Conclusion: UP 2.2.18 makes kernels 3% faster than UP 2.4.0-test12
using ReiserFS. However, the margin of victory is small enough that a
recount may be necessary.
It would be interesting to see results using ext2fs and results from SMP
machines.
Steven
| 2.2.18-pre26 was compiled with gcc 2.91.66 (kgcc).
| 2.4.0-test12-pre7 was compiled with gcc 2.95.3.
That's your answer right there.
GCC 2.95.3 compiles much slower than kgcc.
Rerun the 2.4.0 with kgcc to be fair. :)
Aaron Tiensivu wrote:
>| 2.2.18-pre26 was compiled with gcc 2.91.66 (kgcc).
>| 2.4.0-test12-pre7 was compiled with gcc 2.95.3.
>
>That's your answer right there.
>GCC 2.95.3 compiles much slower than kgcc.
>
>Rerun the 2.4.0 with kgcc to be fair. :)
Actually, it is fair. There are really two results,
1) 309 sec for 2.2.18p26 vs 318 sec for 2.4.0t12p7 where the
task was building 2.2.18p26 using kgcc.
2) 444 sec for 2.2.18p26 vs 457.3 sec for 2.4.0t12p7 where the
task was building 2.4.0t12p7 using gcc.
In each case, the task and the tools used are the same. The
only difference was the kernel used. In both cases, 2.2.18 won by 3%. It's
comparing apples to apples and oranges to oranges. Granted, 3% isn't
very much, but I would have guessed that 2.4.0 would have been the
winner. It wasn't, at least for this single-processor machine.
Now, if you're saying that 2.4.0-test12 will get the job done faster when
compiled using kgcc, that's something else. I'll try that out to see if it
makes a difference.
Steven
Steven Cole <[email protected]> writes:
[...]
> In each case, the task and the tools used are the same. The only
> difference was the kernel used. In both cases, 2.2.18 won by 3%.
> It's comparing apples to apples and oranges to oranges. Granted, 3%
> isn't very much, but I would have guessed that 2.4.0 would have been
> the winner. It wasn't, at least for this single-processor machine.
Two points: (1) gcc 2.95 makes slightly slower code than egcs-1.1
(according to benchmarks on gcc.gnu.org), so compile the 2.4 kernel with
egcs for a fairer comparison. (2) The new VM was a performance
regression for throughput.
I think that it is important that the extent of the indisputable
performance decreases be quantified and traced. For me there was a
subjective performance peak around 2.3.48 IIRC, though it might have
been before. Andrea Arcangeli has a VM patch that seems to
help in some cases.
It would be interesting to run a series of (automated) tests on a lot
of kernel versions, and to see how far performance is behind FreeBSD
(or even NetBSD).
[...]
--
http://www.penguinpowered.com/~vii
On 11 Dec 2000, John Fremlin wrote:
> Two points: [snipped]
Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
for the VM doesn't make any sense since it doesn't really exercise
the VM in any way...
If you want to measure, or even just bitch about, the VM, you should
at least quote results from something that uses the VM ;)
regards,
Rik
--
Hollywood goes for world dumbination,
Trailer at 11.
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/
> Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
> for the VM doesn't make any sense since it doesn't really exercise
> the VM in any way...
It's an interesting demo that 2.4 has some performance problems, since 2.2
is slower than 2.0, although nowadays not by much.
Alan Cox wrote:
>
> > Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
> > for the VM doesn't make any sense since it doesn't really exercise
> > the VM in any way...
>
> It's an interesting demo that 2.4 has some performance problems, since 2.2
> is slower than 2.0, although nowadays not by much.
Speaking of performance - could someone explain to me why
md5-checksumming a 10GB partition takes my whole 128MB of memory and
permanently swaps every application out of memory, so that the computer
is very slow during this process?
Is there somewhere in /proc I can set that I do not wish to have 100MB
of disk buffers?
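Something like the following is what I have in mind (assuming the
/proc/sys/vm/buffermem knob documented for 2.2-era kernels, which takes
min/borrow/max percentages of memory; the values here are illustrative,
and whether a given kernel actually enforces the max is another question):
	# ask the kernel to cap buffer memory at 30% of RAM (illustrative)
	echo "2 10 30" > /proc/sys/vm/buffermem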
--
Zdenek Kabelac http://i.am/kabi/ [email protected] {debian.org; fi.muni.cz}
In article <[email protected]> you wrote:
>> Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
>> for the VM doesn't make any sense since it doesn't really exercise
>> the VM in any way...
> It's an interesting demo that 2.4 has some performance problems, since 2.2
> is slower than 2.0, although nowadays not by much.
Seems to depend on the hardware used. On my test box, 2.4 is faster by
0.3s....
Greetings,
Arjan van de Ven
Machine:
AMD Duron 700MHz with 128MB of 133MHz RAM
2 IBM 15GB ATA100 disks in RAID0
tested kernels:
2.2.18 + raid patch + latest IDE patch
2.4.0-test12pre7
compiling 2.2.18 with gcc 2.95.2
                         1st run    2nd        3rd
kernel 2.2.18/raid/ide   3:28.909   3:28.819   3:28.840
kernel 2.4.0test12pre7   3:28.520   3:28.534   3:28.546
On Mon, 11 Dec 2000, Alan Cox wrote:
> > Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
> > for the VM doesn't make any sense since it doesn't really exercise
> > the VM in any way...
>
> It's an interesting demo that 2.4 has some performance problems,
> since 2.2 is slower than 2.0, although nowadays not by much.
Indeed, but blaming the VM subsystem for something which hardly
touches the VM is a tad strange ...
Rik
--
Hollywood goes for world dumbination,
Trailer at 11.
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/
On Mon, 11 Dec 2000, Alan Cox wrote:
> > Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
> > for the VM doesn't make any sense since it doesn't really exercise
> > the VM in any way...
>
> It's an interesting demo that 2.4 has some performance problems, since 2.2
> is slower than 2.0, although nowadays not by much.
How much of that is due to the fact that the 2.4.0 scheduler interrupts
processes more often than 2.2.x? Is the better interactivity worth the
slight drop in performance?
Gerhard
--
Gerhard Mack
[email protected]
<>< As a computer I find your faith in technology amusing.
On Mon, Dec 11, 2000 at 04:38:11PM -0200, Rik van Riel wrote:
> On 11 Dec 2000, John Fremlin wrote:
>
> > Two points: [snipped]
>
>
> Doing a 'make bzImage' is NOT VM-intensive. Using this as a test
> for the VM doesn't make any sense since it doesn't really exercise
> the VM in any way...
Also, you should look at the results of some hard disk benchmarks,
because it's possible to get different values for 2.4 and 2.2, which is a
major point for a fair test (maybe 2.4 and 2.2 have different default
settings and so on; try hdparm -tT /dev/hd? if you have IDE disks).
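For example (assuming the first IDE disk is /dev/hda; repeat the run,
since single measurements vary):
	# sketch: repeat the cached/raw read timings for stable numbers
	for i in 1 2 3; do hdparm -tT /dev/hda; done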
--
---[ Gábor Lénárt ]----[ Vivendi Telecom Hungary ]---[ [email protected] ]---
U have 8 bit computer or chip of them and it's unused or to be sold? Call me!
-------[ +36 30 2270823 ]--------> LGB <-------[ Linux/UNIX/8bit 4ever ]-----
> How much of that is due to the fact that the 2.4.0 scheduler interrupts
> processes more often than 2.2.x? Is the better interactivity worth the
> slight drop in performance?
What better interactivity ;)
Aaron Tiensivu wrote:
>Rerun the 2.4.0 with kgcc to be fair. :)
John Fremlin wrote:
>Two points: (1) gcc 2.95 makes slightly slower code than egcs-1.1
>(according to benchmarks on gcc.gnu.org), so compile the 2.4 kernel with
>egcs for a fairer comparison. (2) The new VM was a performance
Ok, several people have said that kgcc makes a slightly
better (faster) kernel than gcc. Here are some more results.
  1     2     3    ave.
453   456   455   454.7   make bzImage for 2.4.0t12p7 running 2.4.0t12p7kgcc
compare this to my previous test using test12-pre7 compiled with gcc:
460   458   454   457.3   make bzImage for 2.4.0t12p7 running 2.4.0t12p7gcc
2.4.0t12p7kgcc is shorthand for 2.4.0-test12-pre7k made with kgcc.
2.4.0t12p7gcc is shorthand for 2.4.0-test12-pre7 made with gcc.
kgcc does indeed make a slightly faster (0.5%) kernel, but I think
we're getting into the pregnant or dimpled chad thing at this point.
To create a kgcc test12-pre7, I modified lines 18 and 29 of the top-level
Makefile to be =kgcc. Of course, I then restored the Makefile to the
original, since I'm not testing how fast gcc vs. kgcc compiles a
bunch of code. I modified EXTRAVERSION to be -test12k so I could
double-check with uname -r to make sure I booted the correct kernel.
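For reference, the edited lines looked roughly like this (assuming lines
18 and 29 of that Makefile are the HOSTCC and CC definitions, as in the
stock 2.4.0-test12-pre7 tree; the originals used gcc):
	HOSTCC       = kgcc
	CC           = $(CROSS_COMPILE)kgcc
	EXTRAVERSION = -test12k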
Kgcc made a somewhat larger kernel than gcc. The same .config file
was used for both kernels.
829034 Dec 7 20:46 vmlinuz-2.4.0-test12-pre7
854863 Dec 11 14:12 vmlinuz-2.4.0-test12-pre7k
I have an SMP (dual P-III 733MHz) machine at work, but it will be
unavailable for testing for a few more days. I suspect that 2.4.0-test12
will do better than 2.2.18 with 2 CPUs. I'll know in a few days.
I performed this test in the first place because building kernels is
something we do frequently and this test is so easy to reproduce. I think
it may be as good a test of real performance as some of the more formal
benchmarks.
Comments anyone?
Steven
On Mon, 11 Dec 2000, Steven Cole wrote:
> I have an SMP (dual P-III 733MHz) machine at work, but it will be
> unavailable for testing for a few more days. I suspect that 2.4.0-test12
> will do better than 2.2.18 with 2 CPUs. I'll know in a few days.
>
> I performed this test in the first place because building kernels is
> something we do frequently and this test is so easy to reproduce. I think
> it may be as good a test of real performance as some of the more formal
> benchmarks.
> Comments anyone?
I think it's better with -j. Do it with -jN where N is small enough
to keep the box away from swap, and then repeat with N large enough to
swap modestly (not too heavily or you're only testing disk MTBF:).
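Concretely, something like this (a sketch; the -j values are purely
illustrative, pick them to suit your RAM):
	# one build that stays out of swap, one that swaps modestly
	make clean && time make -j4 bzImage
	make clean && time make -j32 bzImage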
-Mike
On Mon, 11 Dec 2000, Mike Galbraith wrote:
> On Mon, 11 Dec 2000, Steven Cole wrote:
> > I have an SMP (dual P-III 733MHz) machine at work, but it will be
> > unavailable for testing for a few more days. I suspect that 2.4.0-test12
> > will do better than 2.2.18 with 2 CPUs. I'll know in a few days.
[snip]
>
> I think it's better with -j. Do it with -jN where N is small enough
> to keep the box away from swap, and then repeat with N large enough to
> swap modestly (not too heavily or you're only testing disk MTBF:).
I've always used make -j2 bzImage for my two-processor machine.
I like being able to build kernels in a little over two minutes.
Simple question here, and risking displaying great ignorance:
Does it make sense to use make -jN where N is much greater than the
number of CPUs?
Steven
On Mon, 11 Dec 2000, Steven Cole wrote:
> On Mon, 11 Dec 2000, Mike Galbraith wrote:
> > On Mon, 11 Dec 2000, Steven Cole wrote:
> > > I have an SMP (dual P-III 733MHz) machine at work, but it will be
> > > unavailable for testing for a few more days. I suspect that 2.4.0-test12
> > > will do better than 2.2.18 with 2 CPUs. I'll know in a few days.
> [snip]
> >
> > I think it's better with -j. Do it with -jN where N is small enough
> > to keep the box away from swap, and then repeat with N large enough to
> > swap modestly (not too heavily or you're only testing disk MTBF:).
>
> I've always used make -j2 bzImage for my two processor machine.
> I like being able to build kernels in a little over two minutes.
>
> Simple question here, and risking displaying great ignorance:
> Does it make sense to use make -jN where N is much greater than the
> number of CPUs?
If you're testing VM, definitely yes. Otherwise.. _not_ ;-)
-Mike
On Mon, 11 Dec 2000, Steven Cole wrote:
> I performed this test in the first place because building kernels
> is something we do frequently and this test is so easy to
> reproduce. I think it may be as good a test of real performance as
> some of the more formal benchmarks. Comments anyone?
Just one comment. You cannot use a kernel build to measure
anything other than the subsystems which the kernel build
exercises.
Things you could measure with a kernel build: scheduling (L2
cache efficiency), fork, readahead, cpu speed, framebuffer
speed (in the make dep phase) and maybe hard disk speed.
Things you cannot measure with a kernel build: networking,
swapping (unless you do a very big parallel build, and even
then it's questionable), raw IO speed (the kernel build is
latency sensitive, but doesn't need much throughput), ...
regards,
Rik
--
Hollywood goes for world dumbination,
Trailer at 11.
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/
Steven Cole wrote:
[...]
> Simple question here, and risking displaying great ignorance:
> Does it make sense to use make -jN where N is much greater than the
> number of CPUs?
No, but it makes sense to have N at least one more than the number of
CPUs, if you have the memory. This is because your processes will
occasionally wait for disk I/O, and that time may then be used to
run the "extra" task. But don't overdo it, as you get less disk
cache this way. make -j3 seems to be fastest on my 2-CPU machine
with 128M RAM.
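As a rule of thumb in script form (a sketch; assumes a Linux
/proc/cpuinfo with one "processor" line per CPU):
	# N = number of CPUs + 1
	N=$(($(grep -c '^processor' /proc/cpuinfo) + 1))
	make -j$N bzImage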
Helge Hafting
On Tue, 12 Dec 2000, Rik van Riel wrote:
> On Mon, 11 Dec 2000, Steven Cole wrote:
>
> > I performed this test in the first place because building kernels
> > is something we do frequently and this test is so easy to
> > reproduce. I think it may be as good a test of real performance as
> > some of the more formal benchmarks. Comments anyone?
>
> Just one comment. You cannot use a kernel build to measure
> anything other than the subsystems which the kernel build
> exercises.
One comment back ;-)
Of course.
> Things you could measure with a kernel build: scheduling (L2
> cache efficiency), fork, readahead, cpu speed, framebuffer
> speed (in the make dep phase) and maybe hard disk speed.
Yes, among others. RealHardLife hard disk speed.. kinda sorta.
> Things you cannot measure with a kernel build: networking,
> swapping (unless you do a very big parallel build, and even
> then it's questionable), raw IO speed (the kernel build is
> latency sensitive, but doesn't need much throughput), ...
I believe you are wrong wrt parallel kernel builds as a swap test..
it's not questionable at all. :) Or, if it is, please explain why.
My view:
The kernel build is above and beyond all other considerations a CPU
bound job which has a cacheable component which is not negligible,
and has an I/O component, but not a dominating one. That makes it
an ideal generic test candidate for VM throughput.. in any box not
blessed with unlimited I/O capability.
I think that a parallel kernel build is the perfect basic VM functional
test _because_ of its simple requirements. Limit the I/O to what your
box can deliver (must know before testing), and it's fine.
I agree if you say that it's nothing beyond a basic functionality test.
I test this way because my box doesn't have the hardware to do serious
I/O.. and neither do at least 99.9% of other boxen out there.
What else can you suggest to test basic VM throughput in an I/O-starved
environment? It has to be multi-task, and it has to be CPU-bound to be
meaningful.
-Mike
Helge Hafting wrote:
>Steven Cole wrote:
>[...]
>>Simple question here, and risking displaying great ignorance:
>>Does it make sense to use make -jN where N is much greater than the
>>number of CPUs?
>
>No, but it makes sense to have N at least one more than the number of
>CPUs, if you have the memory. This is because your processes will
>occasionally wait for disk I/O, and that time may then be used to
>run the "extra" task. But don't overdo it, as you get less disk
>cache this way. make -j3 seems to be fastest on my 2-CPU machine
>with 128M RAM.
Thanks for the answer. That makes a lot of sense. When I get the time,
I'll verify that, at least for this fairly narrowly defined task of building
a kernel.
In order to minimize external and variable influences on the CPU load, I
performed all these tests in console mode, not connected to a network. That
may have been an unrealistic test, as that is not how I normally do kernel
builds. Having to juggle more work, like running X and KDE, could shift the
results (2.2.18 vs. 2.4.0-test12) around a bit. I'll repeat the tests in a
more normal work environment. If anything statistically significant is
found, I'll mention it. Thanks to all for your input.
Steven
On Monday 11 December 2000 11:46, Alan Cox wrote:
>
> It's an interesting demo that 2.4 has some performance problems, since 2.2
> is slower than 2.0, although nowadays not by much.
Results for SMP 2.2.18 vs SMP 2.4.0-test12 are in.
I repeated my earlier tests on a much faster dual P-III machine.
Executive summary: SMP 2.4.0 is 2% faster than SMP 2.2.18.
Although I made the following changes in the test procedure, these
tests for 2.2.18 and 2.4.0-test12 were held under identical conditions.
I used make -j3 bzImage for these tests on this SMP machine.
The test machine is a dual P-III (733 MHz), 256MB, IDE.
I ran X and KDE 2.0 during the tests to provide a greater, though
reproducible, load on the tested kernel.
The 2.2.18 kernel used was the final 2.2.18.
The 2.4.0-test12 is still 2.4.0-test12-pre7 since test12(final)
does not yet build with reiserfs. Team Reiser is working on this.
Here are the numbers I got. Again, three runs each were done.
Task: make -j3 bzImage for 2.4.0-test12-pre7 kernel tree.
Numbers are seconds to build.
  1     2     3    ave.
143   143   143   143     Running 2.2.18 SMP
140   140   140   140     Running 2.4.0-test12-pre7 SMP
The numbers are very repeatable, as you can see.
This time, 2.4.0-test12 wins by 2%. Recounts can be performed
by anyone, anytime.
Steven
On Tue, 12 Dec 2000, Steven Cole wrote:
>
> Executive summary: SMP 2.4.0 is 2% faster than SMP 2.2.18.
Note that kernel compilation really isn't a very relevant benchmark,
because percentage differences in this range can be basically just noise:
things like driver version differences that show up but impact different
machines in different ways (maybe one driver is better for certain
machines, and worse on others. Things like that).
The setup you describe is just too CPU-intensive, with little potential for
truly interesting differences.
> I ran X and KDE 2.0 during the tests to provide a greater, though
> reproducible, load on the tested kernel.
You might want to do the same in 32-64MB of RAM. And actually move your
mouse around a bit to keep KDE/X from just being paged out, at which point
it turns un-interesting again. I don't know how to do that repeatably,
though, but one thing I occasionally do is to read my email (which is not
very CPU-intensive, but it does keep the desktop active and also gives me
a feel for interactive behaviour).
At that point the numbers are probably going to show more difference (and
the variation is probably going to be much bigger).
Linus
On Tuesday 12 December 2000 11:40, Linus Torvalds wrote:
> On Tue, 12 Dec 2000, Steven Cole wrote:
> > Executive summary: SMP 2.4.0 is 2% faster than SMP 2.2.18.
>
> > I ran X and KDE 2.0 during the tests to provide a greater, though
> > reproducible, load on the tested kernel.
>
> You might want to do the same in 32-64MB of RAM. And actually move your
> mouse around a bit to keep KDE/X from just being paged out, at which point
> it turns un-interesting again. I don't know how to do that repeatably,
> though, but one thing I occasionally do is to read my email (which is not
> very CPU-intensive, but it does keep the desktop active and also gives me
> a feel for interactive behaviour).
>
Keeping the memory the same, I repeated the kernel builds
while moving the mouse in a similar way, and switching the
desktop 3 times, same desktops for each test. Yes, I know,
this doesn't test much more, since nothing was swapped out.
These results are even closer. The differences are so slight that
they are not statistically significant. Hmmm, maybe no news is good
news in this case.
Perhaps the one interesting thing from this test is the negative
result: no significant performance difference for this particular
CPU-intensive task on only two processors.
I'm sure it would be fun to try this test on a GS320 32-CPU Wildfire.
I believe a 24-CPU Sun E10000 built a 2.4.0-test7 kernel in about 20 seconds.
Fun, but maybe not too meaningful. Sigh.
Task: make -j3 bzImage for 2.4.0-test12-pre7 kernel tree.
Numbers are seconds to build.
New results (with fiddling with the desktop):
  1     2     3    ave.
143   142   142   142.3   Running 2.2.18 SMP
141   141   142   141.3   Running 2.4.0-test12-pre7 SMP
Steven
On Tue, 12 Dec 2000, Steven Cole wrote:
>
> Task: make -j3 bzImage for 2.4.0-test12-pre7 kernel tree.
Actually, do it with
make -j3 'MAKE=make -j3' bzImage
A single "-j3" won't do much. It will only build three directories at a
time, and you'll never see much load. But doing it recursively means that
you'll build three at a time all the way out to the leaf directories, and
you should see loads up to 20+, and much more memory pressure too.
Linus
On Tuesday 12 December 2000 13:38, Linus Torvalds wrote:
> On Tue, 12 Dec 2000, Steven Cole wrote:
> > Task: make -j3 bzImage for 2.4.0-test12-pre7 kernel tree.
>
> Actually, do it with
>
> make -j3 'MAKE=make -j3' bzImage
>
> A single "-j3" won't do much. It will only build three directories at a
> time, and you'll never see much load. But doing it recursively means that
> you'll build three at a time all the way out to the leaf directories, and
> you should see loads up to 20+, and much more memory pressure too.
>
> Linus
Ok, I repeated the tests with make -j3 'MAKE=make -j3' bzImage.
I ran xosview to monitor the load.
The load values for 2.2.18 seemed to stay higher longer than
for 2.4.0-test12. I recorded the peak load observed.
For comparison, with make -j3 bzImage, the peak load was much
lower, about 2.7.
Task: make -j3 'MAKE=make -j3' bzImage for 2.4.0-test12-pre7 kernel tree.
Numbers are seconds to build.
New results:
   1      2      3     ave.
 143    143    143    143     Running 2.2.18 SMP
  19.1   17.5   19.2   18.6   Max load observed with xosview
 142    141    141    141.3   Running 2.4.0-test12-pre7 SMP
  16.2   16.8   15.2   16.1   Max load observed with xosview
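For anyone who wants to capture the peak load without watching xosview,
a rough sketch (samples /proc/loadavg every couple of seconds during the
build):
	while sleep 2; do cat /proc/loadavg; done > loadlog &
	LOGGER=$!
	make -j3 'MAKE=make -j3' bzImage
	kill $LOGGER
	sort -rn loadlog | head -1   # highest 1-minute load sampled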
Steven
Alan Cox wrote:
> > How much of that is due to the fact that the 2.4.0 scheduler interrupts
> > processes more often than 2.2.x? Is the better interactivity worth the
> > slight drop in performance?
>
> What better interactivity ;)
Indeed!
On my dual Celeron workstation, 2.4 looks to me as if it is scheduling
"more". Thus when I move a window, the window takes on all intervening
positions. Under 2.2, the window sometimes jerks 10 pixels or so, but
it actually follows the mouse. Under 2.4, you can get the window to
lag the mouse by a significant amount.
Thus to me, 2.4 FEELS much less interactive. When I move windows they
don't follow the mouse in real time.
Roger.
--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* There are old pilots, and there are bold pilots.
* There are also old, bald pilots.
Rogier Wolff writes:
> Alan Cox wrote:
> > What better interactivity ;)
> Thus to me, 2.4 FEELS much less interactive. When I move windows they
> don't follow the mouse in real time.
Interesting observation: in a scrolling rxvt, kernel 2.0 is smoother than
2.2, which is smoother than 2.4. I hope this trend isn't going to
continue to 2.6. ;(
_____
|_____| ------------------------------------------------- ---+---+-
| | Russell King [email protected] --- ---
| | | | http://www.arm.linux.org.uk/personal/aboutme.html / / |
| +-+-+ --- -+-
/ | THE developer of ARM Linux |+| /|\
/ | | | --- |
+-+-+ ------------------------------------------------- /\\\ |
Russell King wrote:
>
> Rogier Wolff writes:
> > Alan Cox wrote:
> > > What better interactivity ;)
> > Thus to me, 2.4 FEELS much less interactive. When I move windows they
> > don't follow the mouse in real time.
>
> Interesting observation: in a scrolling rxvt, kernel 2.0 is smoother than
> 2.2, which is smoother than 2.4. I hope this trend isn't going to
> continue to 2.6. ;(
Could this be due to the shorter times calculated by the scheduler
recalculate code with the change that moved "nice" into the task_struct?
George