Talk about lies, damned lies, and statistics. I've seen comments about
disk throughput, and running hdparm -t is one of my normal tests for new
kernels. On this particular box (1000MHz Duron, 512MB) hdparm regularly
reported disk reads of >= 40MB/s, at least on the outer part of the
disk. Then I booted 2.6.5 and the reported speed dropped to typically
26-28MB/s.
I remembered a recent comment about block sizes, so I tested the read
speed shown for various partitions. The outer part of the disk has
several VFAT partitions; the ext3 partitions on the inside now report
around 31MB/s - better than 26, but it still sounds poor, and 2.6.1
reported a slightly faster speed (which might be the compiler: for
2.6.1 I used gcc-2.95.3, now I'm using gcc-3.3.3). All of the 2.6
kernels on this box have preempt enabled.
But, when all's said and done, these are only numbers. I found the
biggest tar on this box (463MiB) and timed:
 - cp from hda10 to hda9 (these are the innermost partitions)
 - sync
 - rm from hda9
 - sync again
Repeated three times, no other users, noting the real time.
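In script form, the procedure was roughly the following; timed_copy is
just an illustrative helper name, and the paths in the example are
hypothetical mount points for the two partitions:

```shell
#!/bin/sh
# One iteration of the test: copy the big tarball, flush to disk,
# remove the copy, flush again. Paths are arguments so the helper
# stays generic.
timed_copy() {
    src=$1; dest=$2
    cp "$src" "$dest"
    sync
    rm "$dest"
    sync
}

# Three timed runs between the two innermost partitions, e.g.:
#   for run in 1 2 3; do
#       time timed_copy /mnt/hda10/big.tar /mnt/hda9/big.tar
#   done
```

The "real" figure printed by time is the number noted.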
Under 2.4.25, between 41 and 45 seconds.
Under 2.6.1, between 42 and 50 seconds.
Under 2.6.5, between 38 and 40 seconds.
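Since cp both reads from and writes to the same disk, the best 2.6.5
run moves roughly twice the tarball's size; a back-of-the-envelope
figure (ignoring seeks and cache effects):

```shell
# 463 MiB read plus 463 MiB written in the best 2.6.5 run (~38 s);
# integer shell arithmetic is close enough for a rough MiB/s figure.
echo $(( 463 * 2 / 38 ))
```

i.e. around 24 MiB/s of combined disk traffic on the innermost
partitions.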
So, despite the numbers shown by hdparm looking worse, when only one
user is doing anything the performance has actually improved. I've no
idea which changes achieved this, but thanks to whoever was involved.
Ken
--
das eine Mal als Tragödie, das andere Mal als Farce
Ken,
AFAICS, hdparm -t or dbench 1 will reveal only slight variations
between 2.4 & 2.6 ... in some cases 2.4 will be better.
Running dbench 10 we get:
2.6
202 48.57
234 22.28
234 16.00
234 12.48
234 10.23
234 8.66
234 7.52
431 12.37
463 11.86
464 10.76
2.4.21
119 15.06
151 10.75
282 15.54
301 13.44
313 11.84
352 11.49
360 10.38
360 9.29
360 8.42
Here's what we might call a server-style workload. 2.6 is unbeatable
there thanks to its I/O schedulers (i.e. as-iosched, cfq and noop
rock'n'roll).
There are no firm conclusions at all, although at this stage of
development, IMHO, 2.4 seems more 'client friendly' and 2.6 more
server oriented in this area.
Regards,
FabF
FabF wrote:
> Here's what we might call a server-style workload. 2.6 is unbeatable
> there thanks to its I/O schedulers (i.e. as-iosched, cfq and noop
> rock'n'roll).
Well, which one were you using just now? AS I assume?
>
> There are no firm conclusions at all, although at this stage of
> development, IMHO, 2.4 seems more 'client friendly' and 2.6 more
> server oriented in this area.
IO scheduler wise, AS should be good for "clients" (i.e. desktops),
because it is on the desktop that AS's possible small throughput
regressions would not be a big problem even if they did arise.
Ken Moffat wrote:
>
> So, despite the numbers shown by hdparm looking worse, when only one
> user is doing anything the performance is actually improved. I've no
> idea which changes have achieved this, but thanks to whoever was
> involved.
I've done tests using dd to and from the raw block device under 2.4 and
2.6. Memory size (kernel boot param mem=) doesn't seem to affect
performance, so I assume that means that dd to and from the raw block
device is unbuffered. When I compare read and write speeds between 2.4
and 2.6, 2.6 is definitely slower. The last 2.6 kernel I tried this
with is 2.6.5.
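The kind of dd run described is, in sketch, something like the
following; raw_read_mb is an illustrative helper name, and /dev/hda is
an example device:

```shell
#!/bin/sh
# Read a given number of MiB from a block device (or any file)
# straight into /dev/null and print dd's transfer statistics,
# bypassing the filesystem entirely.
raw_read_mb() {
    dev=$1; mb=$2
    dd if="$dev" of=/dev/null bs=1024k count="$mb" 2>&1
}

# e.g. time raw_read_mb /dev/hda 256   # needs read access to the disk
```

Writing is the same with if= and of= swapped (and will destroy the
device's contents, so only on a scratch disk).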
On Tue, 27 Apr 2004, Timothy Miller wrote:
>
>
> Ken Moffat wrote:
>
> >
> > So, despite the numbers shown by hdparm looking worse, when only one
> > user is doing anything the performance is actually improved. I've no
> > idea which changes have achieved this, but thanks to whoever was
> > involved.
>
>
> I've done tests using dd to and from the raw block device under 2.4 and
> 2.6. Memory size (kernel boot param mem=) doesn't seem to affect
> performance, so I assume that means that dd to and from the raw block
> device is unbuffered. When I compare read and write speeds between 2.4
> and 2.6, 2.6 is definitely slower. The last 2.6 kernel I tried this
> with is 2.6.5.
>
Well, my original test used cp, sync, rm, sync. I've no statistics
from running 2.4 on this box to compare against.
Ken
--
das eine Mal als Tragödie, das andere Mal als Farce
Ken Moffat wrote:
> On Tue, 27 Apr 2004, Timothy Miller wrote:
>
>
>>
>>Ken Moffat wrote:
>>
>>
>>>So, despite the numbers shown by hdparm looking worse, when only one
>>>user is doing anything the performance is actually improved. I've no
>>>idea which changes have achieved this, but thanks to whoever was
>>>involved.
>>
>>
>>I've done tests using dd to and from the raw block device under 2.4 and
>>2.6. Memory size (kernel boot param mem=) doesn't seem to affect
>>performance, so I assume that means that dd to and from the raw block
>>device is unbuffered. When I compare read and write speeds between 2.4
>>and 2.6, 2.6 is definitely slower. The last 2.6 kernel I tried this
>>with is 2.6.5.
>>
>
>
> Well, my original test used cp, sync, rm, sync. I've no statistics
> from running 2.4 on this box to compare against.
>
Based on my experience, cp and anything else that uses the filesystem
gets buffered. I can tell this because, without sync, the throughput
varies with memory size. Furthermore, I wanted to know raw throughput,
so I used a block device so I could eliminate filesystem overhead.
Reading and writing the block device does not seem to be buffered
because the run time is not affected by memory (host RAM) size. That
is, unless dd does an implicit sync.
The numbers I get when using dd to and from one of my drives under a 2.4
kernel with the drive connected to the on-board IDE controller are
roughly the same as published benchmarks.