Hello,
I've recently upgraded kernels from 2.4.19-rc1 to rc2 and on to final,
and have seen a dramatic slowdown in raid0 performance since. The final
2.4.19 was patched with XFS and preempt and compiled with gcc-3.1.1;
I've tried switching compilers to 2.96 (Mandrake 8.2) and cutting out
the preempt patches. First, the md1 array consists of two partitions,
one from hda and one from hdc. hdparm looks fine for each drive by itself:
[root@waltsathlon walt]# hdparm -t /dev/hda
/dev/hda:
 Timing buffered disk reads: 64 MB in 1.66 seconds = 38.55 MB/sec
/dev/hdc:
 Timing buffered disk reads: 64 MB in 1.65 seconds = 38.79 MB/sec
However, using them combined as raid0, md1:
/dev/md1:
 Timing buffered disk reads: 64 MB in 1.44 seconds = 44.44 MB/sec
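For completeness, the numbers above come from hdparm -t against the raw
devices; something along these lines reproduces them and also shows the
raid0 members and chunk size (just a sketch, using the device names from
my setup):

   # buffered read timings for both drives and the array
   hdparm -t /dev/hda /dev/hdc /dev/md1
   # show the raid0 members and chunk size (reported as e.g. "32k chunks")
   cat /proc/mdstat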
In 2.4.18 and up through 2.4.19-rc1 I saw 66-70MB/sec from this array.
Starting in rc2 it dropped to the mid 40s. I've also run bonnie++ and
confirmed several significant drops, though not as bad as you would
expect given the hdparm results.
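I don't have the exact bonnie++ command line in front of me; it was
something along these lines, run against the raid0 /usr filesystem (the
directory, size, and user below are illustrative only):

   # sequential throughput plus create/delete/rewrite tests on md1
   bonnie++ -d /usr/tmp -s 1024 -u root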
Hardware:
Soyo Dragon+ motherboard w/ Athlon 1800+, 512MB RAM
Drives connected via the onboard Promise PDC20265 in ultra mode
2x40GB IBM Deskstar 60GXP (I know....)
Anything else you need, just ask. Please CC me in replies as I'm not a
member of the list. Thanks,
-Walt
Sorry, I should have said more about the raid arrays. The drives are
partitioned as follows:
hda1, hdc1 = 4GB
hda2, hdc2 = extended partition - remainder of drive
hda5, hdc5 = 2GB = raid1, md0 /boot
hda6, hdc6 = ~15GB = raid0, md1 /usr
hda7, hdc7 = ~15GB = raid0, md2 /home
hda8, hdc8 = 1.5GB = raid0, /
hda9, hdc9 = remainder = swap
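For reference, a raid0 array like md1 would be described by a raidtab
entry along these lines (the chunk size here is only an example, not
necessarily what my arrays were built with):

   # /etc/raidtab excerpt (sketch) - raid0 across hda6 and hdc6
   raiddev /dev/md1
       raid-level              0
       nr-raid-disks           2
       persistent-superblock   1
       chunk-size              32
       device                  /dev/hda6
       raid-disk               0
       device                  /dev/hdc6
       raid-disk               1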
I agree that you could imagine preempt lowering performance, but again,
I haven't seen evidence of that either. In fact, the 2.4.18 kernel I was
using before was also compiled with preempt and showed ~68MB/sec on md1
and md2.
As for changes I may have made to .config: nothing new. 2.4.19-rc1
compiled with XFS and preempt worked well too. I tried looking for
differences in the raid drivers, but there were no changes to the raid0
driver. ide-pdc202xx.c contained many changes, but I'm not a kernel
hacker and couldn't spot anything that might have caused this. It's odd
that the drop shows up even under hdparm. Interestingly, when testing
with bonnie++, the overall sequential output was similar to the
higher-performing older kernels; however, creates, deletes, and rewrites
were all down significantly.
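For anyone who wants to double-check, this is roughly how I compared the
two trees, with both source trees unpacked side by side:

   # compare the md and ide drivers between the two release candidates
   diff -urN linux-2.4.19-rc1/drivers/md  linux-2.4.19-rc2/drivers/md  | less
   diff -urN linux-2.4.19-rc1/drivers/ide linux-2.4.19-rc2/drivers/ide | less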
-Walt
Mark Hahn wrote:
>>Final 2.4.19 was patched with XFS and preempt and compiled using
>
>
> it's easy to imagine cases where preempt would produce lower performance,
> though I haven't seen any hard evidence of that.
>
>
>>cutting out preempt patches. First md1 array consists of two partitions
>>from hda & hdc. hdparm for both drives looks fine by themselves:
>
>
> are they the first two partitions in hda/c?
>
>
>>/dev/hda:
>> Timing buffered disk reads: 64 MB in 1.66 seconds = 38.55 MB/sec
>>/dev/hdc:
>> Timing buffered disk reads: 64 MB in 1.65 seconds = 38.79 MB/sec
>
>
> such a disk will normally degrade to around half that performance
> in the tail of the disk.
>
>
>>/dev/md1:
>> Timing buffered disk reads: 64 MB in 1.44 seconds = 44.44 MB/sec
>>
>>In 2.4.18 and up through 2.4.19-rc1 I saw 66-70MB/sec from this array.
>>Starting in rc2 it dropped to the mid 40's. I've also ran bonnie++ and
>
>
> nothing else changed?
>