Hi!
I recently got an Asus TUSL2 motherboard, which has an extra Promise
IDE RAID controller with a PDC20265 chip. I've connected two IBM 60 GB
disks to it (one disk per channel). I am using kernel 2.4.17-pre8
(with CONFIG_BLK_DEV_PDC202XX=y and with/without CONFIG_PDC202XX_BURST=y),
which nicely detects the extra controller and both disks, hde and hdg. If
I test the writing and reading speed (hdparm -t, dd if=/dev/zero of=test
...) separately for each disk, I get the expected figures, like 36-37
MB/sec for reading, about 30 MB/sec for writing. If, however, I try to
write simultaneously to both disks, the performance drops drastically. The
rate for writing is then something like 3.5 MB/sec (!). I wonder if anyone
has seen anything like that or might have any ideas on how to solve the
problem.
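Roughly, the simultaneous test looked like this (only a sketch - the
mount points /disk1 and /disk2 and the transfer size are placeholders,
not the exact ones I used):

  # baseline: one disk at a time
  dd if=/dev/zero of=/disk1/test bs=1000000 count=500

  # both disks at once, one dd per disk
  dd if=/dev/zero of=/disk1/test bs=1000000 count=500 &
  dd if=/dev/zero of=/disk2/test bs=1000000 count=500 &
  wait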
Suspecting the hardware, I posted this message to
comp.os.linux.hardware first, but no one has seen such behaviour. I
have also tried different sets of IDE cables.
Best regards and TIA,
Jurij.
P.S. Please cc responses to me, because I'm not on the list.
Hi.
This is on an AMD K6-2 350 MHz, 128 MB RAM, on an ALi Aladdin V based Gigabyte board,
with 3x IBM DTLA 40 GB discs on a Promise TX2 Ultra 100 in a PCI slot.
Single transfer to one disk:
server1:~ # hdparm -tT /dev/ide/host2/bus0/target0/lun0/disc
/dev/ide/host2/bus0/target0/lun0/disc:
Timing buffer-cache reads: 128 MB in 2.12 seconds = 60.38 MB/sec
Timing buffered disk reads: 64 MB in 2.86 seconds = 22.38 MB/sec
Dual transfer to disks on separate channels (values per disk):
server1:~ # hdparm -tT /dev/ide/host2/bus0/target0/lun0/disc
/dev/ide/host2/bus0/target0/lun0/disc:
Timing buffer-cache reads: 128 MB in 4.13 seconds = 30.99 MB/sec
Timing buffered disk reads: 64 MB in 4.66 seconds = 13.73 MB/sec
Dual transfer to disks on the same channel (values per disc):
server1:~ # hdparm -tT /dev/ide/host2/bus0/target1/lun0/disc
/dev/ide/host2/bus0/target1/lun0/disc:
Timing buffer-cache reads: 128 MB in 4.44 seconds = 28.83 MB/sec
Timing buffered disk reads: 64 MB in 8.36 seconds = 7.66 MB/sec
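(The dual numbers come from two hdparm instances started at the same
time, one per disc - something like the lines below; the devfs path of
the second channel is a guess from memory, the host/bus numbering may
differ:

  hdparm -tT /dev/ide/host2/bus0/target0/lun0/disc &
  hdparm -tT /dev/ide/host2/bus1/target0/lun0/disc &
  wait
)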
Hey! This might be the cause of the slowdown I reported in another
RAID5 / ReiserFS thread!!
Is this a general IDE issue, or is some queueing code in the kernel
rather bad/slow at this task?
On Sat, 15 Dec 2001 18:35:29 +0100 (MET)
Jurij Smakov <[email protected]> wrote:
> I recently got an Asus TUSL2 motherboard, which has an extra Promise
> IDE RAID controller with a PDC20265 chip. [...] If, however, I try to
> write simultaneously to both disks, the performance drops drastically. The
> rate for writing is then something like 3.5 MB/sec (!).
> [...]
k33p h4ck1n6
René
--
René Rebe (Registered Linux user: #248718 <http://counter.li.org>)
eMail: [email protected]
[email protected]
Homepage: http://www.tfh-berlin.de/~s712059/index.html
Anyone sending unwanted advertising e-mail to this address will be
charged $25 for network traffic and computing time. By extracting my
address from this message or its header, you agree to these terms.
Hi. Thanks for the reply!
I'm sorry that I cannot do many other benchmarks, since the server is
in production ...
On Sat, 15 Dec 2001 13:15:57 -0500 (EST)
Mark Hahn <[email protected]> wrote:
> > Single transfer to one disk:
> > server1:~ # hdparm -tT /dev/ide/host2/bus0/target0/lun0/disc
>
> is any of this different if you use non-devfs?
Could that make a difference? Maybe for the open() call - but for
read()/write() on an already-open fd??
> > Timing buffered disk reads: 64 MB in 2.86 seconds = 22.38 MB/sec
> > Timing buffered disk reads: 64 MB in 4.66 seconds = 13.73 MB/sec
>
> that's somewhat surprising; I wonder if there's some extra
> synchronization in the pdc driver. otoh, I didn't notice
> this sort of thing at all with a recent raid box I built
> (6x100G, 2xpdc, 1 via, athlonxp/1600, pc2100).
Did you try ReiserFS on a software RAID device?
> can you replicate this with benchmarks more sane than hdparm
> (like build a raid0, and run bonnie or iozone on an ext2 on it?)
There is a RAID5 on top of 3 discs running ReiserFS (the performance is
bad ...)
But here are the results of a dd if=/bla bla/disc:
A single disk:
server1:~ # time dd if=/dev/ide/host2/bus0/target0/lun0/part1 of=/dev/zero bs=1000000 count=200
200+0 records in
200+0 records out
real 0m7.106s
user 0m0.030s
sys 0m3.640s
=>28.1 MB/s
And parallel read from two masters:
server1:~ # time dd if=/dev/ide/host2/bus0/target0/lun0/part1 of=/dev/zero bs=1000000 count=200
200+0 records in
200+0 records out
real 0m12.797s
user 0m0.010s
sys 0m4.210s
=>15.6 MB/s
And parallel read from master/slave:
server1:~ # time dd if=/dev/ide/host2/bus0/target0/lun0/part1 of=/dev/zero bs=1000000 count=200
200+0 records in
200+0 records out
real 0m18.427s
user 0m0.000s
sys 0m2.690s
=>10.85 MB/s
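(The MB/s figures are simply the 200 MB transferred divided by the
elapsed time, e.g. 200 * 1000000 bytes / 7.106 s = 28.1 MB/s for the
single-disc case.)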
> > Timing buffered disk reads: 64 MB in 8.36 seconds = 7.66 MB/sec
>
> that's not particularly surprising, since there's no concurrency
> among disks in an ide chain.
I do not understand - no concurrency? The two reads are fighting for the
same IDE channel the whole time, aren't they?
> > Is this an general IDE issue or is some queueing code in the kernel
> > rather bad/slow for this task???
>
> hdparm is a fairly horrible benchmark; running two at once is nonsense.
> and hdparm's use of block devices probably interacts with things like
> the blocksize (1k unless you build a 4k FS), and the fact that blkdev
> moved into the pagecache.
OK. Above are some raw-read numbers. Currently the discs are in a RAID5
setup with ReiserFS on top - running in production ...
The ReiserFS is really, really slow (reported in another thread).
We might add a 4th disc and reconfigure, leaving some gigs on each
disc for tests - but not this year ...
k33p h4ck1n6
René
--
René Rebe
Well, blame that on the folks who are not taking kernel code that will
allow you to solve this problem. Linus is the number one offender.
Regards,
Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project
On Sat, 15 Dec 2001, Rene Rebe wrote:
> This is on an AMD K6-2 350 MHz, 128 MB RAM, on an ALi Aladdin V based Gigabyte board,
> with 3x IBM DTLA 40 GB discs on a Promise TX2 Ultra 100 in a PCI slot.
> [...]
> Dual transfer to disks on the same channel (values per disc):
> Timing buffered disk reads: 64 MB in 8.36 seconds = 7.66 MB/sec
> [...]
> Is this a general IDE issue, or is some queueing code in the kernel
> rather bad/slow at this task?
> [...]
Andre Hedrick wrote:
> Well, blame that on the folks who are not taking kernel code that will
> allow you to solve this problem. Linus is the number one offender.
Linus is taking some patches and not others right now... so what? A
couple of my patches, isolated and clearly unrelated to bio and mochel's
driver work, made it in. Others got dropped.
I see several people (not just you, Andre) whining about the dropped
patches, when it seems clear to me that only a few things in specific
areas are getting applied right now. For you specifically, Andre, Jens'
patches have been slated for 2.5.x for a while, so it seems blindingly
obvious that he would not take your IDE patches at least until the bio
subsystem is finished and clean, since your IDE patches would clearly
depend on the bio changes.
I do not believe this is a personal condemnation of your patches, or
bcrl's, or anyone else's.
Patience is a virtue ;-) We have a long devel series in front of us
and we are only at the pre-patches to the FIRST 2.5.x release.
--
Jeff Garzik | Only so many songs can be sung
Building 1024 | with two lips, two lungs, and one tongue.
MandrakeSoft | - nomeansno
On Sat, 15 Dec 2001 10:52:28 -0800 (PST)
Andre Hedrick <[email protected]> wrote:
>
> Well, blame that on the folks who are not taking kernel code that will
> allow you to solve this problem. Linus is the number one offender.
Well, maybe under Marcelo this might change ... Since this is the second
mail (that I've read) where you point out that the kernel _might_ need some
updates - are there Hedrick patches online I can give a try?
(BTW: did you take a look at the IDE devfs registration, where the hostXYZ
numbering might be wrong [last week's post]?)
Thanks for your good work!
> Regards,
> Andre Hedrick
> CEO/President, LAD Storage Consulting Group
> Linux ATA Development
> Linux Disk Certification Project
>
> On Sat, 15 Dec 2001, Rene Rebe wrote:
[...]
k33p h4ck1n6
René
--
René Rebe
Jeff Garzik wrote:
>
> Andre Hedrick wrote:
> > Well, blame that on the folks who are not taking kernel code that will
> > allow you to solve this problem. Linus is the number one offender.
> [...]
> Patience is a virtue ;-) We have a long devel series in front of us
> and we are only at the pre-patches to the FIRST 2.5.x release.
Unfortunately, Andre's patch includes an important BUGFIX which must
go into 2.4 (CompactFlash hang/IRQ storm on a pcmcia-PCI adapter card).
Although I have been sending this bugfix for several months now, it is not
included yet!
I understand Andre would like to get the whole patch accepted, as I have no
evidence he ever submitted my isolated (roughly 3-line) bugfix.
Andre, what are your plans concerning 2.4?
I acknowledged the validity of the patch to you and Linus and agreed on
its need. As you can see, he has not got a clue, nor could you sell him
one. His attitude toward laptops is /dev/null; otherwise he would have
taken the patches a long time ago and had the infrastructure for proper
APM calls in place.
Regards,
On Sun, 16 Dec 2001, Gunther Mayer wrote:
> [...]
> Unfortunately, Andre's patch includes an important BUGFIX which must
> go into 2.4 (CompactFlash hang/IRQ storm on a pcmcia-PCI adapter card).
> [...]
> Andre, what are your plans concerning 2.4?
Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project
On Sat, Dec 15, 2001 at 02:18:35PM -0500, Jeff Garzik wrote:
> Andre Hedrick wrote:
> > Well, blame that on the folks who are not taking kernel code that will
> > allow you to solve this problem. Linus is the number one offender.
>
> Linus is taking some patches and not others right now... so what? A
> couple of my patches, isolated and clearly unrelated to bio and mochel's
> driver work, made it in. Others got dropped.
Patches that are unrelated to bio and obviously correct shouldn't be dropped
indefinitely - or, if they're being deferred, then $maintainer should say so.
> I do not believe this as a personal condemnation of your patches, or
> bcrl's, or anyone else's.
>
> Patience is a virtue ;-) We have a long devel series in front of us
> and we are only at the pre-patches to the FIRST 2.5.x release.
There is no reason not to have a 6-month devel cycle, and plenty of reasons
in favour of it. If people aren't going to bother reviewing patches in a
timely fashion, they should tell people when a good time to resend patches
is. Given the whole VM fiasco in 2.4 (which is still a mess and falling
apart under heavy loads), and which stems from a lot of random direction with
patches, I hope that some of the underlying problems will get fixed. But
it really doesn't look that way.
-ben
--
Fish.
On Sat, 15 Dec 2001 13:15:57 -0500 (EST)
Mark Hahn <[email protected]> wrote:
> can you replicate this with benchmarks more sane than hdparm
> (like build a raid0, and run bonnie or iozone on an ext2 on it?)
Here are the bonnie results on a RAID0 in my setup (kernel 2.4.17-pre8,
raidtools 0.9.0, PDC20265 controller on Asus TUSL2 motherboard, 2 IBM
60GB disks, one disk per channel). /etc/raidtab contains:
raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/hde
        raid-disk               0
        device                  /dev/hdg
        raid-disk               1
mkraid /dev/md0
mke2fs /dev/md0
mount /dev/md0 /backup
bonnie -d /backup -n 1 -s 1024k -u0
Version 1.02 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
1G 2338 16 2442 1 2135 1 7674 54 75023 28 236.7 0
^^^^
As one can see, the write results are RIDICULOUSLY low (2.4
MB/sec), while reading is OK. For comparison, here is the result of bonnie
on one of the disks used in the array:
mke2fs /dev/hde
mount /dev/hde /hde
bonnie -d /hde -n 1 -s 1024k -u0
Version 1.02 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
1G 11127 79 35617 21 16186 12 12969 93 39762 13 225.0 0
The results for the second disk look similar.
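To see whether the md layer or the controller is at fault, one could
compare parallel raw writes to the two discs against a write to the
array - only a sketch, and it OVERWRITES both discs (superblocks
included), so strictly for a scratch setup:

  # parallel raw writes, bypassing md (DESTROYS all data on hde/hdg!)
  dd if=/dev/zero of=/dev/hde bs=1000000 count=200 &
  dd if=/dev/zero of=/dev/hdg bs=1000000 count=200 &
  wait

  # the same amount of data through the RAID0 device
  dd if=/dev/zero of=/dev/md0 bs=1000000 count=400

If the parallel raw writes already collapse to a few MB/sec, the RAID0
layer is off the hook.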
Best regards,
Jurij.
> Here are the bonnie results on a RAID0 in my setup (kernel 2.4.17-pre8,
> raidtools 0.9.0, PDC20265 controller on Asus TUSL2 motherboard, 2 IBM
> 60GB disks, one disk per channel). /etc/raidtab contains:
Check that you have the right IDE driver compiled in. Also try the RH 2.4.9
or a 2.4.12-ac8 type kernel and see if it's about 10 times faster. For some
stuff it seems 2.4.10 destroyed performance, and Andrea has yet to fix that,
although lots of other stuff has recovered from the VM mess.
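(To check which driver claimed the controller and which DMA mode was
negotiated, something like this should do - assuming the drives really
show up as hde/hdg:

  dmesg | grep -i -e promise -e pdc202   # driver detection messages
  hdparm -i /dev/hde                     # active (U)DMA mode is starred
  hdparm -d /dev/hde                     # shows whether using_dma is on
)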