Hello,
There is something wrong with IDE data throughput on at least both the VIA
KT133 and Promise PDC20265 controllers.
I have an Asus A7V-133 mobo with the VIA KT133A chipset and an onboard Promise
PDC20265 IDE controller. My CPU is an Athlon 1400 MHz and I have 512 MB of
PC133 SDRAM. I noticed that connecting two ATA100 HDDs to the same
channel makes everything much slower. So I ran some tests:
# uname -r
2.4.18pre1
1) PDC20265
PDC20265: IDE controller on PCI bus 00 dev 88
PCI: Found IRQ 11 for device 00:11.0
PDC20265: chipset revision 2
PDC20265: not 100% native mode: will probe irqs later
PDC20265: (U)DMA Burst Bit ENABLED Primary PCI Mode Secondary PCI Mode.
ide0: BM-DMA at 0x8000-0x8007, BIOS settings: hda:pio, hdb:pio
ide1: BM-DMA at 0x8008-0x800f, BIOS settings: hdc:pio, hdd:pio
hdc: ST380021A, ATA DISK drive
hdd: ST380021A, ATA DISK drive
hdc: 156301488 sectors (80026 MB) w/2048KiB Cache, CHS=155061/16/63, UDMA(100)
hdd: 156301488 sectors (80026 MB) w/2048KiB Cache, CHS=155061/16/63, UDMA(100)
# /usr/bin/time hdparm -t /dev/hdc
/dev/hdc:
Timing buffered disk reads: 64 MB in 1.63 seconds = 39.26 MB/sec
0.05user 0.26system 0:04.66elapsed 6%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
# /usr/bin/time hdparm -t /dev/hdd
/dev/hdd:
Timing buffered disk reads: 64 MB in 1.63 seconds = 39.26 MB/sec
0.03user 0.39system 0:04.67elapsed 8%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
OK, it seems that with one HDD there is no problem. Data transfers
are quite high (about 40 MB/sec) and CPU usage is low: 6% to 8% is,
AFAIK, quite a good value. But let's try this:
# /usr/bin/time hdparm -t /dev/hdc & /usr/bin/time hdparm -t /dev/hdd
[1] 152
/dev/hdc:
/dev/hdd:
Timing buffered disk reads: Timing buffered disk reads: 64 MB in 5.48 seconds = 11.68 MB/sec
0.01user 0.41system 0:08.52elapsed 4%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
64 MB in 5.55 seconds = 11.53 MB/sec
0.05user 0.30system 0:08.60elapsed 4%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
[1]+ Done /usr/bin/time hdparm -t /dev/hdc
Oooops?!!?! Two ATA100 HDDs, and each one can only achieve a read speed
of 11.5 MB/sec (11.68 + 11.53 = 23.2 MB/sec combined) - only about a quarter
of the ATA100 interface throughput!
2) vt82c686b
VP_IDE: IDE controller on PCI bus 00 dev 21
VP_IDE: chipset revision 6
VP_IDE: not 100% native mode: will probe irqs later
VP_IDE: VIA vt82c686b (rev 40) IDE UDMA100 controller on pci00:04.1
hdg: ST380021A, ATA DISK drive
hdh: ST380021A, ATA DISK drive
hdg: 156301488 sectors (80026 MB) w/2048KiB Cache, CHS=155061/16/63, UDMA(100)
hdh: 156301488 sectors (80026 MB) w/2048KiB Cache, CHS=155061/16/63, UDMA(100)
# /usr/bin/time hdparm -t /dev/hdg
/dev/hdg:
Timing buffered disk reads: 64 MB in 1.63 seconds = 39.26 MB/sec
0.05user 0.21system 0:04.67elapsed 5%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
# /usr/bin/time hdparm -t /dev/hdh
/dev/hdh:
Timing buffered disk reads: 64 MB in 1.63 seconds = 39.26 MB/sec
0.00user 0.35system 0:04.67elapsed 7%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
Nice :) 39.26 MB/sec - the same value as for the PDC :) OK, what about two
disks at the same time:
# /usr/bin/time hdparm -t /dev/hdg & /usr/bin/time hdparm -t /dev/hdh
[1] 185
/dev/hdg:
/dev/hdh:
Timing buffered disk reads: Timing buffered disk reads: 64 MB in 5.35 seconds = 11.96 MB/sec
0.01user 0.43system 0:08.40elapsed 5%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
64 MB in 5.45 seconds = 11.74 MB/sec
0.04user 0.27system 0:08.50elapsed 3%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (371major+12minor)pagefaults 0swaps
[1]+ Done /usr/bin/time hdparm -t /dev/hdg
And again! ATA100 running at ~24 MB/sec combined! Why is this so slow?! Any ideas?
Best regards,
Krzysztof Oledzki
This is an inherent quirk (SCSI folks would say brain damage) in IDE.
Only one drive on an IDE chain may be accessed at once and only one
request may go to that drive at a time. Therefore, the maximum you could
hope for in that test is half speed on each. Throw in the overhead of
continuously hopping between them and 12 MB/sec is no surprise.
That is why even cheapo Compaqs and Gateways have the hard drive and
CD-ROM on separate chains. It's also why IDE RAID cards have a separate
connector for each drive.
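One way to see this directly is to repeat the parallel hdparm test with the
second drive moved to its own channel - a sketch only, with device names
assumed (hdc/hdd sharing ide1, hde alone on ide2):
# master + slave on one channel: the two reads serialize
/usr/bin/time hdparm -t /dev/hdc & /usr/bin/time hdparm -t /dev/hdd
# one drive per channel: each should stay near its single-drive figure
/usr/bin/time hdparm -t /dev/hdc & /usr/bin/time hdparm -t /dev/hde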
-- Brian
On Tuesday 01 January 2002 05:34 pm, Krzysztof Oledzki wrote:
> Hello,
>
> There is something wrong with IDE data throughput on at least both the VIA
> KT133 and Promise PDC20265 controllers.
>
> I have an Asus A7V-133 mobo with the VIA KT133A chipset and an onboard Promise
> PDC20265 IDE controller. My CPU is an Athlon 1400 MHz and I have 512 MB of
> PC133 SDRAM. I noticed that connecting two ATA100 HDDs to the same
> channel makes everything much slower. So I ran some tests:
Brian,
Well, if hell freezes over and I die, the patches that make the driver
handle clean low-level IO threading will never be accepted. Because they
model the state diagrams of the physical layer of the hardware exactly in
the transport layer, they are totally orthogonal to the darwinism of Linux.
Design is a problem; it is not permitted in a darwin-evolution model.
It would allow you to access both drives on a channel and suffer at most a
10% IO loss on each, and you would gain smooth IO access to both drives.
Regards,
Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project
On Tue, 1 Jan 2002, Brian wrote:
> This is an inherent quirk (SCSI folks would say brain damage) in IDE.
>
> Only one drive on an IDE chain may be accessed at once and only one
> request may go to that drive at a time. Therefore, the maximum you could
> hope for in that test is half speed on each. Throw in the overhead of
> continuously hopping between them and 12MB is no surprise.
>
> That is why even cheapo Compaqs and Gateways have the hard drive and
> CD-ROM on separate chains. It's also why IDE RAID cards have a separate
> connector for each drive.
>
> -- Brian
>
> On Tuesday 01 January 2002 05:34 pm, Krzysztof Oledzki wrote:
> > Hello,
> >
> > There is something wrong with IDE data throughput on at least both the VIA
> > KT133 and Promise PDC20265 controllers.
> >
> > I have an Asus A7V-133 mobo with the VIA KT133A chipset and an onboard Promise
> > PDC20265 IDE controller. My CPU is an Athlon 1400 MHz and I have 512 MB of
> > PC133 SDRAM. I noticed that connecting two ATA100 HDDs to the same
> > channel makes everything much slower. So I ran some tests:
Followup to: <[email protected]>
By author: Andre Hedrick <[email protected]>
In newsgroup: linux.dev.kernel
>
> Well, if hell freezes over and I die, the patches that make the driver
> handle clean low-level IO threading will never be accepted. Because they
> model the state diagrams of the physical layer of the hardware exactly in
> the transport layer, they are totally orthogonal to the darwinism of Linux.
> Design is a problem; it is not permitted in a darwin-evolution model.
>
I was trying to figure out what certain people's issue with this was,
and the answer I got back was concern about buggy hardware (both host
side and target side) breaking the documented model. I am personally
in no position to evaluate the veracity of that claim; perhaps you
could comment on how to deal with broken hardware in your model.
-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>
On Tue, Jan 01, 2002 at 04:52:02PM -0800, H. Peter Anvin wrote:
> I was trying to figure out what certain people's issue with this was,
> and the answer I got back was concern about buggy hardware (both host
> side and target side) breaking the documented model. I am personally
> in no position to evaluate the veracity of that claim; perhaps you
> could comment on how to deal with broken hardware in your model.
And how can we tell if a previous implementation was buggy or if it was
actually hardware that was buggy?
-ben
Benjamin LaHaise wrote:
> On Tue, Jan 01, 2002 at 04:52:02PM -0800, H. Peter Anvin wrote:
>
>>I was trying to figure out what certain people's issue with this was,
>>and the answer I got back was concern about buggy hardware (both host
>>side and target side) breaking the documented model. I am personally
>>in no position to evaluate the veracity of that claim; perhaps you
>>could comment on how to deal with broken hardware in your model.
>>
>
> And how can we tell if a previous implementation was buggy or if it was
> actually hardware that was buggy?
>
There are plenty of known hardware bugs, this is probably a better base
for discussion...
-hpa
On Tue, Jan 01, 2002 at 05:24:50PM -0800, H. Peter Anvin wrote:
> There are plenty of known hardware bugs, this is probably a better base
> for discussion...
Well, then we should get these changes into the development tree as early as
possible to ensure that we catch all the problems...
-ben
On Tue, 1 Jan 2002, Benjamin LaHaise wrote:
> On Tue, Jan 01, 2002 at 05:24:50PM -0800, H. Peter Anvin wrote:
> > There are plenty of known hardware bugs, this is probably a better base
> > for discussion...
>
> Well, then we should get these changes into the development tree as early as
> possible to ensure that we catch all the problems...
>
> -ben
So before I explain the theory-applied model, how about giving me all the
concerns - and maybe Linus could post a list.
This will take some time, so I will not reply on the fly; that gets me in
trouble here on lkml.
Regards,
Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project
On Tue, 1 Jan 2002, Brian wrote:
> This is an inherent quirk (SCSI folks would say brain damage) in IDE.
>
> Only one drive on an IDE chain may be accessed at once and only one
> request may go to that drive at a time. Therefore, the maximum you could
> hope for in that test is half speed on each. Throw in the overhead of
> continuously hopping between them and 12MB is no surprise.
So?!? Then the ATA100 and ATA133 standards do not make any sense? It is not
possible to get more than 66 MB/sec with one drive, and it seems that it is
not possible to use more than ~30 MB/sec of the 100 or 133 MB/sec ATA100/133
bus speed with two HDDs. Oh :(((
Another question - why are ATA100/ATA66 HDDs so slow with UDMA33?
With a new IBM 60 GB IC35L060AVER07-0 I get much more than 33 MB/sec with
ATA100 and only 24 MB/sec with UDMA33 (Asus P2B with Intel BX). New 80GB Seagates
(Barracuda IV) have the same problem.
Best regards,
Krzysztof Oledzki
On Wed, Jan 02, 2002 at 06:21:25PM +0100, Krzysztof Oledzki wrote:
>
>
> On Tue, 1 Jan 2002, Brian wrote:
>
> > This is an inherent quirk (SCSI folks would say brain damage) in IDE.
> >
> > Only one drive on an IDE chain may be accessed at once and only one
> > request may go to that drive at a time. Therefore, the maximum you could
> > hope for in that test is half speed on each. Throw in the overhead of
> > continuously hopping between them and 12MB is no surprise.
>
> So?!? Then the ATA100 and ATA133 standards do not make any sense? It is not
> possible to get more than 66 MB/sec with one drive, and it seems that it is
> not possible to use more than ~30 MB/sec of the 100 or 133 MB/sec ATA100/133
> bus speed with two HDDs. Oh :(((
>
> Another question - why are ATA100/ATA66 HDDs so slow with UDMA33?
> With a new IBM 60 GB IC35L060AVER07-0 I get much more than 33 MB/sec with
> ATA100 and only 24 MB/sec with UDMA33 (Asus P2B with Intel BX). New 80GB Seagates
> (Barracuda IV) have the same problem.
Actually 24 MB/sec is quite a miracle with UDMA33. I'd expect values
around 16 MB/sec, because, as far as I know, unlike SCSI, IDE doesn't do
concurrent reads and transfers (except for readahead), effectively
halving the interface transfer speed.
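A rough back-of-the-envelope check (assuming a media rate of about 35 MB/sec,
which these drives manage at the outer zone): if platter reads and bus
transfers never overlap, each megabyte costs 1/35 s on the platter plus
1/33 s on a UDMA33 bus, giving 1/(1/35 + 1/33) = ~17 MB/sec - about the
16 MB/sec figure above.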
--
Vojtech Pavlik
SuSE Labs
Brian,
This was true in the past and with many older drivers. However, when and
if the new driver I have is adopted, it will make SCSI cry. So please
stop polluting the issue.
Let me be as objective as I can be.
I built a special Mylex 3-channel RAID 10 system using 6 15K drives at
Ultra160. Given that I was clever, I was able to push that system to read
and write at 170MB/sec. I was very impressed by this performance; however,
this was hardware RAID, 256MB of cache, and a 66/64 PCI bus. This was a
dual PIII w/ 2GB of ECC Buffered/Registered memory.
Now I have managed to use two hosts w/ 4 channels, no caching controller,
no hardware RAID, 4 7200RPM drives and software RAID 0. I was able to
push 109MB/sec writing and 167MB/sec reading.
Also, under a similar environment, I was able, using a single card, 4
drives, no hardware RAID, no caching controller, to reach 90MB/sec writing;
reading was about 78MB/sec.
Now let's adjust the cost of components and SCSI loses big.
Once there are 10K ATA drives on the market, and none exist that I know of
to date even in beta, then we can retest.
In the meantime here is another dose of reality.
http://www.tecchannel.de/hardware/817/index.html
Regards,
Andre Hedrick
Linux ATA Development
On Wed, 2 Jan 2002, Krzysztof Oledzki wrote:
>
>
> On Tue, 1 Jan 2002, Brian wrote:
>
> > This is an inherent quirk (SCSI folks would say brain damage) in IDE.
> >
> > Only one drive on an IDE chain may be accessed at once and only one
> > request may go to that drive at a time. Therefore, the maximum you could
> > hope for in that test is half speed on each. Throw in the overhead of
> > continuously hopping between them and 12MB is no surprise.
>
> So?!? This ATA100 and ATA133 standards do not make any sens? It is not
> possible to have more than 66 MB/sec with on drive and is seems that it is
> not possible to use more than ~30MB/sek of 100 or 133 MB/sec ATA100/133
> bus speed with two HDDs. Oh :(((
>
> Another question - why ATA100/ATA66 HDDs are so slow with UDMA33?
> With new IBM 60 GB IC35L060AVER07-0 I have much more than 33 MB/sec with
> ATA100 and only 24 MB/sec with UDMA33 (Asus P2B with IntelBX). New 80GB Seagates
> (Baracuda IV) have the same problem.
>
> Best regards,
>
> Krzysztof Oledzki
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
On Wednesday 02 January 2002 02:31 pm, Andre Hedrick wrote:
> Brian,
>
> This was true in the past and with many older drivers. However, when and
> if the new driver I have is adopted, it will make SCSI cry. So please
> stop polluting the issue.
Both the master and the slave may have requests in progress at once now?
This is the first time I have heard that issue refuted. In fact, we just
bought an 8-drive 3ware 7800, with 8 channels and 8 cables, that seemed to
further confirm that issue.
> Now I have managed to use two hosts w/ 4 channels, no caching controller,
> no hardware RAID, 4 7200RPM drives and software RAID 0. I was able to
> push 109MB/sec writing and 167MB/sec reading.
So each drive was a master on a chain to itself? I am not denying the
performance of this setup. Also, was this on the above hardware? (The read
speed would exceed a 33 MHz/32-bit PCI bus.)
> Also, under a similar environment, I was able, using a single card, 4
> drives, no hardware RAID, no caching controller, to reach 90MB/sec writing;
> reading was about 78MB/sec.
4 drives on two chains (master & slave on each) is certainly more
interesting. The write speed is impressive, but what cut the read
performance in half?
> Now let's adjust the cost of components and SCSI loses big.
Indeed. That 720GB file server totaled ~$3000.
-- Brian
On Wed, 2 Jan 2002, Andre Hedrick wrote:
>
> Brian,
>
> This was true in the past and with many older drivers. However, when and
> if the new driver I have is adopted, it will make SCSI cry. So please
> stop polluting the issue.
>
> Let me be as objective as I can be.
>
> I built a special Mylex 3-channel RAID 10 system using 6 15K drives at
> Ultra160. Given that I was clever, I was able to push that system to read
> and write at 170MB/sec. I was very impressed by this performance; however,
> this was hardware RAID, 256MB of cache, and a 66/64 PCI bus. This was a
> dual PIII w/ 2GB of ECC Buffered/Registered memory.
>
> Now I have managed to use two hosts w/ 4 channels, no caching controller,
> no hardware RAID, 4 7200RPM drives and software RAID 0. I was able to
> push 109MB/sec writing and 167MB/sec reading.
>
> Also, under a similar environment, I was able, using a single card, 4
> drives, no hardware RAID, no caching controller, to reach 90MB/sec writing;
> reading was about 78MB/sec.
>
> Now let's adjust the cost of components and SCSI loses big.
> Once there are 10K ATA drives on the market, and none exist that I know of
> to date even in beta, then we can retest.
And since here in the real world, where the set of all people not
including you lives, seek time dominates storage performance, so 15000 RPM
disks are going to chew up and spit out 7200 RPM disks.
It is very silly to compare a hardware RAID 0+1 to a software RAID 0,
since RAID 0+1 has to push twice as many bytes as RAID 0 to achieve the
same effect, and your software RAID 0 has a much more powerful CPU than
the i960 on your Mylex RAID controller.
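(Concretely: writing 100 MB to a RAID 0 array moves 100 MB across the bus,
while the same write to a RAID 0+1 array moves 200 MB, since every stripe
goes to both mirror halves.)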
-jwb
On Wed, 2 Jan 2002, Brian wrote:
>> Also under a similar environment, I was able to, using a single card, 4
>> drives, not hardware-raid, no caching controller, reach 90MB/sec writing
>> and reading was about 78MB/sec.
>
>4 drives on two chains (master & slave on each) is certainly more
>interesting. The write speed is impressive, but what cut the read
>performance in half?
I'd take those numbers with a very large grain of salt. The fastest drives
in existence have a ZBR rate of only about half those numbers (and they are all
SCSI, btw.) Thus, your math is wrong or there is some serious voodoo going down.
(I'll have someone check for chicken blood.)
On price alone, SCSI has always lost. However, IDE has always been inferior.
No disconnect/reconnect. No tagged command queuing. No linked commands. Very
small addressable space. Etc. After a decade, IDE is now beginning to add
all those things SCSI has had for years. (They started using the SCSI command
protocol several years ago -- "ATAPI")
IDE is just fine for toys. It's a serious pain in the ass for any serious
work. It takes expensive hardware RAID cards to make IDE tolerable. (And
I'm not talking about the $30 PoS HPT crap.)
--Ricky
PS: I once turned down a 360MHz Ultra10 in favor of a 167MHz Ultra1 because
of the absolutely shitty IDE performance. The U1 was actually faster
at compiling software. (Solaris 2.6, btw)
On Wed, 2 Jan 2002, Ricky Beam wrote:
...
> IDE is just fine for toys. It's a serious pain in the ass for any serious
> work.
my goodness; it's been so long since l-k saw this traditional sport!
nothing much has changed in the interim: SCSI still costs 2-3x as much,
and still offers the same, ever-more-niche set of advantages
(decent hotswap, somewhat higher reliability, moderately higher performance,
easier expansion to more disks and/or other devices.)
> It takes expensive hardware RAID cards to make IDE tolerable. (and
> I'm not talking about the 30$ PoS HPT crap.)
besides having missed the last 2-3 generations of ATA (which include
things like diskconnect), you have clearly not noticed that entry-level
hardware with PoS UDMA100 controllers can sustain more bandwidth than
you can hope to consume (120 MB/s is pretty easy, even on 32x33 PCI!)
> PS: I once turned down a 360MHz Ultra10 in favor of a 167MHz Ultra1 because
> of the absolutely shitty IDE performance. The U1 was actually faster
> at compiling software. (Solaris 2.6, btw)
yeah, if Sun can't make IDE scream, then no one can eh?
On Wed, 2 Jan 2002, Mark Hahn wrote:
>my goodness; it's been so long since l-k saw this traditional sport!
>nothing much has changed in the interim: SCSI still costs 2-3x as much,
>and still offers the same, ever-more-niche set of advantages
>(decent hotswap, somewhat higher reliability, moderately higher performance,
>easier expansion to more disks and/or other devices.)
If it's so much of a niche (and by extension desired by so few), why has
IDE become more and more like SCSI over the past decade? IDE is just
beginning (over the last 2-3 years) to acquire the features SCSI has had
for over a decade. Give it another decade and IDE will simply be a SCSI
physical layer.
So to summarize a decade-old argument:
(IDE Camp) SCSI sucks because it's too damned expensive.
(SCSI Camp) IDE sucks because it isn't SCSI. [followed by a long list of
features present in SCSI but not IDE.]
You cannot beat IDE's price/performance with a stick. However, anyone
who cares about system performance (and lifespan) will opt for the expense
of SCSI.
>besides having missed the last 2-3 generations of ATA (which include
>things like diskconnect), you have clearly not noticed that entry-level
And who has diskconnect implemented? How many devices support it?
How many years before most of the hideous data destroying bugs and
incompatibilities are rooted out?
>hardware with PoS UDMA100 controllers can sustain more bandwidth than
>you can hope to consume (120 MB/s is pretty easy, even on 32x33 PCI!)
...with only two devices per channel and a rather heavy penalty for more
than one. SCSI is only significantly penalized when approaching bus
saturation.
And looking at the data rates for the Maxtor 160GB drive (in fact the
entire D540X line)... 43.4MB/s to/from media (i.e. cache) with sustained
rates of 35.9/17.8 OD/ID. Maxtor are the only ones with U133 drives.
(And the Maxtor SCSI drives kick that thing's ass... internal rate of
350-622Mb/s for a sustained throughput of 33-55MB/s. Expensive but
much much faster.)
>> PS: I once turned down a 360MHz Ultra10 in favor of a 167MHz Ultra1 because
>> of the absolutely shitty IDE performance. The U1 was actually faster
>> at compiling software. (Solaris 2.6, btw)
>
>yeah, if Sun can't make IDE scream, then no one can eh?
Linux wasn't any freakin' better at it. (Sun's IDE still seriously sucks.)
--Ricky
This has always been a really FUN argument
IDE vs. SCSI
BTW, why not add FC-AL or Serial-ATA into the mix, too?
I just wanted to add one thing. You guys are all right so
far - IDE has distinct advantages and so does SCSI - but you're
missing something:
SCSI is meant for high-performance, high-reliability systems.
It's not that the SCSI protocol is meant for this, but the
drives are. The drive quality for SCSI drives is MUCH higher.
Examples:
IBM's flagship 120GXP IDE drive is rated at a 1 in 10^13 error rate,
with 40K start/stop cycles and a three-year warranty.
IBM's 36Z15 SCSI drive is rated at 1 in 10^16 with 50K cycles
and a FIVE year warranty.
Seagate and Maxtor have similar differences.
If you're running a production system UNDER LOAD, 24x7, then
you should be using SCSI. There's not really any room for
argument here, is there? :)
Dana Lacoste
Ottawa, Canada
(Using SCSI on desktop [silly] and in product [woo-hoo!] :)
> -----Original Message-----
> From: Ricky Beam [mailto:[email protected]]
> Sent: January 3, 2002 00:58
> To: Mark Hahn
> Cc: Linux Kernel Mail List
> Subject: Re: Two hdds on one channel - why so slow?
>
>
> On Wed, 2 Jan 2002, Mark Hahn wrote:
> >my goodness; it's been so long since l-k saw this traditional sport!
> >nothing much has changed in the interim: SCSI still costs 2-3x as much,
> >and still offers the same, ever-more-niche set of advantages
> >(decent hotswap, somewhat higher reliability, moderately higher performance,
> >easier expansion to more disks and/or other devices.)
>
> If it's so much of a niche (and by extension desired by so few), why has
> IDE become more and more like SCSI over the past decade? IDE is just
> beginning (over the last 2-3 years) to acquire the features SCSI has had
> for over a decade. Give it another decade and IDE will simply be a SCSI
> physical layer.
>
> So to summarize a decade-old argument:
> (IDE Camp) SCSI sucks because it's too damned expensive.
>
> (SCSI Camp) IDE sucks because it isn't SCSI. [followed by a long list of
> features present in SCSI but not IDE.]
>
> You cannot beat IDE's price/performance with a stick. However, anyone
> who cares about system performance (and lifespan) will opt for the expense
> of SCSI.
>
> >besides having missed the last 2-3 generations of ATA (which include
> >things like diskconnect), you have clearly not noticed that entry-level
>
> And who has diskconnect implemented? How many devices support it?
> How many years before most of the hideous data destroying bugs and
> incompatibilities are rooted out?
>
> >hardware with PoS UDMA100 controllers can sustain more bandwidth than
> >you can hope to consume (120 MB/s is pretty easy, even on 32x33 PCI!)
>
> ...with only two devices per channel and a rather heavy penalty for more
> than one. SCSI is only significantly penalized when approaching bus
> saturation.
>
> And looking at the data rates for the Maxtor 160GB drive (in fact the
> entire D540X line)... 43.4MB/s to/from media (i.e. cache) with sustained
> rates of 35.9/17.8 OD/ID. Maxtor are the only ones with U133 drives.
> (And the Maxtor SCSI drives kick that thing's ass... internal rate of
> 350-622Mb/s for a sustained throughput of 33-55MB/s. Expensive but
> much much faster.)
>
> >> PS: I once turned down a 360MHz Ultra10 in favor of a 167MHz Ultra1 because
> >> of the absolutely shitty IDE performance. The U1 was actually faster
> >> at compiling software. (Solaris 2.6, btw)
> >
> >yeah, if Sun can't make IDE scream, then no one can eh?
>
> Linux wasn't any freakin' better at it. (Sun's IDE still seriously sucks.)
>
> --Ricky
> If you're running a production system UNDER LOAD for 24x7 then
> you should be using SCSI. there's not really any room for
> argument here is there? :)
no, since your assertion is strictly religious. a sane person designs
a system based on nice things like MTBF and RAID and price/performance.
On Wed, Jan 02, 2002 at 08:52:31PM -0500, Mark Hahn wrote:
> On Wed, 2 Jan 2002, Ricky Beam wrote:
> > It takes expensive hardware RAID cards to make IDE tolerable. (and
> > I'm not talking about the 30$ PoS HPT crap.)
> besides having missed the last 2-3 generations of ATA (which include
> things like diskconnect), you have clearly not noticed that entry-level
> hardware with PoS UDMA100 controllers can sustain more bandwidth than
> you can hope to consume (120 MB/s is pretty easy, even on 32x33 PCI!)
It's not always bandwidth (raw IO) that is the problem. We've got
a couple of clusters where a 3ware 64xx w/4 IBM GXPs in RAID 0 cannot
keep up with the "load" of MySQL doing lots of ops on lots of files.
Yeah, it's not just how much load, but what kind of load. Streaming
a 3 gig file into memory, then dumping a 3 gig file to disk is a lot
different than opening 3000 1 meg files, twitching a bit and then
closing them.
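Something like the difference between these two (paths and sizes made up for
illustration):
# streaming: one big sequential transfer
/usr/bin/time dd if=/data/bigfile of=/dev/null bs=1M count=3072
# metadata-heavy: thousands of small opens/reads/closes, seek-bound
/usr/bin/time sh -c 'for f in /data/small/*; do cat "$f" > /dev/null; done'
The second pattern hardly touches the channel bandwidth at all.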
I like our 3ware controllers; they've allowed us to migrate off a
whole bunch of Sun hardware and saved us a whole bunch of money,
but on our loaded machines we've lost a lot of sleep (of course
some of that seems to be due to memory corruption issues as well.)
> > PS: I once turned down a 360MHz Ultra10 in favor of a 167MHz Ultra1 because
> > of the absolutely shitty IDE performance. The U1 was actually faster
> > at compiling software. (Solaris 2.6, btw)
> yeah, if Sun can't make IDE scream, then no one can eh?
If SCSI had the economy of scale that IDE enjoys, it would be a lot
cheaper than it is now. Not as cheap as IDE currently is, but still
a lot cheaper.
ATA/IDE is trying to pick and choose the best parts of SCSI w/out
picking up the costs--which is an admirable goal. The question is
how close can they get w/out incurring the costs?
--
Share and Enjoy.
Em Thu, Jan 03, 2002 at 06:54:24PM -0800, Petro escreveu:
> ATA/IDE is trying to pick and choose the best parts of SCSI w/out
> picking up the costs--which is an admirable goal. The question is
> how close can they get w/out incurring the costs?
Well, as it gets closer and closer to SCSI - slowly - maybe all the way? Scale?
- Arnaldo
Hi everybody,
sorry for the trivial question, but maybe you can give me some pointers.
I'm setting up Linux on an ASUS A7V266-E board, an Athlon XP 1800+ machine,
and have the following problem:
My new IBM 40GB hard drive on the ide0 controller (alone, as master) always
gets set at boot to UDMA2 mode, not UDMA5.
The second, identical drive on the onboard Promise controller gets set to
UDMA5 and runs much faster.
I looked in the BIOS setup, and the BIOS sets the first ide0 drive to UDMA5,
which at least says that the cable is the correct one, and that it is the
Linux boot which changes the setting to UDMA2.
Here are the related pieces of dmesg. As you see, I use the RH rawhide 2.4.16
kernel, which is something like 2.4.17-pre8, I think:
# dmesg
Linux version 2.4.16-0.13 ([email protected]) (gcc
version 2.96 20000731 (Red Hat Linux 7.1 2.96-98)) #1 Dec 14 05:30:28
EST 2001
...............................
Fri Local APIC disabled by BIOS -- reenabling.
Found and enabled local APIC!
Kernel command line: auto BOOT_IMAGE=linux-16 ro root=301
BOOT_FILE=/boot/vmlinuz-2.4.16-0.13 hdc=ide-scsi
ide_setup: hdc=ide-scsi
Initializing CPU#0
Detected 1544.511 MHz processor.
Console: colour VGA+ 80x25
Calibrating delay loop... 3080.19 BogoMIPS
Memory: 1544904k/1572784k available (1560k kernel code, 27492k reserved,
316k data, 248k init, 655280k highmem)
.............
PCI: PCI BIOS revision 2.10 entry at 0xf0df0, last bus=1
PCI: Using configuration type 1
PCI: Probing PCI hardware
Unknown bridge resource 0: assuming transparent
PCI: Using IRQ router VIA [1106/3074] at 00:11.0
PCI: Found IRQ 11 for device 00:11.1
PCI: Sharing IRQ 11 with 01:00.0
isapnp: Scanning for PnP cards...
isapnp: No Plug & Play device found
Linux NET4.0 for Linux 2.4
Based upon Swansea University Computer Society NET3.039
Initializing RT netlink socket
apm: BIOS version 1.2 Flags 0x03 (Driver version 1.15)
Starting kswapd
allocated 64 pages and 64 bhs reserved for the highmem bounces
VFS: Diskquotas version dquot_6.5.0 initialized
pty: 2048 Unix98 ptys configured
Serial driver version 5.05c (2001-07-08) with MANY_PORTS MULTIPORT
SHARE_IRQ SERIAL_PCI ISAPNP enabled
ttyS00 at 0x03f8 (irq = 4) is a 16550A
ttyS01 at 0x02f8 (irq = 3) is a 16550A
Real Time Clock Driver v1.10e
block: 128 slots per queue, batch=32
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
Uniform Multi-Platform E-IDE driver Revision: 6.31
ide: Assuming 33MHz system bus speed for PIO modes; override with
idebus=xx
PDC20265: IDE controller on PCI bus 00 dev 30
PCI: Found IRQ 9 for device 00:06.0
PCI: Sharing IRQ 9 with 00:11.2
PCI: Sharing IRQ 9 with 00:11.3
PCI: Sharing IRQ 9 with 00:11.4
PDC20265: chipset revision 2
PDC20265: not 100% native mode: will probe irqs later
PDC20265: (U)DMA Burst Bit ENABLED Primary PCI Mode Secondary PCI Mode.
ide2: BM-DMA at 0xb000-0xb007, BIOS settings: hde:pio, hdf:DMA
ide3: BM-DMA at 0xb008-0xb00f, BIOS settings: hdg:DMA, hdh:pio
VP_IDE: IDE controller on PCI bus 00 dev 89
PCI: Found IRQ 11 for device 00:11.1
PCI: Sharing IRQ 11 with 01:00.0
VP_IDE: chipset revision 6
VP_IDE: not 100% native mode: will probe irqs later
VP_IDE: VIA vt8233 (rev 00) IDE UDMA100 controller on pci00:11.1
ide0: BM-DMA at 0xa400-0xa407, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0xa408-0xa40f, BIOS settings: hdc:DMA, hdd:DMA
hda: IC35L040AVER07-0, ATA DISK drive
hdc: PLEXTOR CD-R PX-W2410A, ATAPI CD/DVD-ROM drive
hdd: ASUS CD-S520/A, ATAPI CD/DVD-ROM drive
hdg: IC35L040AVER07-0, ATA DISK drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
ide3 at 0xb800-0xb807,0xb402 on irq 9
hda: 80418240 sectors (41174 MB) w/1916KiB Cache, CHS=5005/255/63,
UDMA(33) <---- problem
hdg: 80418240 sectors (41174 MB) w/1916KiB Cache, CHS=79780/16/63,
UDMA(100)
ide-floppy driver 0.97.sv
.....................................................
Any clues? Also, could somebody explain to me what exactly the device
on IRQ 11 (00:11.1) is?
Thank you very much, Dmitri
On Thu, Jan 03, 2002 at 09:29:30PM -0700, Dmitri Pogosyan wrote:
> Hi everybody,
> sorry for the trivial question, but maybe you can give me some pointers.
>
> I'm setting up Linux on ASUS A7V266-E board, Athlon XP 1800+ machine
> and
> have the following problem:
>
> My new IBM 40GB hard drive on ide0 (alone, master) controller is
> always get set at boot
> to UDMA2 mode, not UDMA5.
> The second identical drive on onboard promise controller is getting set
> to UDMA5
> and runs much faster.
>
> I looked in BIOS setup, and BIOS sets the first ide0 drive to UDMA5,
> which at least says that
> cable is the correct one, and that it is linux boot which changes the
> setting to udma2.
>
> Here are the related pieces of dmesg. As you see I use RH rawhide 2.4.16
> kernel, which is
> something like 2.4.17-pre8, I think
Some RH kernels (may include yours) deliberately disable UDMA3, 4 and 5
on any VIA IDE controller. I don't know why. Unpatch your kernel and
it'll likely work.
>
> # dmesg
> Linux version 2.4.16-0.13 ([email protected]) (gcc
> version 2.96 20000731 (Red Hat Linux 7.1 2.96-98)) #1 Dec 14 05:30:28
> EST 2001
> ...............................
> Fri Local APIC disabled by BIOS -- reenabling.
> Found and enabled local APIC!
> Kernel command line: auto BOOT_IMAGE=linux-16 ro root=301
> BOOT_FILE=/boot/vmlinuz-2.4.16-0.13 hdc=ide-scsi
> ide_setup: hdc=ide-scsi
> Initializing CPU#0
> Detected 1544.511 MHz processor.
> Console: colour VGA+ 80x25
> Calibrating delay loop... 3080.19 BogoMIPS
> Memory: 1544904k/1572784k available (1560k kernel code, 27492k reserved,
> 316k data, 248k init, 655280k highmem)
> .............
>
> PCI: PCI BIOS revision 2.10 entry at 0xf0df0, last bus=1
> PCI: Using configuration type 1
> PCI: Probing PCI hardware
> Unknown bridge resource 0: assuming transparent
> PCI: Using IRQ router VIA [1106/3074] at 00:11.0
> PCI: Found IRQ 11 for device 00:11.1
> PCI: Sharing IRQ 11 with 01:00.0
> isapnp: Scanning for PnP cards...
> isapnp: No Plug & Play device found
> Linux NET4.0 for Linux 2.4
> Based upon Swansea University Computer Society NET3.039
> Initializing RT netlink socket
> apm: BIOS version 1.2 Flags 0x03 (Driver version 1.15)
> Starting kswapd
> allocated 64 pages and 64 bhs reserved for the highmem bounces
> VFS: Diskquotas version dquot_6.5.0 initialized
> pty: 2048 Unix98 ptys configured
> Serial driver version 5.05c (2001-07-08) with MANY_PORTS MULTIPORT
> SHARE_IRQ SERIAL_PCI ISAPNP enabled
> ttyS00 at 0x03f8 (irq = 4) is a 16550A
> ttyS01 at 0x02f8 (irq = 3) is a 16550A
> Real Time Clock Driver v1.10e
> block: 128 slots per queue, batch=32
> RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
> Uniform Multi-Platform E-IDE driver Revision: 6.31
> ide: Assuming 33MHz system bus speed for PIO modes; override with
> idebus=xx
> PDC20265: IDE controller on PCI bus 00 dev 30
> PCI: Found IRQ 9 for device 00:06.0
> PCI: Sharing IRQ 9 with 00:11.2
> PCI: Sharing IRQ 9 with 00:11.3
> PCI: Sharing IRQ 9 with 00:11.4
> PDC20265: chipset revision 2
> PDC20265: not 100% native mode: will probe irqs later
> PDC20265: (U)DMA Burst Bit ENABLED Primary PCI Mode Secondary PCI Mode.
> ide2: BM-DMA at 0xb000-0xb007, BIOS settings: hde:pio, hdf:DMA
> ide3: BM-DMA at 0xb008-0xb00f, BIOS settings: hdg:DMA, hdh:pio
> VP_IDE: IDE controller on PCI bus 00 dev 89
> PCI: Found IRQ 11 for device 00:11.1
> PCI: Sharing IRQ 11 with 01:00.0
> VP_IDE: chipset revision 6
> VP_IDE: not 100% native mode: will probe irqs later
> VP_IDE: VIA vt8233 (rev 00) IDE UDMA100 controller on pci00:11.1
> ide0: BM-DMA at 0xa400-0xa407, BIOS settings: hda:DMA, hdb:pio
> ide1: BM-DMA at 0xa408-0xa40f, BIOS settings: hdc:DMA, hdd:DMA
> hda: IC35L040AVER07-0, ATA DISK drive
> hdc: PLEXTOR CD-R PX-W2410A, ATAPI CD/DVD-ROM drive
> hdd: ASUS CD-S520/A, ATAPI CD/DVD-ROM drive
> hdg: IC35L040AVER07-0, ATA DISK drive
> ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
> ide1 at 0x170-0x177,0x376 on irq 15
> ide3 at 0xb800-0xb807,0xb402 on irq 9
> hda: 80418240 sectors (41174 MB) w/1916KiB Cache, CHS=5005/255/63,
> UDMA(33) <---- problem
> hdg: 80418240 sectors (41174 MB) w/1916KiB Cache, CHS=79780/16/63,
> UDMA(100)
> ide-floppy driver 0.97.sv
> .....................................................
>
> Any clues? Also, could somebody explain to me what exactly the device
> on IRQ 11 (00:11.1) is?
>
> Thank you very much, Dmitri
--
Vojtech Pavlik
SuSE Labs
On Wed, 2 Jan 2002, Mark Hahn wrote:
>
> yes, I know what he said. it's true that there's no concurrency,
> but he's wrong about expecting half (due to readahead/writebehind),
> and there's no real overhead in switching.
So why do my disks run at ~12MB/sec per device (~24 per channel) when
both HDDs are accessed at the same time?
> in short, master-slave concurrency is not common (but definitely
> supported by the standard and some disks), but this has less
> effect than you'd think. especially since most people just
> treat ide as a single-drive ptp link. which works fine, since
> ide channels cost $15 or less, and ide disks are *so* much cheaper
> than scsi.
Yes. IDE as a PtP device works nicely. But this means that in most cases
it is possible to connect only half of the expected devices. What a pity :(
Best regards,
Krzysztof Oledzki
> Some RH kernels (may include yours) deliberately disable UDMA3, 4 and 5
> on any VIA IDE controller. I don't know why. Unpatch your kernel and
> it'll likely work.
RH 2.4.2-x. That was before we had the official VIA solution to the chipset
bug. It was better to be safe than sorry for an end user distro.
Alan
On Fri, Jan 04, 2002 at 10:35:32AM +0000, Alan Cox wrote:
> > Some RH kernels (may include yours) deliberately disable UDMA3, 4 and 5
> > on any VIA IDE controller. I don't know why. Unpatch your kernel and
> > it'll likely work.
>
> RH 2.4.2-x. That was before we had the official VIA solution to the chipset
> bug. It was better to be safe than sorry for an end user distro.
But ... did this (limiting UDMA to 2) stop the bug from being manifested?
--
Vojtech Pavlik
SuSE Labs
> > RH 2.4.2-x. That was before we had the official VIA solution to the chipset
> > bug. It was better to be safe than sorry for an end user distro.
>
> But ... did this (limiting UDMA to 2) stop the bug from being manifested?
Mostly yes. The VIA bug appears to be dependent on heavy PCI loading. Now
that we have a proper fix it's all OK.
If you want a list of which VIA changes are in which RH release kernels,
[email protected] can give you a precise summary.
On Fri, Jan 04, 2002 at 11:20:00AM +0000, you [Alan Cox] claimed:
> > > RH 2.4.2-x. That was before we had the official VIA solution to the chipset
> > > bug. It was better to be safe than sorry for an end user distro.
> >
> > But ... did this (limiting UDMA to 2) stop the bug from being manifested?
>
> Mostly yes. The VIA bug appears to be dependant on heavy PCI loading. Now
> we have a proper fix its all ok.
We are still seeing what seems to be VIA PCI corruption when using an HPT370 on
an Abit KT7-RAID. This is under pretty high load (streaming read/write on two
disks in parallel). It appears as 90-160 byte disk corruption.
It has been reproduced on 2.2.18pre19 + ide, 2.2.20+ide and 2.4.15.
We now seem to have found a BIOS setting that cures this for 2.2.20+ide.
The weird thing is that if we boot 2.2.21pre2+ide (pre2 includes the 2.4
backported VIA fixes), the corruption occurs.
We'll try to diff lspci -vvxxx outputs and post a more detailed report
shortly.
-- v --
[email protected]
Petro <[email protected]>:
> On Wed, Jan 02, 2002 at 08:52:31PM -0500, Mark Hahn wrote:
> > On Wed, 2 Jan 2002, Ricky Beam wrote:
> > > PS: I once turned down a 360MHz Ultra10 in favor of a 167MHz Ultra1 because
> > > of the absolutely shitty IDE performance. The U1 was actually faster
> > > at compiling software. (Solaris 2.6, btw)
> > yeah, if Sun can't make IDE scream, then no one can eh?
>
> If SCSI had the economy of scale that IDE enjoys, it would be a lot
> cheaper than it is now. Not as cheap as IDE currently is, but still
> a lot cheaper.
>
> ATA/IDE is trying to pick and choose the best parts of SCSI w/out
> picking up the costs--which is an admirable goal. The question is
> how close can they get w/out incurring the costs?
About the time it attempts to support 16-60 drives on one controller
(15 targets, 4 luns per target), with full asynchronous operation.
The costs start accumulating with the async operation.
I've always treated IDE as only a part - the controller serving the equivalent
of a single SCSI target, with two LUNs. The PCI interface appears about
equivalent to that of a SCSI controller, but the IDE controller completely
drops the multiple-target feature (as well as the shared data/command bus).
IDE boards with 4 drives seem to be two IDE controllers using the same PCI
interface.
In my experience, SCSI is not cost effective for systems with a single disk.
As soon as you go to 4 or more disks, the throughput of SCSI takes over unless
you are expanding a pre-existing workstation configuration.
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: [email protected]
Any opinions expressed are solely my own.
In article <[email protected]> you wrote:
> In my experience, SCSI is not cost effective for systems with a single disk.
> As soon as you go to 4 or more disks, the throughput of SCSI takes over unless
> you are expanding a pre-existing workstation configuration.
IDE scales fine to 8 channels (aka 8 drives). Anything more than 8 drives on
an HBA is insane anyway.
I love the FC-to-IDE(8) solution. You get hardware RAID with 8 channels, each
drive a dedicated channel; that's much more reliable than the usual 2- or
3-channel SCSI configurations.
Do you really run more than, say, 10 hard disk devices on a single SCSI bus,
ever?
Greetings
Bernd
On Fri, Jan 04, 2002 at 03:37:21PM +0200, Ville Herva wrote:
>
> We are still seeing what seems to be Via PCI corruption when using HPT370 on
> Abit-KT7-RAID. This is pretty high load (stream read/write two disks in
> parallel.) It appears as 90-160 byte disk corruption.
>
> It has been reproduced on 2.2.18pre19 + ide, 2.2.20+ide and 2.4.15.
>
> We now seem to have found a BIOS setting that cures this for 2.2.20+ide.
> The weird thing is that if we boot 2.2.21pre2+ide (pre2 includes the 2.4
> backported VIA fixes), the corruption occurs.
>
> We'll try to diff lspci -vvxxx outputs and post a more detailed report
> shortly.
What's the BIOS setting?
-Dave
> We now seem to have found a BIOS setting that cures this for 2.2.20+ide.
> The weird thing is that if we boot 2.2.21pre2+ide (pre2 includes the 2.4
> backported VIA fixes), the corruption occurs.
Thats very interesting indeed. The more info you can send me the better
On Fri, Jan 04, 2002 at 11:56:57AM -0500, Mr. James W. Laferriere wrote:
> Hello Bernd , With a little searching I haven't found a source
> for the FC-ide controllers you mentioned in this email to the
> list . would you please share a URL: ? Tia , JimL
in no particular order:
http://www.synetic.net/Prod.html
http://www.synetic.net/Price-Lists/SyneRAID_Prices_USD.htm
http://www.trinity-tec.com/
http://www.advunibyte.de/Products/RAID/Workgroup320/Workgroup320-Frame.HTML
(just search for FC-to-IDE on google)
Greetings
Bernd
--
(OO) -- [email protected] --
( .. ) ecki@{inka.de,linux.de,debian.org} http://home.pages.de/~eckes/
o--o *plush* 2048/93600EFD eckes@irc +497257930613 BE5-RIPE
(O____O) When cryptography is outlawed, bayl bhgynjf jvyy unir cevinpl!
On Fri, 4 Jan 2002, Mark Hahn wrote:
> > Yes. IDE as a PtP device works nice. But this means that in most cases
> > it is possible to connect only half of expected devices. What a pity :(
> "expected". if you require 2disks/chain, then sure, ide will be
> a disappointment. since an ide chain costs O($15), you're crying
> about spilled beer...
Actually it costs much more. For example, a Promise Ultra100TX2
costs about $54 (so it is $27 per channel). The Ultra133TX2 is more expensive
- around $88. That is more than half the price of a TEKRAM DC-390U2W (Ultra2+Ultra
SCSI adapter) with the Symbios 53C895 chipset. Those are SRP prices with VAT.
Maybe prices in the USA/Canada are much lower...
Best regards,
Krzysztof Oledzki
On Friday 04 January 2002 11:14, Bernd Eckenfels wrote:
> On Fri, Jan 04, 2002 at 11:56:57AM -0500, Mr. James W. Laferriere wrote:
> > Hello Bernd , With a little searching I haven't found a source
> > for the FC-ide controllers you mentioned in this email to the
> > list . would you please share a URL: ? Tia , JimL
>
> in no particular order:
>
> http://www.synetic.net/Prod.html
> http://www.synetic.net/Price-Lists/SyneRAID_Prices_USD.htm
> http://www.trinity-tec.com/
> http://www.advunibyte.de/Products/RAID/Workgroup320/Workgroup320-Frame.HTML
>
> (just search for FC-to-IDE on google)
>
> Greetings
> Bernd
Why go Fibre Channel when Firewire is really starting to catch on?
--
[email protected].
>
> In article <[email protected]> you wrote:
> > In my experience, SCSI is not cost effective for systems with a single disk.
> > As soon as you go to 4 or more disks, the throughput of SCSI takes over unless
> > you are expanding a pre-existing workstation configuration.
>
> IDE scales fine to 8 channels (aka 8 drives). Anything more than 8 drives on
> an HBA is insane anyway.
>
> I love the FC-to-IDE(8) solution. You get hardware RAID with 8 channels, each
> drive a dedicated channel; that's much more reliable than the usual 2- or
> 3-channel SCSI configurations.
>
> Do you really run more than, say, 10 hard disk devices on a single SCSI bus,
> ever?
The only place I've seen that (not sure it was true SCSI though) was on a
Cray file server - each target (4) was itself a RAID 5, with 5 transports.
That would be a total of 20 disks. The four targets were striped together in
software to form a single filesystem. I think that was our first over-300GB
filesystem (several years ago). Now we are using 1 TB to 5 TB filesystems
with nearly 5 million files each.
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: [email protected]
Any opinions expressed are solely my own.
On Wed, 2 Jan 2002 20:52:31 -0500 (EST)
Mark Hahn <[email protected]> wrote:
> my goodness; it's been so long since l-k saw this traditional sport!
> nothing much has changed in the intrim: SCSI still costs 2-3x as much,
> and still offers the same,
Hm, maybe this is an interesting story for you:
indeed SCSI costs more, but across vendors there is not much of a difference in
SCSI pricing. So you do it intelligently: buy brand names that have 5-year
warranties. If you really use the drives, they will fail within warranty, and you
get the _original_ price back (because in 4 years or so a replacement is
impossible, since the models are all gone). For this money you go ahead and
buy a new one (which is of course state-of-the-art), at no new investment at
all. With IDE you are busted, because no vendor has any warranty lasting long
enough. Don't try to argue that this is an unfair comparison; warranty counts.
Don't tell me this is not going to work, because it _does_.
Your price argument is _zero_ for anyone who knows the market.
Regards,
Stephan
> all. With IDE you are busted, because no vendor has any warranty lasting long
> enough. Don't try to argue that this is an unfair comparison; warranty counts.
> Don't tell me this is not going to work, because it _does_.
Right at the moment the same process seems to work for IDE drives with 1
year warranties.
Alan (raid addict ;))
On Fri, 4 Jan 2002 18:38:33 +0000 (GMT)
Alan Cox <[email protected]> wrote:
> > all. With IDE you are busted, because no vendor has any warranty lasting
> > long enough. Don't try to argue that this is an unfair comparison; warranty
> > counts. Don't tell me this is not going to work, because it _does_.
>
> Right at the moment the same process seems to work for IDE drives with 1
> year warranties.
In this case you obviously _need_ the hotplug feature, or you will never reach
the linux max uptime value ;-)
Regards,
Stephan
In article <[email protected]> you wrote:
> Why go Fibre Channel when Firewire is really starting to catch on?
Because Firewire is consumer electronics and nearly dead. I don't know of
enterprise solutions with Firewire. Besides, there is no switching support for
it.
I am happy to see those FC-S/ATA controllers soon :)
Greetings
Bernd
> Because Firewire is consumer electronics and nearly dead. I don't know of
> enterprise solutions with Firewire. Besides, there is no
> switching support for it.
Also the bandwidth differences:
Firewire (Generation 1, what you can get now) is 400 Mbit/s (i.e. 50 MByte/s)
FC Gen 1 is 100 MByte/s
Gen 2 is 200 MByte/s
(OK, I know those last two numbers are right, but I don't
know what the NAMES of the standards are :)
Firewire isn't even supposed to be in the same league! :)
Dana Lacoste
Ottawa, Canada
On Fri, 4 Jan 2002, Bernd Eckenfels wrote:
> In article <[email protected]> you wrote:
> > Why go Fibre Channel when Firewire is really starting to catch on?
>
> Because Firewire is consumer electronics and nearly dead. I don't know of
> enterprise solutions with Firewire. Besides, there is no switching support
> for it.
That would explain all of those firewire cameras, VTRs, editing decks,
televisions, DVD players, CD players, computers, hard drives, tape drives,
CD burners, DVD burners, MP3 players, and oscilloscopes that keep popping
up with 1394 interfaces. You should write to their manufacturers and let
them know about their mistake!
-jwb
On Friday 04 January 2002 12:40, Bernd Eckenfels wrote:
> In article <[email protected]> you wrote:
> > Why go Fibre Channel when Firewire is really starting to catch on?
>
> Because Firewire is consumer electronics and nearly dead. I don't know of
> enterprise solutions with Firewire. Besides, there is no switching support
> for it.
>
> I am happy to see those FC-S/ATA controllers soon :)
>
> Greetings
> Bernd
Check out http://www.sancube.com to see what I'm looking for. Also,
I'm hoping to see native IEEE 1394b disk drives (maybe Serial ATA
will help, but it's not an external solution IIRC).
--
[email protected].
On 20020104 Alan Cox wrote:
>> all. With IDE you are busted, because no vendor has any warranty lasting long
>> enough. Don't try to argue that this is unfair comparison, warranty counts.
>> Don't tell me this is not going to work, because it _does_.
>
>Right at the moment the same process seems to work for IDE drives with 1
>year warranties.
>
Yup, we are making IBM eat up its 'crystal plate' new drives. I have seen
two of them die in a month. And we will return the third as well, without
even waiting for it to fail.
(btw, I am still using - in low-end linux boxen - Quantum SCSI drives that
came with prehistoric Macs, SEs and so on, so they must be about 8 years
old. They work - slow by today's standards, but they work. Can anybody say the
same about IDE drives?)
--
J.A. Magallon # Let the source be with you...
mailto:[email protected]
Mandrake Linux release 8.2 (Cooker) for i586
Linux werewolf 2.4.18-pre1-beo #1 SMP Fri Jan 4 02:25:59 CET 2002 i686
On Fri, 4 Jan 2002, Alan Cox wrote:
> > all. With IDE you are busted, because no vendor has any warranty lasting long
> > enough. Don't try to argue that this is unfair comparison, warranty counts.
> > Don't tell me this is not going to work, because it _does_.
>
> Right at the moment the same process seems to work for IDE drives with 1
> year warranties.
Please consider picking up a modern drive and you will see it has a "THREE" (3)
year warranty period, which is about the length of service for a continuously
run device according to the MTBF.
Regards,
Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project
Vojtech Pavlik wrote:
> > My new IBM 40GB hard drive on the ide0 controller (alone, as master) always
> > gets set at boot to UDMA2 mode, not UDMA5.
> > The second, identical drive on the onboard Promise controller gets set to
> > UDMA5 and runs much faster.
> >
> > I looked in the BIOS setup, and the BIOS sets the first ide0 drive to UDMA5,
> > which at least says that the cable is the correct one, and that it is the
> > Linux boot which changes the setting to UDMA2.
> >
> > Here are the related pieces of dmesg. As you see, I use the RH rawhide 2.4.16
> > kernel, which is something like 2.4.17-pre8, I think:
>
> Some RH kernels (may include yours) deliberately disable UDMA3, 4 and 5
> on any VIA IDE controller. I don't know why. Unpatch your kernel and
> it'll likely work.
>
Thanks, where should I look in the code to see if this is applicable to my
kernel version?
Also, the RH 7.2 stock 2.4.7 kernel was totally unhappy with my configuration
(VIA-IDE: chipset unknown - contact you) and DMA could not be set at all.
This was my main reason to upgrade to 2.4.16.
Regards, Dmitri
[email protected] (J.A. Magallon) writes:
> (btw, I am still using - in low-end linux boxen - Quantum SCSI drives that
> came with prehistoric Macs, SEs and so on, so they must be about 8 years
> old. They work - slow by today's standards, but they work. Can anybody say the
> same about IDE drives?)
I can. Two IDE drives from one of my machines:
hda: Conner Peripherals 210MB - CP3201F, 203MB w/64kB Cache, CHS=684/16/38
hdb: WDC AC2170M, 162MB w/32kB Cache, CHS=1010/6/55
The Western Digital drive was bought with the PC in January 1993 (486sx24
later upgraded to 486dx30), so has reached 9 years old. The Conner is
a year or two younger.
The machine initially spent most of its time constantly on, had an idle
period of several years, and is now on full time again as a DNS server.
--
`O O' | [email protected]
// ^ \\ | http://www.pyrites.org.uk/
Dana Lacoste <[email protected]> writes:
>Also the bandwidth differences :
>Firewire (Generation 1, what you can get now) is 400Mbit/s
>FC Gen 1 is 100MByte/s
>Gen 2 is 200MByte/s
>(OK, I know those last two numbers are right, but I don't
>know what the NAMES of the standards are :)
>Firewire isn't even supposed to be in the same league! :)
That wasn't expected of IDE in the war against SCSI either, but look
where we are now. :-)
The one argument that no one has brought up here (and it is the
killer argument for me in IDE vs. SCSI) is "external disk trays". Try
that with IDE (current IDE please, no Serial ATA ;-) ) without lots of
out-of-spec cables dangling out of your "enterprise computing
solution".
If you need more than, say, three or four disks, your solution is
SCSI. Or FibreChannel.
Regards
Henning
--
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH [email protected]
Am Schwabachgrund 22 Fon.: 09131 / 50654-0 [email protected]
D-91054 Buckenhof Fax.: 09131 / 50654-20
"Jeffrey W. Baker" <[email protected]> writes:
>> Because Firewire is Consumer electronics and nearly dead. Dont now of
>> Enterpise Solutions with Firewire. Besides there is no switching support for
>> it.
>That would explain all of those firewire cameras, VTRs, editing decks,
>televisions, DVD players, CD players, computers, hard drives, tape drives,
>CD burners, DVD burners, MP3 players, and oscilloscopes that keep popping
>up with 1394 interfaces. You should write to their manufacturers and let
>them know about their mistake!
FireWire is a USB competitor. I remember the first IEEE 1394
announcements about five or six years ago, and suddenly someone (IIRC M$
and Intel) unveiled their "Universal Serial Bus" spec, and I thought to
myself "oh look, once again a good solution goes down the drain
because people with market share drive an inferior solution that is
five cents cheaper".
FC really is another league. Talk SAN. Talk storage switch
fabrics. Talk redundant switched paths to disks in another
building. Talk enterprise solutions.
Regards
Henning
--
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH [email protected]
Am Schwabachgrund 22 Fon.: 09131 / 50654-0 [email protected]
D-91054 Buckenhof Fax.: 09131 / 50654-20
"J.A. Magallon" <[email protected]> writes:
>(btw, I am still using -in low end linux boxen- Quantum SCSI drives that
>came with prehistoric Macs, SEs and so on, so they can be about 8 years
>old. They work, slow by today's standards, but work. Can anybody say the
>same about IDE drives?)
I have a Seagate ST238R (30 MB, 5 1/4" full size) on my Amiga. It
still spins up... haven't tried reading from it, though. 14 years old.
Regards
Henning
--
Dipl.-Inf. (Univ.) Henning P. Schmiedehausen -- Geschaeftsfuehrer
INTERMETA - Gesellschaft fuer Mehrwertdienste mbH [email protected]
Am Schwabachgrund 22 Fon.: 09131 / 50654-0 [email protected]
D-91054 Buckenhof Fax.: 09131 / 50654-20
On Sat, Jan 05, 2002 at 11:58:37AM +0000, Henning P. Schmiedehausen wrote:
> If you need more than say, three or four disks, your solution is
> SCSI. Or FibreChannel.
Which points up another dimension to this issue, that of "host
controller vs. drive electronics".
Most of the FW drives I've seen use a FW->IDE bridge and have IDE
drives inside. This overcomes the biggest problem with IDE IMO: the
limit of one (or two, if you'll accept the performance degradation)
drives per channel, leading to lots of cables to fuss with.
I've got a 1.8T Exabyte disk box that has 24 IDE drives in it. It
attaches to the host computer via FC.
--
Share and Enjoy.
Alan Cox wrote:
> > Some RH kernels (may include yours) deliberately disable UDMA3, 4 and 5
> > on any VIA IDE controller. I don't know why. Unpatch your kernel and
> > it'll likely work.
>
> RH 2.4.2-x. That was before we had the official VIA solution to the chipset
> bug. It was better to be safe than sorry for an end user distro.
>
Yes, indeed. It seems the RH 2.4.16-0.13 kernel still enforces disabling
UDMA > 2 for VIA, by means of setting the cable type to 40w, even if 80w
is present:
# cat /proc/ide/via
                       ----Primary IDE---- ---Secondary IDE---
Read DMA FIFO flush:          yes                 yes
End Sector FIFO flush:        no                  no
Prefetch Buffer:              yes                 no
Post Write Buffer:            yes                 no
Enabled:                      yes                 yes
Simplex only:                 no                  no
Cable Type:                   40w                 40w
If I force higher UDMA with the ide0=ata66 kernel option, as discussed in
RH bug 35274, ide0 is set to UDMA5 (not the cable type, though) and
everything is working.
I'll file a bug against RH kernel.
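For reference, a minimal sketch of that workaround (the lilo stanza and the
device name are illustrative, not from the report; hdparm's -X argument is
64 plus the UDMA mode number):
append="ide0=ata66"     (kernel command line, e.g. in /etc/lilo.conf)
# hdparm -i /dev/hda    (shows which UDMA mode was negotiated after reboot)
# hdparm -X69 /dev/hda  (forces UDMA mode 5 at runtime: 64 + 5; use with care)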
Thanks everybody, Dmitri
You're all DEAD WRONG.
IDE and SCSI both suck!
The way of the future is punch cards!
;)
--
Stevie-O
Real programmers use COPY CON PROGRAM.EXE
(Except those who use cat > a.out)
On Mon, 7 Jan 2002, Stevie O wrote:
> You're all DEAD WRONG.
>
> IDE and SCSI both suck!
>
> The way of the future is punch cards!
Please! Next time warn me to put down my drink before doing something like
that. Do you realize how hard it is to clean this stuff off the monitor?
I have this vision of Charlie Chaplin feeding cards through a reader at
the rate of 385,000 cards per second. Of course, one gets stuck and
causes chaos.
I remember Hollerith Cards.
(WARNING: Put down the Jones Soda!)
On Mon, 7 Jan 2002, Thomas Molina wrote:
>On Mon, 7 Jan 2002, Stevie O wrote:
>> The way of the future is punch cards!
>
>I remember Hollerith Cards.
Paper tape! If it's good enough for the .gov, it's good enough for you.
--Ricky
PS: If we lived in Mr Hahn's world, we'd all still be using MFM/RLL drives.
(He seems to have forgotten what IDE stands for.)
On Mon, 7 Jan 2002, Ricky Beam wrote:
> >I remember Hollerith Cards.
>
> Paper tape! If it's good enough for the .gov, it's good enough for you.
Have you ever seen an ASR-33 paper tape pileup? I have; I assure you it's
not a pretty sight.
Thomas Molina <[email protected]>:
>
> On Mon, 7 Jan 2002, Ricky Beam wrote:
>
> > >I remember Hollerith Cards.
> >
> > Paper tape! If it's good enough for the .gov, it's good enough for you.
>
> Have you ever seen an ASR-33 paper tape pileup? I have; I assure you it's
> not a pretty sight.
Didn't see one of those, but I did see (caused) a pileup from a 30in/sec
reader.... just try to catch the input before it tears the tape when it
falls out of the holder ... :-)
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: [email protected]
Any opinions expressed are solely my own.
Uttered "Jesse Pollard" <[email protected]>, spoke thus:
> Thomas Molina <[email protected]>:
> >
> > On Mon, 7 Jan 2002, Ricky Beam wrote:
> >
> > > >I remember Hollerith Cards.
> > >
> > > Paper tape! If it's good enough for the .gov, it's good enough for you.
> >
> > Have you ever seen an ASR-33 paper tape pileup? I have; I assure you it's
> > not a pretty sight.
>
> Didn't see one of those, but I did see (caused) a pileup from a 30in/sec
> reader.... just try to catch the input before it tears the tape when it
> falls out of the holder ... :-)
It was always great fun to switch the 3000 line-per-minute printer back online
while the operator was down behind the printer, trying to get the
paper to fold properly.
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- + -- -- -- -- -- -- -- -- -- --
Tommy Reynolds | mailto: <[email protected]>
Red Hat, Inc., Embedded Development Services | Phone: +1.256.704.9286
307 Wynn Drive NW, Huntsville, AL 35805 USA | FAX: +1.256.837.3839
Senior Software Developer | Mobile: +1.919.641.2923
On Mon, 7 Jan 2002, Ricky Beam wrote:
> --Ricky
>
> PS: If we lived in Mr Hahn's world, we'd all still be using MFM/RLL drives.
> (He seems to have forgotten what IDE stands for.)
/KILLFILE
Andre Hedrick
Linux ATA Development
On Mon, Jan 07, 2002 at 03:11:22AM -0500, Stevie O wrote:
> You're all DEAD WRONG.
> IDE and SCSI both suck!
> The way of the future is punch cards!
Are there any drivers for a paper-tape reader?
--
Share and Enjoy.
Hi Folks!
Since we are back to where I began, maybe you can help me. A friend of mine
and I are looking for a PDP-11 DOS/BATCH system image. We have images of
RSX-11, RTS, RT-11, etc. But DOS/BATCH is missing. I recall having
disassembled some parts of it and it was beautiful!
Any pointers will be most welcome.
Edesio
[email protected]
On Mon, Jan 07, 2002 at 10:40:34AM -0600, Thomas Molina wrote:
> On Mon, 7 Jan 2002, Ricky Beam wrote:
>
> > >I remember Hollerith Cards.
> >
> > Paper tape! If it's good enough for the .gov, it's good enough for you.
>
> Have you ever seen an ASR-33 paper tape pileup? I have; I assure you it's
> not a pretty sight.
> On Mon, Jan 07, 2002 at 03:11:22AM -0500, Stevie O wrote:
> > You're all DEAD WRONG.
> > IDE and SCSI both suck!
> > The way of the future is punch cards!
I wonder what will survive time to serve as a record of our epoch for
future historians. Punch cards may have a better chance than magnetic
carriers.
I can still read my programs on punch cards from the early eighties (you
know, program lines were printed on top of the card; they just need some
sorting to be operational :) ), but where are all my floppies from later
times??
So we still may be known in the future as the punch card civilization! :)
DMITRI
On Mon, 7 Jan 2002, Edesio Costa e Silva wrote:
> Since we are back to where I began, maybe you can help me. A friend of mine
> and I are looking for a PDP-11 DOS/BATCH system image. We have images of
> RSX-11, RTS, RT-11, etc. But DOS/BATCH is missing. I recall having
> disassembled some parts of it and it was beautiful!
Can't help you there, but I do have a fig-Forth listing for the PDP-11
and an ILLIAC (I) Programmers Manual. :-)
--
M. Edward "Ancient Stripe" Borasky
[email protected]
http://www.borasky-research.net
Give me your brains or I'll blow your money out.
> On Mon, Jan 07, 2002 at 03:11:22AM -0500, Stevie O wrote:
> > You're all DEAD WRONG.
> > IDE and SCSI both suck!
> > The way of the future is punch cards!
>
> Are there any drivers for a paper-tape reader?
2.2 S/390 code seems to have one
On Tue, 8 Jan 2002, Alan Cox wrote:
> 2.2 S/390 code seems to have one
I have this mental image of some intern or co-op student on their first
day at IBM.
"Hey, you.. new kid.. Write a driver for this"
mike
On Tuesday 08 January 2002 2:45 pm, Mike Dresser wrote:
> On Tue, 8 Jan 2002, Alan Cox wrote:
> > 2.2 S/390 code seems to have one
>
> I have this mental image of some intern or co-op student on their first
> day at IBM.
>
> "Hey, you.. new kid.. Write a driver for this"
Yes. "Write device driver for (some IBM tape drive) on S/390" was indeed on
one project list for students they put on the WWW recently...
James.
> > > You're all DEAD WRONG.
> > > IDE and SCSI both suck!
> > > The way of the future is punch cards!
> >
> > Are there any drivers for a paper-tape reader?
>
> 2.2 S/390 code seems to have one
I thought paper-tape readers were serial ???
--
Lab tests show that use of micro$oft causes cancer in lab animals
On Friday 04 January 2002 12:40 pm, Jesse Pollard wrote:
> > In article <[email protected]> you wrote:
> > > In my experience, SCSI is not cost effective for systems with a single
> > > disk. As soon as you go to 4 or more disks, the throughput of SCSI
> > > takes over unless you are expanding a pre-existing workstation
> > > configuration.
> >
> > IDE scales fine to 8 channels (aka 8 drives). Anything more than 8 drives
> > on an HBA is insane anyway.
> >
> > I love the FC-to-IDE(8) solution. You get hardware RAID with 8 channels,
> > each drive a dedicated channel; that's much more reliable than the usual
> > 2 or 3 channel SCSI configurations.
> >
> > Do you really run more than, say, 10 hard disk devices on a single SCSI
> > bus, ever?
>
The only place I've seen that (not sure it was true SCSI though) was on a
Cray file server - each target (4) was itself a RAID 5, with 5 transports.
That would be a total of 20 disks. The four targets were striped together in
software to form a single filesystem. I think that was our first over-300GB
filesystem (several years ago). Now we are using 1 TB to 5 TB filesystems
with nearly 5 million files each.
A couple of years back I played with a 20-way SCSI software RAID, which was
actually dual QLogic Fibre Channel, 10 drives per controller. That was done
for throughput reasons (attempting to capture uncompressed HDTV signals to
disk needed something like 160 megabytes per second, which of course
required a 66 MHz, 64-bit PCI bus...) It broke the kernel in more than one
place, by the way. Long since fixed, I believe. (Try it and see. :)
More recently I've played around a little with many-way IDE software RAID,
which is much more fun. The hard part is really getting enough power for all
the drives (ever seen an enormous tower case with three power supplies
mounted in it, AND a lot of splitters?). The goal was to see how cheaply we
could get to 10 terabytes of storage, so we didn't really care about
throughput there (the output of the cluster was the gigabit ethernet uplink
on the switch, and the output of each node was a pair of bonded 100baseT
ethernet adapters). So we were willing to hang two drives off each IDE
controller (master and slave), which halved the throughput, but that wasn't
the bottleneck anyway. (Dual 100baseT is 22 megabytes per second; a SINGLE
modern IDE drive can swamp that. And the RAID could max out the PCI bus
pretty easily with one drive per controller...)
You can cheaply get generic IDE expansion cards with at least 2 controllers
on each card (4 drives per card), and you can get a board with onboard video,
at least one onboard NIC (we had two onboard, that's why we did the bonding),
two onboard IDE controllers, and 4 free PCI slots. That's 4 drives onboard,
16 through PCI, for a total of 20 drives. (You need to tweak the Linux
kernel to allow that many IDE controllers. PCI plug and pray is your friend
here...)
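For reference, a minimal sketch, assuming the tweak in question is the
MAX_HWIFS constant in a 2.4-era i386 tree (the exact header and default
vary by version and config):
# grep 'define MAX_HWIFS' include/asm-i386/ide.h
#define MAX_HWIFS	10
Raising that value and rebuilding lets the IDE layer register more than the
default number of interfaces.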
The goal was to see how cheaply we could get to 10 terabytes of storage. We
didn't do the whole cluster, but I think we determined we only needed 4 or 5
nodes to do it. Then the dot-com crash hit and that company's business model
changed, and the project got shelved...
But if you think about serving MPEG-4 video streams through simple TCP/IP
(DVD quality is only about 150 kilobytes per second with MPEG-4; add in 5
megs of buffer at the client side and who cares about latency...), really
big IDE software RAID systems are quite nice. :)
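For reference, a minimal sketch of one node's array with the raidtools of
the day (device names, disk count and chunk size are hypothetical, not the
actual setup described above):
# cat /etc/raidtab
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdg1
        raid-disk               1
        device                  /dev/hdi1
        raid-disk               2
        device                  /dev/hdk1
        raid-disk               3
# mkraid /dev/md0
# mke2fs /dev/md0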
Rob
On Tue, Jan 08, 2002 at 07:41:42AM -0500, Rob Landley wrote:
> The goal was to see how cheaply we could get to 10 terabytes of storage. We
> didn't do the whole cluster, but I think we determined we only needed 4 or 5
> nodes to do it. Then the dot-com crash hit and that company's business model
> changed, project got shelved...
Hi Rob, how did you manage to get 10TB of storage? It's my understanding
that the kernel block layer still counts 1 kB blocks using a 32-bit (signed)
integer, so that's 2TB in total. Are you talking about 5 x 2TB?
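(For reference, the arithmetic: a signed 32-bit count allows 2^31 blocks,
and 2^31 x 1 kB = 2 TB per block device, hence the ceiling assumed here.)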
--
William Park, Open Geometry Consulting, <[email protected]>.
8 CPU cluster, NAS, (Slackware) Linux, Python, LaTeX, Vim, Mutt, Tin
On Fri, 4 Jan 2002, Alan Cox wrote:
>Right at the moment the same process seems to work for IDE drives with 1
>year warranties.
*grin* SCSI drives last a long time and have long warranties. IDE drives
fail often and have short warranties. It all averages out. The point is
how often do you send a drive back? With IDE, it's all the time. With
SCSI it's rare.
Case in point, how many SCSI drives have been bad right out of the box vs.
IDE? In my experience, I've never had a bad SCSI drive from the get-go.
I currently have one Maxtor waiting to be sent back, and 2 out of 16 for
a 1G array were defective at powerup. (2 more failed within a week.)
Which is cheaper... aspirin and shipping charges, or going SCSI from the
get-go? (I know, but I don't like headaches! And the lovely Caen Raptor
line makes things way too expensive for my boss.)
--Ricky
On Fri, 4 Jan 2002, Bernd Eckenfels wrote:
>In article <[email protected]> you wrote:
>> Why go Fibre Channel when Firewire is really starting to catch on?
>
>Because Firewire is consumer electronics and nearly dead. I don't know of
>enterprise solutions with Firewire. Besides, there is no switching support
>for it.
Actually, I suggested the use of 1394 drives instead of the extremely old
and crappy DLT tape drive. There was just one problem... Solaris/Sparc
doesn't support SBP2. (In fact, it only supports Sun's firewire camera.)
--Ricky
On Fri, 4 Jan 2002, Andre Hedrick wrote:
>Please consider picking up a modern drive and see it has a "THREE" (3)
>Year warranty period which is about the length of service for a continuous
>run device on the MTBF.
3 years is ~27k hours. The MTBF on modern drives is more like 57 years
(500k hours); 100k hours is 11+ years. No IDE drive ever manufactured
will last that long. (Maybe if it's sitting on a shelf for 90% of its life.)
--Ricky
On Sat, 5 Jan 2002, Henning P. Schmiedehausen wrote:
>The one argument that noone brought around here is (and it is the
>killer argument for me in IDE vs. SCSI): "external disk trays". Try
>that with IDE (current IDE please. No SerialATA. ;-) ) without lots of
>"out of spec" cables dangling out of your "enterprise computing
>solution".
You don't have to be out of spec, but doing things within the spec becomes
VERY expensive. Those cheap-ass $30 IDE drive trays are 110% shit. Only
when you get into the >$100 models do they work reliably without setting
the office on fire. Even the 4500 RPM drives get too hot in the cheapies.
An IDE drive with an SCA connector would be great.
--Ricky
> >Please consider picking up a modern drive and see it has a "THREE" (3)
> >Year warranty period which is about the length of service for a continuous
> >run device on the MTBF.
>
> 3 years is ~27k hours. The MTBF on modern drives is more like 57 years
> (500k hours); 100k hours is 11+ years. No IDE drive ever manufactured
> will last that long. (Maybe if it's sitting on a shelf for 90% of its life.)
just say what you mean: IDE condemns your soul to eternal damnation!
On Tuesday 08 January 2002 17:49, Ricky Beam wrote:
> On Fri, 4 Jan 2002, Bernd Eckenfels wrote:
> >In article <[email protected]> you wrote:
> >> Why go Fibre Channel when Firewire is really starting to catch on?
> >
> >Because Firewire is consumer electronics and nearly dead. I don't know of
> >enterprise solutions with Firewire. Besides, there is no switching support
> >for it.
>
> Actually, I suggested the use of 1394 drives instead of the extremely old
> and crappy DLT tape drive. There was just one problem... Solaris/Sparc
> doesn't support SBP2. (In fact, it only supports Sun's firewire camera.)
So their new workstations come with Firewire ports but don't support SBP-2.
Ugh. That's Sun for you.
--
[email protected].
On Tue, 8 Jan 2002, Timothy Covell wrote:
>So their new workstations come with Firewire ports but don't support SBP-2.
>Ugh. That's Sun for you.
It's even worse... they've been part of the IEEE standards body since 1996!
As I recall, it's fully supported under x86. Hello, type "make" on a sparc?
I've been tempted to write a driver myself, but lack the motivation. (*my*
sparc will run linux just fine :-))
--Ricky
Ricky,
We have all tried to be nice and you continue to wage a battle of storage
classes. Do you recall a movie called "Animal House", where the Delta
house is before a review board?
"BLOW JOB" "BLOW JOB" "BLOW JOB" "BLOW JOB" "BLOW JOB" "BLOW JOB"
Wheeeeeeeeeeeeee, and everyone runs out and leaves the subject alone. Does
that help?
On Tue, 8 Jan 2002, Ricky Beam wrote:
> On Fri, 4 Jan 2002, Andre Hedrick wrote:
> >Please consider picking up a modern drive and see it has a "THREE" (3)
> >Year warranty period which is about the length of service for a continuous
> >run device on the MTBF.
>
> 3 years is ~27k hours. The MTBF on modern drives is more like 57 years
> (500k hours); 100k hours is 11+ years. No IDE drive ever manufactured
> will last that long. (Maybe if it's sitting on a shelf for 90% of its life.)
>
> --Ricky
>
>
On Wed, 9 Jan 2002, Andre Hedrick wrote:
>
> Ricky,
[SNIPPED...]
Don't think for an instant that MTBF has anything to do with the
actual life-time of a device. The only correlation is that a
longer MTBF may mean a longer life-time. MTBF is not equal
to life-time at all.
MTBF is a numerical value obtained by using an agreed upon
method of calculation or observation. MTBF will demonstrate
that a machine that has no components will run forever.
Of course it won't function.
Demonstrated MTBF is often (usually) obtained by taking a
large number of components and subjecting them to short-term
tests. This fails to produce any evidence of real life-time
as the following example will show:
Suppose we have a timer chip that has a defective design in
a stage which will short out and blow the device after 2 hours
of operation. We want to measure the demonstrated MTBF so we
take 10,000 chips and run them for an hour. None fail. We have
now demonstrated 10,000 hr MTBF. Simple. This is not a joke.
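For reference, the arithmetic behind that example: demonstrated MTBF is
roughly total device-hours divided by observed failures (with zero failures,
one is conventionally assumed), so 10,000 chips x 1 hour gives 10,000
device-hours and a claimed MTBF of 10,000 hours, even though every unit in
the example dies at hour two.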
What is the demonstrated MTBF of a fuse? You have to destroy
it to see if it worked -- at which time it has failed.
Marketing grabbed another buzz-word and used it as a ploy to
attract customers when MTBF started appearing in consumer oriented
data sheets.
Actual observation by many of mechanical devices such as trucks,
tractors, steel-roll mills and disk drives shows that once started,
they tend to run forever. However, they fail to restart if shut down
after a long period of operation. A disk drive that sits on a shelf
often doesn't fare any better. It's like fruit that starts to decay
after being picked from the "Disk Drive Tree".
Cheers,
Dick Johnson
Penguin : Linux version 2.4.1 on an i686 machine (797.90 BogoMips).
I was going to compile a list of innovations that could be
attributed to Microsoft. Once I realized that Ctrl-Alt-Del
was handled in the BIOS, I found that there aren't any.
On Tuesday 08 January 2002 04:18 pm, William Park wrote:
> On Tue, Jan 08, 2002 at 07:41:42AM -0500, Rob Landley wrote:
> > The goal was to see how cheaply we could get to 10 terabytes of storage.
> > We didn't do the whole cluster, but I think we determined we only needed
> > 4 or 5 nodes to do it. Then the dot-com crash hit and that company's
> > business model changed, and the project got shelved...
>
> Hi Rob, how did you manage to get 10TB storage? It's my understanding
> that kernel block device still counts 1kB blocks using 32bit (signed)
> integer. So, that's 2TB in total. Are you talking about 5 x 2TB?
Made a cluster.
We were extracting stuff out of it via URL, with a database to look up where
each URL lived, so we could have different files live on different servers.
(If we'd wanted everything to look like it lived on exactly the same machine,
we could have had one machine mount the other machines' space via samba or
nfs, but that would have created extra network traffic inside the cluster.)
The proposed design was to have the whole cluster look like it was at 1
public IP address via IP masquerading and port forwarding (port 80 is the
apache on node 0, 81 is the apache on node 1, 82 is the apache on node 2...)
This was just to save world-routable IPs. We didn't get that far...
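For reference, a minimal sketch of that forwarding scheme with 2.4's
iptables (the interface and node addresses are hypothetical):
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.1:80
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 81 \
    -j DNAT --to-destination 10.0.0.2:80
Each public port is rewritten to port 80 on one node behind the
masquerading box.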
Basically, we just wanted lots of storage, cheap and reliable (we were doing
RAID 5 across the disks in each cluster), and didn't care what it looked
like. We were also experimenting with DVD jukeboxes to feed data into the
cluster (the cluster was a cache for larger offline storage; the project was
to license syndicated television content (old episodes of MASH, Battlestar
Galactica, you name it) and provide video on demand for a flat monthly fee.
Each local cable company would have a cluster, which would pull data through
the internet from servers in Atlanta or California, wherever an ultimate
content licensor lived. It could be shipped around on DVD stacks too...)
Fun project. Too bad it didn't work out...
Rob
On Wed, Jan 09, 2002 at 05:56:32AM -0500, Rob Landley wrote:
> On Tuesday 08 January 2002 04:18 pm, William Park wrote:
> > Hi Rob, how did you manage to get 10TB of storage? It's my understanding
> > that the kernel block layer still counts 1 kB blocks using a 32-bit
> > (signed) integer, so that's 2TB in total. Are you talking about 5 x 2TB?
>
> Made a cluster.
> [...]
> Fun project. Too bad it didn't work out...
Darn... You could do it nicely now with 10 servers, 1TB in each box.
Since you're only "broadcasting", you can mount the disks read-only,
too. :-)
--
William Park, Open Geometry Consulting, <[email protected]>.
8 CPU cluster, NAS, (Slackware) Linux, Python, LaTeX, Vim, Mutt, Tin