Hello,
I need someone to confirm that Linux is capable of doing large amounts
of I/O with hardware RAID controllers. I have tried 4 RAID controllers
(2 of which have been confirmed to have issues with Linux... namely the
Dell PERC *megaraid* series and an Adaptec card *aacraid*) and have not
been able to obtain more than 60MB/s doing hardware RAID 5. The RAID
cards I'm testing are quad-channel Ultra160s with a total of 8 10k 72GB
Ultra320 drives (2 per channel) per RAID volume... thus I should be able
to do a fairly large amount of I/O (100+MB/s sequential writes, I'd
assume).
I have tried every possible striping configuration along with multiple
filesystem (ext2/3/xfs) configurations, so I do not believe it is an
issue with all 4 cards (I am currently testing a Mylex eXtremeRAID 2000
and have seen these do much more than 60MB/s in the past on other
platforms).
I've tried tuning elevator settings, read-ahead (in 2.4), and changing
the scheduler under 2.6 between the default and deadline. Is there
something that needs to be done to get Linux to do large amounts of
I/O? Do the drivers I'm trying (aacraid, megaraid, DAC960, dpt_i2o)
have performance problems?
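For reference, here's roughly what I've been trying (device name is
illustrative, not my actual one):
# blockdev --setra 8192 /dev/sdb      (2.4: raise read-ahead, in sectors)
# elvtune -r 2048 -w 8192 /dev/sdb    (2.4: elevator read/write latency)
plus booting 2.6 with "elevator=deadline" on the kernel command line.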
Please confirm, and if possible suggest the settings needed to get
Linux into a high-I/O mode, or general places to tune I/O.
Thanks,
Ben
I get the following performance numbers from a Dell PowerEdge 1750 (2x
Xeon) with a PERC 4/Di (megaraid driver): 13x U320 10k 146GB disks
configured as RAID 5 in a PowerVault 220s external U320 enclosure.
Kernel is 2.6.3 (Mandrake 10.0)
# bonnie++ -u mwatts -d .
Using uid:500, gid:500.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.02c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ircd.eris.qineti 4G 28774  99 89047  93 54483  21 34335  89 115549 20 575.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2192  99 +++++ +++ +++++ +++  2197  99 +++++ +++  5526  99
ircd.eris.qinetiq.com,4G,28774,99,89047,93,54483,21,34335,89,115549,20,575.2,1,16,2192,99,+++++,+++,+++++,+++,2197,99,+++++,+++,5526,99
hdparm:
# hdparm -tT /dev/sdb
/dev/sdb:
Timing buffer-cache reads: 2812 MB in 2.00 seconds = 1404.11 MB/sec
Timing buffered disk reads: 294 MB in 3.00 seconds = 97.92 MB/sec
I've been advised that the megaraid2 driver can be faster, but I've yet to try
it.
Mark.
Hello,
I have some additional questions to help others better diagnose your
problem. Currently I suspect a bottleneck on your mainboard :)
In article <[email protected]> you wrote:
> Dell PERC *megaraid* series and an Adaptec card *aacraid*) and have not
> been able to obtain more than 60MB/s doing hardware RAID 5.
Is that peak or saturated load? You may want to try to fill only the
write-back cache of the controller, so you are not affected by slow
RAID processing or the disks.
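For example, a burst small enough to fit in the controller's write-back
cache (destructive to the raw device, and assuming a 128MB cache; adjust
count for yours):
# dd if=/dev/zero of=/dev/sdb bs=1M count=64
If that burst is fast while the sustained rate is not, the limit is
behind the cache (RAID logic or disks), not the bus or driver.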
RAID 5 is not very good at speeding up random reads or sequential writes.
Perhaps you want to try striping at level 0.
> The RAID cards I'm testing are quad-channel Ultra160s with a total of
> 8 10k 72GB Ultra320 drives (2 per channel) per RAID volume... thus I
> should be able to do a fairly large amount of I/O (100+MB/s sequential
> writes, I'd assume).
What is the I/O bus you are talking about? A single PCI bus?
Have you tried an alternative operating system?
> I have tried every possible striping configuration along with multiple
> filesystem (ext2/3/xfs)
You may want to try it without a filesystem, and perhaps even with a
faster RAID configuration like striping.
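For example, a raw sequential read with no filesystem involved (device
name is illustrative):
# dd if=/dev/sdb of=/dev/null bs=1M count=2048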
> Please confirm, and if possible suggest the settings needed to get
> Linux into a high-I/O mode, or general places to tune I/O.
What do your vmstat and iostat output look like? How much RAM and how
many CPUs are we talking about? Is it an SMP kernel?
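For example, captured in a second terminal while your dd is running:
# iostat -x 5
# vmstat 5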
Greetings
Bernd
--
eckes privat - http://www.eckes.org/
Project Freefire - http://www.freefire.org/
> Is that peak or saturated load? You may want to try to fill only the
> write-back cache of the controller, so you are not affected by slow
> RAID processing or the disks.
>
Saturated, just doing a simple dd out to a 2GB file.
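Something along these lines (path is illustrative):
# dd if=/dev/zero of=/raid/testfile bs=1M count=2048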
> RAID 5 is not very good at speeding up random reads or sequential writes.
> Perhaps you want to try striping at level 0.
>
We need the data integrity and the storage capacity, so RAID 0/1 isn't
an option and RAID 0 definitely isn't.
> What is the I/O bus you are talking about? A single PCI bus?
PCI-X 133MHz (card is running at 66MHz).
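Even at 66MHz the bus shouldn't be the limit: assuming a 64-bit slot,
66MHz x 8 bytes is roughly 528MB/s theoretical, far above the 60MB/s
I'm seeing.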
> Have you tried an alternative operating system?
>
I've tried Red Hat AS 3 and Gentoo 2004.1
> You may want to try it without a filesystem, and perhaps even with a
> faster RAID configuration like striping.
>
I have actually done both for tests. Writing to the device without a
filesystem returns about the same rate. RAID 0 is a bit faster than
RAID 5, but not by more than a couple of MB/s.
> What do your vmstat and iostat output look like? How much RAM and how
> many CPUs are we talking about? Is it an SMP kernel?
>
RAM is 2GB; the CPUs are dual 3.2GHz hyperthreaded Xeons (thus 4 CPUs
visible to the OS), and SMP is enabled. iostat is very spotty; here is
its output:
Device:    rrqm/s   wrqm/s   r/s   w/s   rsec/s     wsec/s   rkB/s      wkB/s  avgrq-sz  avgqu-sz  await  svctm   %util
rd/c0d0      0.00 84389.83  0.00  0.00     0.00  680576.27    0.00  340288.14      0.00    760.19   0.00   0.00  102.54
rd/c0d0      0.00     0.00  0.00  0.00     0.00       0.00    0.00       0.00      0.00  31860.75   0.00   0.00   96.34
rd/c0d0      0.00 38713.00  0.00  0.00     0.00  312152.00    0.00  156076.00      0.00  29827.95   0.00   0.00  100.00
rd/c0d0      0.00 11048.00  0.00  0.00     0.00   89088.00    0.00   44544.00      0.00  13318.73   0.00   0.00  100.00
rd/c0d0      0.00 12948.00  0.00  0.00     0.00  104448.00    0.00   52224.00      0.00  13405.94   0.00   0.00  100.00
rd/c0d0      0.00  9344.00  0.00  0.00     0.00   75360.00    0.00   37680.00      0.00  13488.18   0.00   0.00  100.00
rd/c0d0      0.00  9178.00  0.00  0.00     0.00   74032.00    0.00   37016.00      0.00  29379.11   0.00   0.00  100.00
rd/c0d0      0.00 12962.00  0.00  0.00     0.00  104560.00    0.00   52280.00      0.00  13653.53   0.00   0.00  100.00
rd/c0d0      0.00  9363.00  0.00  0.00     0.00   75512.00    0.00   37756.00      0.00  13737.13   0.00   0.00  100.00
rd/c0d0      0.00 11321.00  0.00  0.00     0.00   91288.00    0.00   45644.00      0.00  29128.95   0.00   0.00  100.00
rd/c0d0      0.00  9902.00  0.00  0.00     0.00   79872.00    0.00   39936.00      0.00  13901.66   0.00   0.00  100.00
rd/c0d0      0.00 10140.00  0.00  0.00     0.00   81784.00    0.00   40892.00      0.00  13984.07   0.00   0.00  100.00
rd/c0d0      0.00  8773.00  0.00  0.00     0.00   70768.00    0.00   35384.00      0.00  28883.74   0.00   0.00  100.00
rd/c0d0      0.00 13709.00  0.00  0.00     0.00  110592.00    0.00   55296.00      0.00  14150.42   0.00   0.00  100.00
rd/c0d0      0.00 10143.00  0.00  0.00     0.00   81808.00    0.00   40904.00      0.00  14240.10   0.00   0.00  100.00
rd/c0d0      0.00 11047.00  0.00  0.00     0.00   89088.00    0.00   44544.00      0.00  28629.28   0.00   0.00  100.00
rd/c0d0      0.00 10790.00  0.00  0.00     0.00   87040.00    0.00   43520.00      0.00  14403.05   0.00   0.00  100.00
And here is vmstat output:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff   cache   si   so    bi     bo   in   cs us sy id wa
 1  1      0 833460  33768  595768    0    0     0 459360  160  371  0 51 49  0
 0  3      0 668604  33928  759864    0    0     0  35196  140   31  0 30 10 60
 1  2      0 611132  33980  813932    0    0     0  54944  160   48  0 13  3 84
 0  3      0 568668  34016  851236    0    0     0  43520  143   54  0  7  0 93
 0  3      0 534652  34056  888740    0    0     0  40448  145   60  0  7  0 93
 1  2      0 473476  34096  933548    0    0     0  56320  145   73  0  9  0 91
 0  3      0 445868  34132  970332    0    0     0  32256  139   56  0  6  0 94
 0  3      0 396756  34176 1015048    0    0     0  43520  142   60  0  8  0 92
 0  3      0 352124  34220 1059988    0    0     0  44700  145   54  0  9  0 91
 2  1      0 308348  34264 1101852    0    0     0  50140  139   70  0  8  0 92
 0  3      0 253488  34312 1153080    0    0     0  42272  140   63  0 10  0 90
 0  3      0 228168  34344 1185468    0    0     0  31848  142   54  0  6  0 94
 0  3      0 190080  34384 1225076    0    0     0  40856  140   57  0  8  0 92
 1  2      0 130820  34428 1271444    0    0     0  55912  138   71  0  9  0 91
 0  3      0  90352  34468 1313104    0    0     0  35328  140   57  0 11  2 88
 1  2      0  65652  34540 1375600    0    0     0  47772  146   61  0 10  0 90
 0  4      0  58060  34580 1416852    0    0     0  37280  142   41  0  9  0 91
 0  4      0 100292  34580 1416852    0    0     0  36496  139   57  0  1  0 98
 0  3      0 145160  34580 1416852    0    0     0  55248  141   18  0  1 49 50
 0  3      0 187804  34580 1416852    0    0     0  40960  143   14  0  1 50 49
Thanks,
Ben
In article <[email protected]> you wrote:
>> RAID 5 is not very good at speeding up random reads or sequential writes.
>> Perhaps you want to try striping at level 0.
>>
> We need the data integrity and the storage capacity, so RAID 0/1 isn't
> an option and RAID 0 definitely isn't.
Sure, but it helps to find out whether the limitation is the RAID
controller, the disks, PCI, or Linux. First do a cache read, then a
RAID 0 read, and then start to wonder about redundancy.
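For example (device names illustrative; only the RAID 0 step requires
reconfiguring the volume):
# hdparm -tT /dev/sdb                            (cache and buffered-read baseline)
# dd if=/dev/sdb of=/dev/null bs=1M count=2048   (raw sequential read)
If RAID 0 and RAID 5 both top out near 60MB/s, the bottleneck sits in
front of the RAID logic.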
>> Have you tried an alternative operating system?
> I've tried Red Hat AS 3 and Gentoo 2004.1
Well, I was more referring to Windows :)
> I have actually done both for tests. Writing to the device without a
> filesystem returns about the same rate. RAID 0 is a bit faster than
> RAID 5, but not by more than a couple of MB/s.
This clearly shows that the RAID controller or the disks are the
problem. (Of course, this can also mean the driver is not using the
best controller mode.) Perhaps using 2 x PERC DC is better here.
Greetings
Bernd
--
eckes privat - http://www.eckes.org/
Project Freefire - http://www.freefire.org/