2008-12-03 14:34:21

by Justin Piszcz

Subject: 3ware 9650SE-16ML w/XFS & RAID6: first impressions

For read speed, there appears to be roughly 2x overhead. With software
RAID6 I used to get 1.0GByte/s read speed, or at least 800MiB/s, across the
entire array; now I only get 538MiB/s across the entire array. However,
write speed is improved. Before, it was in the mid 400s for RAID6, peaking at
502MiB/s at the outer edge of the drives. For the write speed now, it's around
620-640MiB/s at the end of the array, 580-620MiB/s going in further, and
502MB/s across the entire array. I had always heard the horror stories about
3ware cards; it turns out they're not so bad after all.

Here is the STR (sequential transfer rate) across an array of 10 VelociRaptors configured as RAID6, with storsave=perform, blockdev --setra 65536, and the XFS filesystem.
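For anyone reproducing the setup, here is a minimal sketch of the tuning
steps. The tw_cli unit name /c0/u1 is taken from the tests below; the block
device /dev/sdb and mount point /t are assumptions for illustration.

# /opt/3ware/9500/tw_cli /c0/u1 set storsave=perform
# blockdev --setra 65536 /dev/sdb
# mkfs.xfs /dev/sdb
# mount /dev/sdb /t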

p34:/t# /usr/bin/time dd if=/dev/zero of=really_big_file bs=1M
dd: writing `really_big_file': No space left on device
2288604+0 records in
2288603+0 records out
2399774998528 bytes (2.4 TB) copied, 4777.22 s, 502 MB/s
Command exited with non-zero status 1
1.47user 3322.83system 1:19:37elapsed 69%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (1major+506minor)pagefaults 0swaps

p34:/t# /usr/bin/time dd if=really_big_file of=/dev/null bs=1M
2288603+1 records in
2288603+1 records out
2399774998528 bytes (2.4 TB) copied, 4457.55 s, 538 MB/s
1.66user 1538.87system 1:14:17elapsed 34%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (4major+498minor)pagefaults 0swaps
p34:/t#
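A note for anyone repeating the read test: flush the page cache between the
write pass and the read pass, or the start of the read may be served from
RAM. A minimal sketch (the runs above do not show whether this was done;
drop_caches requires kernel 2.6.16 or later):

# sync
# echo 3 > /proc/sys/vm/drop_caches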

So what is the difference between protect, balance and perform?

Protect:
p34:~# /opt/3ware/9500/tw_cli /c0/u1 set storsave=protect
Setting Command Storsave Policy for unit /c0/u1 to [protect] ... Done.

p34:/t# dd if=/dev/zero of=bigfile bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB) copied, 60.6041 s, 354 MB/s

Balance:
p34:~# /opt/3ware/9500/tw_cli /c0/u1 set storsave=balance
Setting Command Storsave Policy for unit /c0/u1 to [balance] ... Done.

# dd if=/dev/zero of=bigfile bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB) copied, 60.4287 s, 355 MB/s

Perform:
p34:~# /opt/3ware/9500/tw_cli /c0/u1 set storsave=perform
Warning: Unit /c0/u1 storsave policy is set to Performance which may cause data loss in the event of power failure.
Do you want to continue ? Y|N [N]: Y
Setting Command Storsave Policy for unit /c0/u1 to [perform] ... Done.

p34:/t# dd if=/dev/zero of=bigfile bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB) copied, 34.4955 s, 623 MB/s
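To double-check which policy is active before each run, the unit's settings
can be listed; a sketch (output layout varies by tw_cli version):

# /opt/3ware/9500/tw_cli /c0/u1 show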

--

And yes, I have a BBU attached to this card and everything is on a UPS.
One thing I do not like, though: 3ware keeps touting "TwinStor", claiming
that if you read a file it will read from both drives of a mirror. Maybe it
does, but I have the I/O results to show that a single stream never breaks
the barrier of a single drive's STR. Whereas with SW RAID, you can read 2
or more files concurrently and you will get the bandwidth of both drives. I
have a lot more testing to do (RAID0, RAID5, RAID6, RAID10); not sure when
I will get to it, but I thought I'd post this to show that 3ware cards
aren't as slow as everyone says, at least not when attached to
VelociRaptors, a BBU and XFS as the filesystem.
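For what it's worth, the concurrent-read claim is easy to test. A minimal
sketch, assuming two large test files already written to the array (file
names here are hypothetical): read both at once and compare the aggregate
throughput against a single drive's STR.

# dd if=file1 of=/dev/null bs=1M &
# dd if=file2 of=/dev/null bs=1M &
# wait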

Justin.


2008-12-07 12:56:23

by Hans-Peter Jansen

Subject: Re: 3ware 9650SE-16ML w/XFS & RAID6: first impressions

On Wednesday, 3 December 2008, Justin Piszcz wrote:
> For read speed, there appears to be roughly 2x overhead. With software
> RAID6 I used to get 1.0GByte/s read speed, or at least 800MiB/s, across
> the entire array; now I only get 538MiB/s across the entire array.
> However, write speed is improved. Before, it was in the mid 400s for
> RAID6, peaking at 502MiB/s at the outer edge of the drives. For the write
> speed now, it's around 620-640MiB/s at the end of the array, 580-620MiB/s
> going in further, and 502MB/s across the entire array. I had always heard
> the horror stories about 3ware cards; it turns out they're not so bad
> after all.

Justin, if you're after speed AND sane support, you may consider trying
Areca products. I'm biased, since 3ware left me completely fed up when it
came to a single drive failure in a RAID 5 setup: their support only
provides an ugly web interface, and when I posted my problem, they replied
with:

>>>
Thank you for contacting 3ware Technical Support.
Your submission has been received and assigned case id # xxxxxxxxxxxxxx. You
will receive a response within one business day.

Note:
Do not reply to this email ( [email protected] ). Emails sent to
[email protected] will not receive a response.
To check the status of your case, go to https://www.3ware.com and login.

Thank you indeed!
<<<

Well, they responded within 23-26 hours on average, only to request
information that was given in the very first post! In the end (a week
later) they concluded that the problem could not be solved without
replacing the firmware and reinstalling the array! The array couldn't be
restored from degraded mode; they blamed the Samsung drives for it.

Since then I have replaced all my 3ware controllers with Areca controllers
and haven't regretted it. I haven't done any serious measurements (I mostly
care about reliability, btw), but they were noticeably faster.

Cheers,
Pete