Dear LKML,
I am a long-time linux user, but not a C dev.
I have been using software RAID since I had an Athlon 64 3000+, which
at the time took a toll of up to 30% of the CPU time on big writes
(four 200 GB disks in RAID 5), if my memory serves me well.
If you can give me some clarification, I would be incredibly grateful.
Shouldn't software RAID always surpass hardware RAID, given a fast
enough CPU? Why is it that software RAID on current systems still
gets less performance than its hardware counterparts?
Also, are there any knobs/pulleys/levers in the linux kernel so that
I can maximize RAID performance?
Given that the current bottleneck is disk IO, it would take a sincere
effort to saturate the CPU with RAID/disk operations (you'd need lots
and lots of disks).
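To put a rough number on that: RAID 5 parity is plain XOR across the
data disks. The toy user-space loop below (disk count, chunk size and
round count are invented for illustration; the kernel's md driver uses
far more optimized routines that it benchmarks at boot) shows a modern
CPU XORing data much faster than a handful of spinning disks can
supply it:

/* Toy benchmark: RAID 5 parity is just XOR across the data disks.
 * A minimal sketch, NOT how the kernel actually does it. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define DISKS  6                /* 5 data + 1 parity, like a 6-disk array */
#define CHUNK  (64 * 1024)      /* 64 KiB chunk, a common md default */
#define ROUNDS 4096             /* repeat to get a measurable runtime */

int main(void)
{
    static unsigned char data[DISKS - 1][CHUNK];
    static unsigned char parity[CHUNK];
    struct timespec t0, t1;

    for (int d = 0; d < DISKS - 1; d++)
        memset(data[d], d + 1, CHUNK);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < ROUNDS; r++) {
        memcpy(parity, data[0], CHUNK);
        for (int d = 1; d < DISKS - 1; d++)
            for (size_t i = 0; i < CHUNK; i++)
                parity[i] ^= data[d][i];
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mb = (double)ROUNDS * (DISKS - 1) * CHUNK / (1024.0 * 1024.0);
    /* print parity[0] so the XOR loop can't be optimized away */
    printf("XORed %.0f MB in %.3f s (%.0f MB/s), parity[0]=0x%02x\n",
           mb, secs, mb / secs, parity[0]);
    return 0;
}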
Best regards
> Why is it that software RAID on current systems still gets less
> performance than its hardware counterparts?
Part of it can be crappy disk interfaces: I was running software raid with
2 SATA-SIL cards, and would frequently be disk-bound with the CPU still
largely idle.
The cards were incapable of talking to more than one drive at a time. They
didn't support command queuing on the drives.
As a result, the system would set up a stripe, queue up the writes,
then have to wait while the write to EACH DISK in the 7-disk array was
carried out in turn.
On a good hardware RAID controller, the disks can be written in
parallel, and the controller supports command queuing - so the writes
go out concurrently, and the drives themselves can reorder and
optimize them.
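To make that concrete, here is a toy model (plain sleeps standing in
for writes, with an invented 8 ms latency - not a benchmark of any
real hardware): seven simulated "disk writes" issued one at a time
versus all at once. The serial case takes roughly seven times as long,
which is the penalty of a one-command-at-a-time interface:

/* Toy model: serial vs. parallel "disk writes". Build with -lpthread. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define DISKS    7
#define WRITE_MS 8   /* pretend one disk write takes 8 ms */

static void fake_write(void)
{
    struct timespec ts = { 0, WRITE_MS * 1000000L };
    nanosleep(&ts, NULL);
}

static void *worker(void *arg)
{
    (void)arg;
    fake_write();
    return NULL;
}

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;
    pthread_t tid[DISKS];

    /* Serial: wait for each "disk" in turn, like a no-queuing card. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int d = 0; d < DISKS; d++)
        fake_write();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("serial:   %.1f ms\n", elapsed(t0, t1) * 1000);

    /* Parallel: issue all writes at once, wait for the slowest. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int d = 0; d < DISKS; d++)
        pthread_create(&tid[d], NULL, worker, NULL);
    for (int d = 0; d < DISKS; d++)
        pthread_join(tid[d], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("parallel: %.1f ms\n", elapsed(t0, t1) * 1000);
    return 0;
}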
I would like to believe this is not the case with Intel ICH9R...
Anyway, I want to make sure I have done everything possible to speed
up my 6-disk RAID.
Scheduling concurrent IO may not have a single best solution, I know,
but is the kernel 'perfect' in the sense of giving the RAID / SATA
subsystems all the CPU cycles they need to perform best (with the
lowest possible latency)? Or do we have some knobs to tune the kernel?
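For reference, here is a sketch of the standard md knobs I mean,
assuming a hypothetical array at /dev/md0 (the paths are the usual md
sysfs/procfs entries; the values are only illustrative starting points
to benchmark against, and the same writes can be done with echo from a
root shell):

/* Sketch: poking a few well-known md tuning knobs. Must run as root. */
#include <stdio.h>

static int write_knob(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", value);
    return fclose(f);
}

int main(void)
{
    /* RAID 5/6 stripe cache: bigger helps large writes at the cost
     * of memory. The default is 256. */
    write_knob("/sys/block/md0/md/stripe_cache_size", "4096");

    /* Read-ahead for the array device, in KiB. */
    write_knob("/sys/block/md0/queue/read_ahead_kb", "4096");

    /* Floor/ceiling for resync/rebuild speed, in KiB/s per disk. */
    write_knob("/proc/sys/dev/raid/speed_limit_min", "50000");
    write_knob("/proc/sys/dev/raid/speed_limit_max", "200000");
    return 0;
}

The per-disk I/O scheduler (/sys/block/sdX/queue/scheduler) is another
common thing to experiment with.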
--
-----
Tiago Mikhael Pastorello Freire a.k.a. Brazilian Joe