Date: Wed, 11 Jun 2008 16:53:14 -0400 (EDT)
From: Justin Piszcz
To: Bill Davidsen
cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org, xfs@oss.sgi.com, Alan Piszcz
Subject: Re: Linux MD RAID 5 Benchmarks Across (3 to 10) 300 Gigabyte Velociraptors
References: <4850354A.8090503@tmr.com>

On Wed, 11 Jun 2008, Justin Piszcz wrote:

>
> On Wed, 11 Jun 2008, Bill Davidsen wrote:
>
>> Justin Piszcz wrote:
>>> First, the original benchmarks with 6 SATA drives, with fixed formatting,
>>> right justification, and the same decimal-point precision throughout:
>>> http://home.comcast.net/~jpiszcz/20080607/raid-benchmarks-decimal-fix-and-right-justified/disks.html
>>>
>>> Now for velociraptors! Ever wonder what kind of speed is possible with
>>> 3-, 4-, 5-, 6-, 7-, 8-, 9-, and 10-disk RAID5s? I ran a loop to find out;
>>> each configuration is run three times and the average of the three runs is
>>> taken for each RAID5 disk set.
>>>
>>> In short? The 965 chipset no longer does justice to faster drives; a new
>>> chipset and motherboard are needed. After reading or writing to 4-5
>>> velociraptors, the bus/965 chipset is saturated.
>>
>> This is very interesting, but a 16GB chunk size bears no relationship to
>> anything I would run in the real world, and I suspect most people are in
>> the same category.
>
> I based my bonnie++ test on:
> http://everything2.org/?node_id=1479435
>
> So I could compare to his results.
>
> I use a 1024k (1MiB) chunk size with a 16384 stripe_cache_size; this offered
> the best overall read/write/rewrite performance AFAIK.

1024k chunk size (raid5 chunk size)
echo 16384 > stripe_cache_size
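Spelled out, the two settings quoted above would be applied roughly as follows (a sketch only; /dev/md0 and /dev/sd[b-g] are placeholder device names, and the 6-disk count is just one of the tested configurations):

```shell
# Create a 6-disk RAID5 with a 1024k (1MiB) chunk size.
# Device names below are examples; substitute the actual member disks.
mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=1024 /dev/sd[b-g]

# Raise the md stripe cache (in pages) for the array, per the tuning above.
echo 16384 > /sys/block/md0/md/stripe_cache_size
```

Note that stripe_cache_size is not persistent across reboots; it would need to be reapplied from an init script or udev rule.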