From: "Eff Norwood"
Subject: RE: NFS read ok, write ok, simultaneous R+W=TERRIBLE
Date: Thu, 5 Dec 2002 10:25:32 -0800
To: , "NFS"
Cc: "Jake Hammer"

Jake, Shane,

I can also add a "me too". Our similar setup produces the same pattern:
good reads, good writes, but terrible simultaneous read/write. Local disk
tests using multiple streams, with files much larger than cache, show good
local disk performance. NFS v2, v3, UDP, and TCP all work great for
read-only and write-only workloads, but are up to 95% slower on mixed
read/write.

Eff Norwood

> Jake,
>
> I can add a "me, too".
>
> We have about 40 3ware boxes (~30 TB). Each has a single 8-port card
> running RAID 5 (7+1). We are currently running XFS on them because we
> see better scaling performance than with ext3. However, the numbers you
> are reporting look similar to ours for 6 clients.
>
> I'm not convinced that the local performance is ruled out. How are you
> testing the local performance? Are you running multiple streams? Are
> you certain you are busting cache? Don't get me wrong, I would love to
> see a performance bug found in NFS that suddenly makes everything
> better. It would make my life easier. :-)
>
> Also, are the nfsd's in disk wait? If they are not, that would be a
> clearer indication that the NFS subsystem may be to blame. We usually
> see all the daemons in disk wait when we are really pounding one of
> these boxes.
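On that last point: one illustrative way to check whether the nfsd threads
are in disk wait is to count how many are in process state "D"
(uninterruptible sleep) in ps output. The helper name below is made up,
not something from either mail:

```shell
# Count nfsd threads in uninterruptible sleep ("D" in the ps STAT
# column, i.e. disk wait). Reads "STAT COMMAND" pairs on stdin.
count_nfsd_diskwait() {
    awk '$2 == "nfsd" { total++; if ($1 ~ /^D/) dw++ }
         END { printf "%d of %d nfsd threads in disk wait\n", dw, total }'
}

# Feed it the live process table:
ps -eo stat,comm | count_nfsd_diskwait
```

All nfsd's stuck in D under load points at the disk subsystem; nfsd's
burning CPU instead (as in Jake's report) points back at the NFS layer.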
>
> One other note. We were running software RAID (on a 2.2.19 kernel).
> After 3ware made some improvements to their RAID 5 implementation, we
> moved away from it. It appeared that we were actually CPU bound with
> SW RAID. We are seeing better scaling performance with the HW RAID.
> Plus, we can use the hot-swap ability of the card (which for 300+
> drives is nice). :-)
>
> If anyone finds tweaks, it would be great to get them folded into
> the FAQ.
>
> --Shane
>
> > Hi All,
> >
> > I'm working with 2.4.19 + all Neil Brown 2.4.19 patches on a P4 Xeon
> > system, 2048MB RAM, uniprocessor kernel. Distro is Debian Woody
> > Stable. Disk subsystem is ATA RAID 5, 14 spindles. With 15 100base-T
> > clients going through a Foundry switch, I am able to see 45MB/sec
> > writes and 60MB/sec reads. Clients are dd'ing 10GB files to and from
> > the box simultaneously. Mount =
> > mount -o proto=udp,vers=2,wsize=32768,rsize=32768 bigbox:/space
> >
> > The problem is on read + write. As soon as the clients switch from
> > read-only or write-only and do BOTH read and write, the CPU pegs and
> > performance drops to 3MB/sec (three MB/s)! top shows that 4 of the
> > nfsd's are consuming all of the CPU. It's as if they are contending
> > for some kind of resource, like a lock.
> >
> > Any help would be sincerely appreciated. This is very strange
> > behavior. It also happens with 2.4.18 + all Neil Brown patches for
> > 2.4.18.
> >
> > Thanks,
> >
> > Jake Hammer
_______________________________________________
NFS maillist - NFS@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs
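Postscript: the multi-stream, cache-busting local disk test discussed in
the thread can be sketched as below. The helper name, path, stream count,
and sizes are illustrative assumptions, not values from the mails; the
point is concurrent read and write streams over files far larger than RAM,
so reads cannot be satisfied from the page cache:

```shell
# run_mixed_io DIR STREAMS MB -- write STREAMS files of MB megabytes
# each, then read them back while writing a second set concurrently
# (the mixed read+write case that collapses throughput in the thread).
run_mixed_io() {
    dir=$1; streams=$2; mb=$3
    mkdir -p "$dir"
    i=1
    while [ "$i" -le "$streams" ]; do
        # Pass 1: lay down the files the readers will use.
        dd if=/dev/zero of="$dir/old$i" bs=1M count="$mb" 2>/dev/null
        i=$((i + 1))
    done
    i=1
    while [ "$i" -le "$streams" ]; do
        # Pass 2: readers and writers running at the same time.
        # (Drop the 2>/dev/null in real use to see dd's throughput.)
        dd if="$dir/old$i" of=/dev/null bs=1M 2>/dev/null &
        dd if=/dev/zero of="$dir/new$i" bs=1M count="$mb" 2>/dev/null &
        i=$((i + 1))
    done
    wait
}

# On the 2048MB-RAM server described above, 10 GB per stream
# comfortably defeats the page cache:
#   run_mixed_io /space/disktest 4 10240
```

If local throughput also collapses under this mixed load, the RAID/disk
subsystem is suspect; if it holds up while NFS drops to 3MB/s, the NFS
layer is the more likely culprit.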