From: Eric Whiting
Subject: Re: High Performance NFS
Date: Thu, 31 Oct 2002 13:54:43 -0700
To: derek@ioerror.com
Cc: nfs@lists.sourceforge.net
Message-ID: <3DC19893.626B5DF5@amis.com>
References: <011501c2811c$16327c70$1400005b@ipservices1.ioerror.com>
List-Id: Discussion of NFS under Linux development, interoperability, and testing.

What Linux version is the server running? The clients? NFSv3 or NFSv2? This is a
somewhat important question -- NFS has improved a lot over the last 12 months.

I suggest something like this:

1. Benchmark local storage. How fast are your writes without the network involved?
   Make sure your disks are as fast as possible before throwing NFS and the network
   into the picture.

2. Benchmark the network. How fast is your gigE for FTP transfers? GigE isn't
   automatically 125 Mbytes/s -- you have to set it up right.

Then test NFS. Is your 12-13MB/s for reads or writes? (A rough sketch of the first
two checks is at the bottom of this message.)

Derek Labian wrote:
>
> I'm really looking for a high-performance NFS solution,
> and determining the bottlenecks is key for this upgrade.
>
> We currently have 2 dedicated NFS servers: Supermicro
> boxes w/ dual P3 1GHz, 1GB RAM, and an Adaptec 2100S w/
> 128MB cache connected to Dell U160 PowerVaults.
>
> All the servers are connected through gigabit ethernet.
>
> It seems we really start maxing out the disks at about
> 12-13MB a sec. This seems kind of slow, but the accesses
> are not sequential because of the volume. The drives
> are 10k RPM, btw.
>
> The servers run FreeBSD (latest) and distribute files to
> 8 other servers. The average file size is around 5MB
> but peaks up to several hundred MBs depending on what's
> happening on any given day.
>
> Also, when peak traffic occurs, the NFS processes all
> get stuck in disk access, and the requests start queuing
> up.
>
> For this reason, we have modified nfsd and the NFS header
> file that's part of the kernel to support many more than
> 20 daemons.
>
> This helped performance dramatically.
>
> Additionally, we have tried different numbers of read-ahead
> blocks, different block sizes, etc. Read-aheads above 1 work
> fine until the load peaks (since it's reading more data).
>
> Also, blocks smaller than 8k seem to slow it down, and
> blocks larger than 8k seem to slow it down.
>
> Hence we use 8k blocks, 1 read-ahead block, UDP.
>
> So, are there any suggestions for getting more performance
> out of these things? Or, if we simply need to upgrade
> further, what are the suggestions for high-performance NFS
> at a reasonable cost?
>
> Derek
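To make steps 1 and 2 concrete, here is a rough Python sketch of the two checks:
sequential local write speed (with an fsync so you aren't just measuring the page
cache), and raw TCP throughput to another box. The file path, test size, and port
number below are arbitrary placeholders, not anything from this thread, and it's
only a quick sanity check -- not a substitute for dd, bonnie, ttcp, or simply
timing an FTP transfer of a big file.

#!/usr/bin/env python
# Rough throughput sanity checks -- a sketch only, not a real benchmark.
# The file path, test size, and port are arbitrary choices; 8k buffers
# just mirror the NFS block size mentioned in the thread.

import os
import socket
import time

BLOCK = 8 * 1024             # 8k, same as the rsize/wsize in use
TOTAL = 256 * 1024 * 1024    # 256MB of test data

def disk_write_speed(path="/tmp/nfs_bench.dat"):
    """Step 1: sequential local write speed, no network involved."""
    buf = b"x" * BLOCK
    start = time.time()
    f = open(path, "wb")
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())        # force it out to disk, not just cache
    f.close()
    elapsed = time.time() - start
    os.unlink(path)
    return TOTAL / elapsed / (1024.0 * 1024.0)   # MB/s

def net_send_speed(host, port=5001):
    """Step 2: raw TCP throughput to `host`; something on the far end
    must be reading and discarding the data (netcat into /dev/null works)."""
    buf = b"x" * BLOCK
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    start = time.time()
    for _ in range(TOTAL // BLOCK):
        s.sendall(buf)
    s.close()
    elapsed = time.time() - start
    return TOTAL / elapsed / (1024.0 * 1024.0)   # MB/s

if __name__ == "__main__":
    print("local write: %.1f MB/s" % disk_write_speed())
    # uncomment and point at the server once a listener is draining the port:
    # print("raw gigE:    %.1f MB/s" % net_send_speed("server-hostname"))

If the local write number and the raw network number are both well above 12-13MB/s,
the bottleneck is in the NFS layer (or the random-access pattern); if either one is
already down at that level, fix it before touching NFS tunables.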
_______________________________________________
NFS maillist - NFS@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs