I have looked through the last couple of months of mailing list archives and
reviewed the material at nfs.sourceforge.net and the link to NetApp's NFS
suggestions.
I am trying to get really good performance out of NFS. So far the best I've
managed is about 1/10 of local speed over dedicated 100 Mbps Ethernet
between fairly speedy computers.
Here's the setup.
Server (coffeepot) - Athlon XP2000, SuSE 8.1, 2.4.21-pre6 kernel from
kernel.org, boots to an ATA-100 drive, Promise RM8000 external hardware
RAID5 array on an Adaptec AHA-2940U/UW/D controller, 3Com 3c905C
forced to 100-FD with "/sbin/mii-tool -F 100baseTx-FD eth1".
coffeepot:~ # mount |grep sda
/dev/sda1 on /shared/home type ext2 (rw,noatime)
/dev/sda2 on /shared/backup type ext2 (rw,noatime)
/dev/sda3 on /shared/logs type ext2 (rw,noatime)
coffeepot:~ # cat /etc/exports
/shared/home 10.0.34.0/24(rw,no_root_squash,async)
/shared/backup/ 10.0.34.0/24(ro,root_squash,async)
/shared/logs 10.0.34.0/24(rw,root_squash,async)
coffeepot:~ # bonnie++ -d /shared/home/jp -s 1600 -r 512 -u jp
Version 1.01d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
coffeepot 1600M 40206 42 39934 13 9989 3 19782 22 21765 5 317.2 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 2675 99 +++++ +++ +++++ +++ 2759 99 +++++ +++ 4360 100
Performance is fairly kickin' here locally.
The client is connected through an HP 4000M switch set for full-duplex 100baseT,
with both ports on the same switch linecard.
http://midcoast.com/~jp/10.0.15.2_15-day.png shows the network traffic between the
two computers; the two bonnie++ tests are visible on the right of the graph. There is
no packet loss between the computers when tested with flood pings or regular pings.
Client (froth) - Athlon XP2200, SuSE 8.1, 2.4.21-pre6 kernel from
kernel.org, boots to an ATA-100 drive, 3Com 3c905C forced to 100-FD with
"/sbin/mii-tool -F 100baseTx-FD eth1".
froth:~ # cat /etc/mtab
10.0.34.1:/shared/backup /shared/backup nfs rw,tcp,hard,intr,rsize=1024,wsize=1024,addr=10.0.34.1 0 0
10.0.34.1:/shared/logs /shared/logs nfs rw,tcp,hard,intr,rsize=1024,wsize=1024,addr=10.0.34.1 0 0
10.0.34.1:/shared/home /shared/home nfs rw,udp,hard,intr,rsize=1400,wsize=1400,addr=10.0.34.1 0 0
The same bonnie++ command on the client:
Version 1.01d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
froth 1600M 2724 5 2764 4 1395 3 2778 5 2848 3 33.5 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1175 3 5061 12 2840 10 1208 5 5723 11 1684 4
I get about 2700 K/sec, and seeks go from 317 to 33/sec. The transfer speed
matches the network traffic graph. I would like to do better than 2700ish.
What can I improve without moving to Gigabit Ethernet?
I've tried both TCP and UDP NFS, with rsize & wsize of 1024, 1400, 4096, and 8192.
The larger two have horrid performance due to packet fragmentation, orders of
magnitude worse. 1024 and 1400, over both UDP and TCP, perform about the same for me.
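For reference, switching block sizes and transports is just a remount; the
commands I use look roughly like this (a sketch; the options mirror the mtab above):
  umount /shared/home
  mount -t nfs -o udp,hard,intr,rsize=1400,wsize=1400 10.0.34.1:/shared/home /shared/home
  umount /shared/home
  mount -t nfs -o tcp,hard,intr,rsize=8192,wsize=8192 10.0.34.1:/shared/home /shared/home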
Also, is it possible to clear the counters in nfsstat?
MUCH TIA,
Jason
--
/*
Jason Philbrook | Midcoast Internet Solutions - Internet Access,
KB1IOJ | Hosting, and TCP-IP Networks for Midcoast Maine
http://f64.nu/ | http://www.midcoast.com/
*/
hi jp-
> What can I improve without moving to Gigabit Ethernet?
>
> I've tried both TCP and UDP NFS, with rsize & wsize of 1024, 1400, 4096,
> and 8192. The larger two have horrid performance due to packet
> fragmentation, orders of magnitude worse.
> 1024 and 1400, over both UDP and TCP, perform about the same for me.
this sounds like a network issue. you should use a network
performance tool (like iPerf) to measure performance between
your client and server, and try to rectify any problems you
find there, before you work on NFS performance.
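a minimal sketch of what i mean (iperf 1.x, assuming it is installed on both
ends; addresses taken from your setup):
  # on the server (coffeepot): start a TCP listener
  iperf -s
  # on the client (froth): raw TCP throughput, then the reverse direction
  iperf -c 10.0.34.1
  iperf -c 10.0.34.1 -r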
> Also, is it possible to clear the counters in nfsstat?
only via a client reboot.
>>>>> " " == jp <[email protected]> writes:
> Server (coffeepot) - Athlon XP2000, SuSE 8.1, 2.4.21-pre6
> kernel from kernel.org, boots to an ATA-100 drive, Promise
> RM8000 external hardware RAID5 array on an Adaptec
> AHA-2940U/UW/D controller, 3Com 3c905C forced to 100-FD with
> "/sbin/mii-tool -F 100baseTx-FD eth1".
Why do you have to force it to 100-FD?
Cheers,
Trond
Thanks to the several people who responded!
> > Server (coffeepot) - Athlon XP2000, SuSE 8.1, 2.4.21-pre6 kernel from
> > kernel.org, boots to an ATA-100 drive, Promise RM8000 external hardware
> > RAID5 array on an Adaptec AHA-2940U/UW/D controller, 3Com 3c905C
> > forced to 100-FD with "/sbin/mii-tool -F 100baseTx-FD eth1".
>
> Fast question: Are you sure the _switch_ is setup to do
> 100FD as well? In my experience, forcing FD on newer
> cards and switches is something that must be done
> carefully. Also, what about link flow control? I'm not
> sure the 3c905c can be forced to do flow control by
> simple means if your switch happens to support it. Try
> the same using N-Way auto-negotiation and check what the
> 3c905c thinks about it.
Flow control on the switch is disabled - the default, I checked. It's also
set for 100-FD, like my ethernet cards. I always hard-set ethernet
settings because I don't trust autonegotiation under all circumstances.
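For what it's worth, the cards themselves can be double-checked with something
like the following (what the driver thinks the forced link is):
  /sbin/mii-tool -v eth1
The switch side I verified in the HP 4000M port configuration.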
I installed iperf on both machines and there is no problem sending
large amounts of data between them.
coffeepot:~ # /usr/local/bin/iperf -s -u
froth:/tmp/iperf-1.7.0 # /usr/local/bin/iperf -c 10.0.34.1 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to 10.0.34.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 64.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.0.34.2 port 32876 connected with 10.0.34.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 114 MBytes 95.6 Mbits/sec
[ 5] Server Report:
[ 5] 0.0-10.0 sec 114 MBytes 95.6 Mbits/sec 0.246 ms 0/81337 (0%)
[ 5] Sent 81337 datagrams
froth:/proc # /usr/local/bin/iperf -c 10.0.34.1 -b 90m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to 10.0.34.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 64.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.0.34.2 port 32876 connected with 10.0.34.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 108 MBytes 90.5 Mbits/sec
[ 5] Server Report:
[ 5] 0.0-10.0 sec 108 MBytes 90.5 Mbits/sec 0.000 ms 0/76925 (0%)
[ 5] Sent 76925 datagrams
>
> Cheers,
> --
> Kåre Hviid    Sys Admin    [email protected]    +45 3815 3075
> Institut for Datalingvistik, Handelshøjskolen i København
>
--
/*
Jason Philbrook | Midcoast Internet Solutions - Internet Access,
KB1IOJ | Hosting, and TCP-IP Networks for Midcoast Maine
http://f64.nu/ | http://www.midcoast.com/
*/
Hi,
Unless you're using an old exotic Cisco switch, I don't think you should do
this, IMHO.
We've had the worst problems doing that, and since we switched to autonegotiation
(with Intel EEPro100 cards) we haven't had a single problem.
Thanks,
Philippe
--
Philippe Gramoullé
[email protected]
Lycos Europe - NOC France
On Tue, 1 Apr 2003 10:39:50 -0500 (EST)
[email protected] wrote:
| I always hard-set ethernet
| settings because I don't trust autonegotiation under all circumstances.
I just have to respond to this. I must respectfully disagree.
Autonegotiation is tolerable at best.
With certain equipment it works flawlessly, but MANY brands autonegotiate
correct speeds and duplex yet still exhibit 2-3% packet loss or intermittent
latency (high ping times, etc.). A perfect example is my Cisco Catalyst 2940
switch and my Alteon/Nortel 180e (layer 2-7 switch). Both switches are high
quality and work well, but if you link them up with autonegotiation you will
have problems. It will detect the proper speed and duplex, but has speed
problems and packet loss. When I contacted BOTH Cisco and Nortel support,
they both said autonegotiation is bad news and should be used only to get
things up and going. Cisco said if all the products were Cisco then no
problem, and Nortel said the same thing. Just my 2 cents worth, but I
have seen this problem on more than 5 devices on my own network alone.
L8r...
Matt
Hi,
OK, I should have been more precise :)
My recommendations were only for NIC <-> switch.
In the case of switch <-> switch, you can indeed force things without
problems.
I was referring to some Linux NFS servers here having big trouble talking to
a switch (sorry, I don't remember the brand) on which settings were forced.
Thanks,
Philippe
--
Philippe Gramoullé
[email protected]
Lycos Europe - NOC France
On Tue, 1 Apr 2003 09:22:13 -0700
"Matt Heaton" <[email protected]> wrote:
| I just have to respond to this. I must respectfully disagree.
| Autonegotiation is tolerable at best.
On Tue, 1 Apr 2003 [email protected] wrote:
> Flow control on the switch is disabled - the default, I checked. It's also
> set for 100-FD, like my ethernet cards. I always hard-set ethernet
> settings because I don't trust autonegotiation under all circumstances.
People who don't want to be helped should not ask for help any more!
I've already warned about forcing speed; check the net driver mailing
lists and scyld.com to see why, and for some words from Donald Becker
about how the forcing of speed and duplex ever came into the discussion.
> WARNING: option -b implies udp testing
Oh yes, you want to test network quality with UDP... Have you ever thought
about the fact that NFS needs communication in both directions? If you think
your network with forced full duplex is perfect, try two UDP streams in
opposite directions; you should not lose a single packet and should still
achieve high bandwidth. And if you want to stress it even more, try UDP
packets that do not fit in an Ethernet frame.
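Roughly, with the iperf you already installed (a sketch; -l just makes each
datagram bigger than a 1500-byte Ethernet frame so it has to be fragmented):
  # run a UDP listener on each machine
  iperf -s -u
  # then start both clients at about the same time, one on each box
  iperf -c 10.0.34.1 -u -b 90m -l 8000    # on froth
  iperf -c 10.0.34.2 -u -b 90m -l 8000    # on coffeepot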
--
Bogdan Costescu
IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: [email protected]