> > I don't think it is a client congestion issue at this point. I can
> > run the test with just one client on UDP and achieve 11.2 MB/sec with
> > just one mount point. The client has 100 Mbit Ethernet, so that should
> > be the upper limit (or really close). In the 40-client read test, I
> > have only achieved 2.875 MB/sec per client. That, and the fact that
> > there are never more than 2 nfsd threads in a run state at one time
> > (for UDP only), leads me to believe there is still a scaling problem
> > on the server for UDP. I will continue to run the test and poke and
> > prod around. Hopefully something will jump out at me. Thanks for all
> > the input!
>
> Can you check /proc/net/rpc/nfsd, which shows how many NFS requests
> have been retransmitted?
>
> # cat /proc/net/rpc/nfsd
> rc 0 27680 162118
> ^^^
> This field means the clients have retransmitted packets.
> The transfer rate will slow down if that has happened even once.
> It can occur when the response from the server is slower than the
> clients expect.
/proc/net/rpc/nfsd
rc 0 1 1025221
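For keeping an eye on that counter during the runs, here is a minimal
sketch (assuming the rc line has the form "rc <hits> <misses> <nocache>",
where a nonzero first field means requests hit the duplicate reply cache,
i.e. the clients retransmitted):

/* Minimal sketch: read the reply-cache ("rc") line from
 * /proc/net/rpc/nfsd and report the three counters.  Assumes the line
 * is "rc <hits> <misses> <nocache>"; a nonzero hits value indicates
 * retransmitted requests from the clients. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/rpc/nfsd", "r");
    char line[256];
    unsigned long hits, misses, nocache;

    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "rc %lu %lu %lu", &hits, &misses, &nocache) == 3) {
            printf("reply cache: hits=%lu misses=%lu nocache=%lu\n",
                   hits, misses, nocache);
            if (hits > 0)
                printf("nonzero hits: clients have retransmitted requests\n");
            break;
        }
    }
    fclose(f);
    return 0;
}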
> And you can use an older version - e.g. the linux-2.4 series - for the
> clients and see what happens, as older versions don't have any
> intelligent features.
Actually, all of the clients are 2.4 (RH 7.0). I could swap them out for
2.5, but it may take me a little while.
Let me do a little digging around. I seem to recall an issue I ran into
earlier this year where waking up the nfsd threads resulted in most of
them just going back to sleep. I need to go back to that code and
understand it a little better. Thanks for all of your help.
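Roughly the pattern I have in mind, as a simplified userspace sketch
(made-up names, not the actual nfsd/sunrpc code): if the wakeup is
effectively a broadcast, every idle thread wakes up, one of them grabs
the request, and the rest find nothing to do and go straight back to
sleep, so the extra threads never add any parallelism.

/* Simplified illustration only (hypothetical names, not kernel code):
 * a broadcast wakeup wakes every idle worker, but only one finds work;
 * the others loop back to sleep. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  work_ready = PTHREAD_COND_INITIALIZER;
static int pending_requests;           /* stand-in for queued NFS requests */

static void *worker(void *arg)
{
    long id = (long)arg;

    pthread_mutex_lock(&lock);
    for (;;) {
        while (pending_requests == 0) {
            printf("worker %ld sleeping\n", id);
            pthread_cond_wait(&work_ready, &lock);
            /* On a broadcast every worker wakes up here; most find the
             * request already taken and go back to sleep. */
        }
        pending_requests--;
        printf("worker %ld picked up the request\n", id);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[4];
    long i;

    for (i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    sleep(1);
    pthread_mutex_lock(&lock);
    pending_requests = 1;                /* one request arrives...       */
    pthread_cond_broadcast(&work_ready); /* ...but every worker is woken */
    pthread_mutex_unlock(&lock);
    sleep(1);
    return 0;
}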
Andrew Theurer