> > > The congestion avoidance mechanism of the NFS clients might be causing
> > > this situation. I think the congestion window size is not large enough
> > > for high-end machines. You could enlarge the window as a test.
> >
> > The congestion avoidance window is supposed to adapt to the bandwidth
> > that is available. Turn congestion avoidance off if you like, but my
> > experience is that doing so tends to seriously degrade performance as
> > the number of timeouts + resends skyrockets.
>
> Yes, you must be right.
>
> But I guess Andrew may be using a powerful machine, so the transfer rate
> has exceeded the maximum size of the congestion avoidance window.
> Can we determine a preferable maximum window size dynamically?
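> As a rough starting point (the 2 ms LAN round-trip time here is only an
> assumption), the bandwidth-delay product bounds how much data one client
> can usefully keep in flight:
> # echo $((12500000 * 2 / 1000))   # 12.5 MB/s (100 Mbit) * 0.002 s RTT, in bytes
> 25000
> That is only about 25 KB, i.e. roughly three 8 KB NFS requests in flight
> per client.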
Is this a concern on the client only? I can run a test with just one client
and see if I can saturate the 100 Mbit adapter. If I can, would we need to
make any adjustments then? FYI, at 115 MB/sec total throughput, that's only
2.875 MB/sec for each of the 40 clients. For the TCP result of 181 MB/sec,
that's 4.525 MB/sec; IMO both are comfortable throughputs for a
100 Mbit client.
Andrew Theurer
Hello,
> > The congestion avoidance mechanism of the NFS clients might be causing
> > this situation. I think the congestion window size is not large enough
> > for high-end machines. You could enlarge the window as a test.
> Is this a concern on the client only? I can run a test with just one client
> and see if I can saturate the 100 Mbit adapter. If I can, would we need to
> make any adjustments then? FYI, at 115 MB/sec total throughput, that's only
> 2.875 MB/sec for each of the 40 clients. For the TCP result of 181 MB/sec,
> that's 4.525 MB/sec; IMO both are comfortable throughputs for a
> 100 Mbit client.
I think it's a client issue. NFS servers don't care about congestion of UDP
traffic; they will try to respond to all NFS requests as fast as they can.
You can try to increase the number of clients or the number of mount points
for a test. It's easy to mount the same directory of the server on several
directories of the client so that each mount can work simultaneously:
# mount -t nfs server:/foo /baa1
# mount -t nfs server:/foo /baa2
# mount -t nfs server:/foo /baa3
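For instance, to drive all three mounts at once, you could run one
sequential reader per mount point (bigfile and the 8 KB block size are
just placeholders):
# for d in /baa1 /baa2 /baa3; do dd if=$d/bigfile of=/dev/null bs=8k & done
# wait    # block until all three readers finish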
Thank you,
Hirokazu Takahashi.
On Saturday 19 October 2002 15:34, Hirokazu Takahashi wrote:
> Hello,
>
> > > The congestion avoidance mechanism of the NFS clients might be causing
> > > this situation. I think the congestion window size is not large enough
> > > for high-end machines. You could enlarge the window as a test.
> >
> > Is this a concern on the client only? I can run a test with just one
> > client and see if I can saturate the 100 Mbit adapter. If I can, would we
> > need to make any adjustments then? FYI, at 115 MB/sec total throughput,
> > that's only 2.875 MB/sec for each of the 40 clients. For the TCP result
> > of 181 MB/sec, that's 4.525 MB/sec; IMO both are comfortable
> > throughputs for a 100 Mbit client.
>
> I think it's a client issue. NFS servers don't care about congestion of UDP
> traffic; they will try to respond to all NFS requests as fast as they
> can.
>
> You can try to increase the number of clients or the number of mount points
> for a test. It's easy to mount the same directory of the server on several
> directories of the client so that each mount can work simultaneously:
> # mount -t nfs server:/foo /baa1
> # mount -t nfs server:/foo /baa2
> # mount -t nfs server:/foo /baa3
I don't think it is a client congestion issue at this point. I can run the
test with just one client on UDP and achieve 11.2 MB/sec with just one mount
point. The client has 100 Mbit Ethernet, so that should be the upper limit
(or really close). In the 40-client read test, I have only achieved 2.875
MB/sec per client. That, and the fact that there are never more than 2 nfsd
threads in a run state at one time (for UDP only), leads me to believe there
is still a scaling problem on the server for UDP. I will continue to run the
test and poke and prod around. Hopefully something will jump out at me.
Thanks for all the input!
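One quick way to spot-check the thread counts during a run (nothing
NFS-specific, just standard ps and awk) is to count nfsd threads in the
R state:
# ps -eo state,comm | awk '$2 == "nfsd" && $1 == "R" {n++} END {print n+0}'
If that number never goes above 2 while all 40 clients are reading, that
would back up the observation above.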
Andrew Theurer
Hi,
> > > > The congestion avoidance mechanism of the NFS clients might be causing
> > > > this situation. I think the congestion window size is not large enough
> > > > for high-end machines. You could enlarge the window as a test.
> I don't think it is a client congestion issue at this point. I can run the
> test with just one client on UDP and achieve 11.2 MB/sec with just one mount
> point. The client has 100 Mbit Ethernet, so that should be the upper limit
> (or really close). In the 40-client read test, I have only achieved 2.875
> MB/sec per client. That, and the fact that there are never more than 2 nfsd
> threads in a run state at one time (for UDP only), leads me to believe there
> is still a scaling problem on the server for UDP. I will continue to run the
> test and poke and prod around. Hopefully something will jump out at me.
> Thanks for all the input!
Can you check /proc/net/rpc/nfsd, which shows how many NFS requests have
been retransmitted?
# cat /proc/net/rpc/nfsd
rc 0 27680 162118
   ^
The first field of the "rc" line (the reply cache hit count) shows that
the clients have retransmitted packets. The transfer rate will slow down
once that has happened, and it can happen when the server's responses are
slower than the clients expect.
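The clients keep their own RPC retransmission counters as well; assuming
the nfsstat tool from nfs-utils is installed on them, the "retrans" figure
in its client RPC statistics should stay near zero:
# nfsstat -c    # client-side RPC/NFS counters; watch the "retrans" column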
You could also use an older version - e.g. the linux-2.4 series - on the
clients and see what happens, as older versions don't have these
intelligent features.
Thank you,
Hirokazu Takahashi.