2002-10-17 02:03:33

by Andrew Theurer

Subject: Re: [NFS] Re: [PATCH] zerocopy NFS for 2.5.36

> > From: Neil Brown <[email protected]>
> > Date: Wed, 16 Oct 2002 13:44:04 +1000
> >
> > Presumably on a sufficiently large SMP machine that this became an
> > issue, there would be multiple NICs. Maybe it would make sense to
> > have one udp socket for each NIC. Would that make sense? or work?
> > It feels to me to be cleaner than one for each CPU.
>
> Doesn't make much sense.
>
> Usually we are talking via one IP address, and thus over
> one device. It could be using multiple NICs via BONDING,
> but that would be transparent to anything at the socket
> level.
>
> Really, I think there is real value to making the socket
> per-cpu even on a 2 or 4 way system.
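
(For illustration, the per-CPU socket idea amounts to the userspace
sketch below. This is purely hypothetical: it leans on SO_REUSEPORT,
which these kernels do not have, and the actual patch works inside
knfsd in net/sunrpc/svcsock.c rather than in userspace. The point is
only that each worker blocks on a socket of its own, so receives stop
serializing on a single socket lock.)

/* hypothetical userspace analogue of per-CPU server sockets:
 * N UDP sockets bound to one port, one worker thread each.
 * SO_REUSEPORT is assumed here purely for illustration. */
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define NWORKERS 4
#define PORT     2049

static void *worker(void *arg)
{
    int fd = *(int *)arg;
    char buf[8192];

    for (;;) {
        /* each thread blocks on its own socket: no shared
         * socket lock, no shared receive queue */
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0) {
            perror("recvfrom");
            break;
        }
        /* ... decode and service the request here ... */
    }
    return NULL;
}

int main(void)
{
    static int fds[NWORKERS];
    pthread_t tid[NWORKERS];
    struct sockaddr_in sin;
    int one = 1, i;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(PORT);

    for (i = 0; i < NWORKERS; i++) {
        fds[i] = socket(AF_INET, SOCK_DGRAM, 0);
        setsockopt(fds[i], SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
        if (bind(fds[i], (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            perror("bind");
            return 1;
        }
        pthread_create(&tid[i], NULL, worker, &fds[i]);
    }
    for (i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}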


I am still seeing some sort of problem on an 8-way (hyperthreaded, 8
logical/4 physical) on UDP with these patches. I cannot get more than 2
NFSd threads in a run state at one time; TCP usually has 8 or more. The
test involves 40 100Mbit clients reading a 200 MB file, already in the
server's cache, from one server (4 acenic adapters). I am fighting some
other issues at the moment (ACPI weirdness), but so far: before the
patches, 82 MB/sec for NFSv2/UDP and 138 MB/sec for NFSv2/TCP; with the
patches, 115 MB/sec for NFSv2/UDP and 181 MB/sec for NFSv2/TCP. One CPU
is maxed out by an ACPI interrupt storm, so I think the results will get
better. I'm not sure what other lock or contention point UDP is hitting.
If there is anything I can do to help, please let me know. Thanks.

Andrew Theurer


2002-10-17 02:31:26

by Hirokazu Takahashi

Subject: Re: [NFS] Re: [PATCH] zerocopy NFS for 2.5.36

Hello,

Thanks for testing my patches.

> I am still seeing some sort of problem on an 8-way (hyperthreaded, 8
> logical/4 physical) on UDP with these patches. I cannot get more than 2
> NFSd threads in a run state at one time; TCP usually has 8 or more. The
> test involves 40 100Mbit clients reading a 200 MB file, already in the
> server's cache, from one server (4 acenic adapters). I am fighting some
> other issues at the moment (ACPI weirdness), but so far: before the
> patches, 82 MB/sec for NFSv2/UDP and 138 MB/sec for NFSv2/TCP; with the
> patches, 115 MB/sec for NFSv2/UDP and 181 MB/sec for NFSv2/TCP. One CPU
> is maxed out by an ACPI interrupt storm, so I think the results will get
> better. I'm not sure what other lock or contention point UDP is hitting.
> If there is anything I can do to help, please let me know. Thanks.

I guess some UDP packets might be getting lost; that can happen easily,
since UDP has no flow control.
Can you check how many errors have occurred?
You can see them in /proc/net/snmp on both the server and the clients.
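
For example, this little helper (hypothetical; a plain
"grep Udp: /proc/net/snmp" shows the same two lines) prints the Udp
header row and its matching counter row. InErrors is the column to
watch, since it counts datagrams dropped on receive, for instance when
a socket's receive buffer fills up:

/* print the Udp counters from /proc/net/snmp: the file carries a
 * header line ("Udp: InDatagrams NoPorts InErrors OutDatagrams")
 * followed by a matching line of values */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/snmp", "r");
    char line[512], names[512] = "", values[512] = "";

    if (!f) {
        perror("/proc/net/snmp");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "Udp:", 4) != 0)
            continue;
        if (!names[0])
            strcpy(names, line);   /* first Udp: line = field names */
        else {
            strcpy(values, line);  /* second Udp: line = counters */
            break;
        }
    }
    fclose(f);
    printf("%s%s", names, values);
    return 0;
}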

Also, how many threads did you start on your machine?
The buffer size of a UDP socket depends on the number of knfsd threads,
so running a larger number of threads might help.
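
For reference, the sizing in net/sunrpc/svcsock.c is roughly
(sv_nrthreads + 3) * sv_bufsz for both the send and receive buffers;
I am paraphrasing from memory, so treat the exact constants as
approximate. A toy calculation shows how the buffer grows with the
thread count:

#include <stdio.h>

int main(void)
{
    /* sv_bufsz is roughly one request's worth of buffer; the value
     * here is a made-up stand-in, not the kernel's exact number */
    unsigned int bufsz = 5120;
    unsigned int threads;

    /* the (threads + 3) * bufsz scaling is paraphrased from the
     * 2.5-era net/sunrpc/svcsock.c, quoted from memory */
    for (threads = 2; threads <= 128; threads *= 2)
        printf("%3u nfsd threads -> %7u byte UDP snd/rcv buffer\n",
               threads, (threads + 3) * bufsz);
    return 0;
}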