On Mon, Jun 30, 2008 at 11:26 AM, Jeff Layton <[email protected]> wrote:
> On Mon, 30 Jun 2008 15:40:30 +0530
> Krishna Kumar2 <[email protected]> wrote:
>> Dean Hildebrand <[email protected]> wrote on 06/27/2008 11:36:28 PM:
>> > One option might be to try using O_DIRECT if you are worried about
>> > memory (although I would read/write in at least 1 MB at a time). I
>> > would expect this to help at least a bit especially on reads.
>> > Also, check all the standard nfs tuning stuff, #nfsds, #rpc slots.
>> > Since with a loopback you effectively have no latency, you would want to
>> > ensure that neither the #nfsds or #rpc slots is a bottleneck (if either
>> > one is too low, you will have a problem). One way to reduce the # of
>> > requests and therefore require fewer nfsds/rpc_slots is to 'cat
>> > /proc/mounts' to see your wsize/rsize. Ensure your wsize/rsize is a
>> > decent size (~ 1MB).
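
For what it's worth, a self-contained way to check the negotiated rsize/wsize
is to grep them out of /proc/mounts. The sample line below is just an
illustration of the format (export path and mount point are examples); in
practice you'd read /proc/mounts directly:

```shell
# Sample /proc/mounts entry for an NFS mount (illustrative only);
# normally you would read the real file, e.g.:  grep ' /nfs ' /proc/mounts
line='localhost:/local /nfs nfs rw,vers=3,rsize=1048576,wsize=1048576,proto=tcp 0 0'

# Pull out the negotiated transfer sizes
echo "$line" | grep -o 'rsize=[0-9]*'   # prints rsize=1048576
echo "$line" | grep -o 'wsize=[0-9]*'   # prints wsize=1048576
```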
>> Number of nfsds: 64, and
>> sunrpc.transports =
>> sunrpc.udp_slot_table_entries = 128
>> sunrpc.tcp_slot_table_entries = 128
>> I am using:
>> mount -o localhost:/local /nfs
>> I have also tried with 1MB for both rsize/wsize and it didn't change the
>> BW (other than minor variations).
>> - KK
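
For reference, a sketch of what an explicit 1MB remount might look like on a
setup like the one above (mount point, export path, and slot-table value are
taken from this thread; treat them as examples, not recommendations):

```shell
# 1MB in bytes, the value used for rsize/wsize below
echo $((1024 * 1024))            # prints 1048576

# Bump the RPC slot table before mounting (requires root):
#   sysctl -w sunrpc.tcp_slot_table_entries=128

# Remount the loopback export with explicit 1MB rsize/wsize:
#   mount -o rsize=1048576,wsize=1048576,tcp localhost:/local /nfs

# Verify what the kernel actually negotiated:
#   grep ' /nfs ' /proc/mounts
```

The privileged commands are commented out since they need root and a live
export; the point is only the option syntax.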
> Recently I spent some time with others here at Red Hat looking
> at problems with nfs server performance. One thing we found was that
> there are some problems with multiple nfsd's. It seems like the I/O
> scheduling or something is fooled by the fact that sequential write
> calls are often handled by different nfsd's. This can negatively
> impact performance (I don't think we've tracked this down completely
> yet, however).
Yeah, I think that's what Dean is alluding to above. There was a
FreeNix paper a few years back that discusses this same readahead
problem with the FreeBSD NFS server.