Hi,
I have come across an interesting behavior when using an NFS server on machines
with large amounts of memory (16/32 GB). If I have exported filesystems with
lots of data and many directories that were accessed and had dentry/inode cache
entries cached on the server, the clients took a long time (sometimes minutes)
to mount the NFS exported directories.
While the client was mostly idle, the server's rpc.mountd thread was consuming
100% of one CPU on the system. After some digging, I found that the problem
lies in the new nfsd filesystem code that talks to nfsctl. Every time I issue
a mount on the client, the server invalidates both the inode and dentry caches.
The inode cache gets invalidated in sys_nfsservctl (fs/nfsctl.c) by the fput
call, which eventually calls invalidate_inodes. The dentry cache gets
invalidated in do_open (fs/nfsctl.c) when calling do_kernel_mount, which
eventually calls shrink_dcache_sb.
The inode cache invalidation is easy to work around by just mounting the nfsd
filesystem at something like /tmp/nfsd, but I haven't been able (except with a
very dangerous and stupid patch) to work around the dentry cache invalidation.
This invalidation seems unnecessary, and it is caused by the interaction with
this new interface for nfsctl.
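For reference, the inode-cache workaround described above can be sketched as
follows. The idea is that keeping the nfsd control filesystem persistently
mounted prevents each nfsctl call from doing a fresh mount/unmount cycle, so
the final fput never drops the last reference to the superblock. The mount
point /tmp/nfsd is just the example path from this report, not anything
mandated by the kernel; run as root:

```shell
# Keep the nfsd control filesystem mounted persistently (requires root).
mkdir -p /tmp/nfsd
mount -t nfsd nfsd /tmp/nfsd

# Confirm it appears in the mount table.
grep nfsd /proc/mounts
```

With the filesystem held mounted, the superblock stays alive across nfsctl
calls, so invalidate_inodes should no longer be reached on every client mount.
The dentry-side shrink_dcache_sb path is not helped by this, as noted above.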
Any help would be appreciated. :)
Thanks - JRS
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs