Hello,
I have a Sun Ultra-10 running Solaris 8 with current patches acting as an NFS
server, with 3 identical Sun clients and 25 Red Hat Linux 7.1 clients. The
filesystem mounts and 'stays' fine on the other Sun machines, but the mount on
the Linux boxes is flaky: it mounts fine at boot, and the data is there and
accessible, but after a short time the data is "gone" while the mount itself
is apparently still there. Here are the results of two df commands on one of
the Linux boxes:
[root@xxxxxxx weather]# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hdb6             25205252  10785776  13139100  46% /
/dev/hdb7             13108468   1179272  11263304  10% /usr
xxxxxxx.xxxxx.uni.edu:/space
                      12899648   4451328   8319328  35% /mnt/gemdata
Five minutes later:
[root@xxxxxxx ~]$ df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hdb6             25205252  10785760  13139116  46% /
/dev/hdb7             13108468   1179272  11263304  10% /usr
xxxxxxx.xxxx.uni.edu:/space
                             0         1         0   0% /mnt/gemdata
As you can see, the mount point persists but the data is 'gone'. The export
is configured in /etc/dfs/dfstab on the Sun NFS server and is shared to
"everyone" (no access list).
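In case it helps, the share line looks roughly like this (a sketch from
memory; /space is the exported path shown in the df output above, and the rw
option is my assumption about the defaults):

    # /etc/dfs/dfstab on the Solaris 8 server
    share -F nfs -o rw /space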
Additionally, on the Linux boxes I see these messages in /var/log/messages:
    johnstown kernel: nfs_statfs: statfs error = 13
    johnstown kernel: call_verify: server requires stronger authentication.
(Error 13 is EACCES, i.e. permission denied.)
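For reference, the clients mount the filesystem from /etc/fstab with a line
roughly like this (a sketch; the hostname is as in the df output, and
"defaults" is my assumption, since I haven't set any special mount options):

    # /etc/fstab on the Red Hat 7.1 clients
    xxxxxxx.xxxxx.uni.edu:/space  /mnt/gemdata  nfs  defaults  0 0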
Any idea what I can do to keep the NFS mount working on my Red Hat 7.1 Linux
boxes?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Patrick O'Reilly              Support Scientist
The STORM Project             [email protected]
208 Latham Hall               ph: 319-273-3789
University of Northern Iowa
Cedar Falls, IA 50614
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~