Hi all,
I am trying to do an NFS failover from one server to another in a 4-node
cluster. During this failover, I have a link from the /var/lib/nfs directory to
a mount that resides on a shared disk. When I want to move the "resources"
from one node to the other, this is how I initiate the move:
1) /etc/init.d/nfsserver stop
2) mount /var/lib/nfs on the failover node
3) move IP address to failover node
4) /etc/init.d/nfsserver start
5) /sbin/sm-notify -v <failover ip address>
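In scripted form, the same sequence on the node taking over the service is
roughly this (the device name, state mount point, and floating service IP
below are just placeholders for my setup):

#!/bin/sh
# Rough sketch of the failover, run on the node taking over the service.
# STATE_DEV, the mount point, and SERVICE_IP are placeholders.
STATE_DEV=/dev/sdb1        # shared disk that /var/lib/nfs links into
SERVICE_IP=10.0.0.100      # floating IP address the clients mount against

# 1) stop nfsd on the node giving up the service (run over there):
#        /etc/init.d/nfsserver stop

# 2) mount the shared NFS state where the /var/lib/nfs symlink points
mount "$STATE_DEV" /mnt/nfs-state

# 3) move the service IP address to this node
ip addr add "$SERVICE_IP/24" dev eth0

# 4) start nfsd on this node
/etc/init.d/nfsserver start

# 5) send reboot notifications so clients reclaim their locks
/sbin/sm-notify -v "$SERVICE_IP"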
Are these the correct steps? What I end up with is a "Stale NFS file handle"
error on all client nodes until I unmount and remount the exports. For my
application, this unmount/remount on the clients is undesirable.
What should I do? Any help would be appreciated.
(I am using SLES 10 SP1 as the platform:
uname -a
Linux mach1 2.6.16.46-0.12-smp #1 SMP Thu May 17 14:00:09 UTC 2007 x86_64
x86_64 x86_64 GNU/Linux )
Thank you,
Saurabh
Hi,
I will be sure to try that and see if I get the same problem ...
I am exporting an ext3 filesystem ... Does that change anything?
Thank you,
Saurabh
On 10/9/07, Jordi Prats <[email protected]> wrote:
>
> To avoid the stale handles on your clients you should add an fsid= tag to
> your exported filesystems, using the same fsid for the same filesystem on
> both servers. For example:
>
> /usr 10.0.0.1(rw,sync,no_root_squash,nohide,fsid=1)
> /usr 10.0.0.2(rw,sync,no_root_squash,nohide,fsid=1)
>
> /var 10.0.0.1(rw,sync,no_root_squash,nohide,fsid=2)
> /var 10.0.0.2(rw,sync,no_root_squash,nohide,fsid=2)
>
> regards,
> Jordi
>
> J. Bruce Fields wrote:
> > On Tue, Oct 09, 2007 at 03:40:22PM -0400, Saurabh Sehgal wrote:
> >> Hi all,
> >>
> >> I am trying to do an NFS failover from one server to another in a 4-node
> >> cluster. During this failover, I have a link from the /var/lib/nfs directory to
> >> a mount that resides on a shared disk. When I want to move the "resources"
> >> from one node to the other, this is how I initiate the move:
> >>
> >> 1) /etc/init.d/nfsserver stop
> >> 2) mount /var/lib/nfs on the failover node
> >> 3) move IP address to failover node
> >> 4) /etc/init.d/nfsserver start
> >> 5) /sbin/sm-notify -v <failover ip address>
> >>
> >> Are these the correct steps? What I end up with is a "Stale NFS file handle"
> >> error on all client nodes until I unmount and remount the exports. For my
> >> application, this unmount/remount on the clients is undesirable.
> >> What should I do? Any help would be appreciated.
> >
> > What filesystem are you exporting?
> >
> > --b.
On Wed, Oct 10, 2007 at 09:38:18AM -0400, Saurabh Sehgal wrote:
> I will be sure to try that and see if I get the same problem ...
>
> I am exporting an ext3 filesystem ...
So you're using a shared block device?
> Does that change anything?
It's just important to make sure you're exporting the *identical* ext3
filesystem from the two servers--copies made above the filesystem level
(using cp or rsync or whatever) aren't sufficient to ensure that
filehandles stay the same.
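For example, one quick sanity check (assuming the export sits on a shared
block device; /dev/sdb1 below is only a placeholder) is to compare the
filesystem UUID as seen from each server:

# run on both servers; the UUID must be identical on both
tune2fs -l /dev/sdb1 | grep UUID
blkid /dev/sdb1

If the two servers report different UUIDs, they are looking at two different
filesystems and the clients' filehandles won't survive the failover.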
--b.
On Tue, Oct 09, 2007 at 03:40:22PM -0400, Saurabh Sehgal wrote:
> Hi all,
>
> I am trying to do an NFS failover from one server to another in a 4-node
> cluster. During this failover, I have a link from the /var/lib/nfs directory to
> a mount that resides on a shared disk. When I want to move the "resources"
> from one node to the other, this is how I initiate the move:
>
> 1) /etc/init.d/nfsserver stop
> 2) mount /var/lib/nfs on the failover node
> 3) move IP address to failover node
> 4) /etc/init.d/nfsserver start
> 5) /sbin/sm-notify -v <failover ip address>
>
> Are these the correct steps? What I end up with is a "Stale NFS file handle"
> error on all client nodes until I unmount and remount the exports. For my
> application, this unmount/remount on the clients is undesirable.
> What should I do? Any help would be appreciated.
What filesystem are you exporting?
--b.
To avoid the stale handles on your clients you should add an fsid= tag to
your exported filesystems, using the same fsid for the same filesystem on
both servers. For example:
/usr 10.0.0.1(rw,sync,no_root_squash,nohide,fsid=1)
/usr 10.0.0.2(rw,sync,no_root_squash,nohide,fsid=1)
/var 10.0.0.1(rw,sync,no_root_squash,nohide,fsid=2)
/var 10.0.0.2(rw,sync,no_root_squash,nohide,fsid=2)
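After adding the fsid= entries on both servers, re-export and check that the
option shows up (plain exportfs usage, nothing cluster-specific):

exportfs -ra    # re-read /etc/exports and refresh the kernel's export table
exportfs -v     # list the active exports with their options, including fsid=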
regards,
Jordi
J. Bruce Fields wrote:
> On Tue, Oct 09, 2007 at 03:40:22PM -0400, Saurabh Sehgal wrote:
>> Hi all,
>>
>> I am trying to do an NFS failover from one server to another in a 4-node
>> cluster. During this failover, I have a link from the /var/lib/nfs directory to
>> a mount that resides on a shared disk. When I want to move the "resources"
>> from one node to the other, this is how I initiate the move:
>>
>> 1) /etc/init.d/nfsserver stop
>> 2) mount /var/lib/nfs on the failover node
>> 3) move IP address to failover node
>> 4) /etc/init.d/nfsserver start
>> 5) /sbin/sm-notify -v <failover ip address>
>>
>> Are these the correct steps? What I end up with is a "Stale NFS file handle"
>> error on all client nodes until I unmount and remount the exports. For my
>> application, this unmount/remount on the clients is undesirable.
>> What should I do? Any help would be appreciated.
>
> What filesystem are you exporting?
>
> --b.
>