2002-03-19 15:14:53

by Dumas Patrice

Subject: tunneling of nfs and nfs locks

Hi,
I have a Red Hat 7.2 system with nfs-utils 0.3.1, Linux 2.4.7-10, mount-2.11g, and
portmap 4.0. lockd is in the kernel. I use sec_rpc-1.13 to forward the RPC
connections (http://www.math.ualberta.ca/imaging/snfs/).

I have a filesystem /file_on_remote on the "remote" host that I want to mount on
the "local" host. I use the following /etc/fstab entry:

local:/file_on_remote /file_on_remote nfs rw,mountprog=201000,nfsprog=200003

I have a running rpc_psrv server which forwards the RPC requests with program
numbers 201000 and 200003 to host "remote"; service 201000 is forwarded as
100005 on "remote" and 200003 as 100003. After
mount /file_on_remote
I can access the files which are on "remote" in /file_on_remote, but the locking
mechanism doesn't work (I test it with mutt, which reports that it couldn't
create a lock; there may be cleaner ways to test for this...).
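Mutt is an indirect way to check locking; a small script that takes an fcntl lock directly gives a cleaner signal. A minimal sketch (the probe path on the NFS mount is an assumption; on Linux, fcntl locks on an NFS mount go through lockd, so a failure here points at the locking layer rather than at mutt):

```python
import fcntl
import os
import tempfile

def try_lock(path):
    """Return True if an exclusive fcntl (POSIX) lock could be taken."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)

# Local sanity check; for the real test, point this at a file on the
# NFS mount, e.g. try_lock("/file_on_remote/locktest").
probe = tempfile.NamedTemporaryFile()
print(try_lock(probe.name))  # a local lock should always succeed
```

On a working NFS mount this prints True; with broken lockd forwarding it returns False (ENOLCK).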

Is it normal that locking fails in this situation (NFS forwarded)? What could I
do to make it work?


This may or may not be related: something strange happens when I do, on
"local":
umount /file_on_remote
It responds:
Cannot MOUNTPROG RPC: RPC: Programm not registered

The filesystem seems to be cleanly unmounted, however.

More info: /etc/exports on "remote" (the IP of "remote" is 193.51.xxx.yyy):

/file_on_remote 193.51.xxx.yyy(rw)

rpcinfo on "remote" gives
[root@remote nfs-utils-0.3.1]# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 986 rquotad
100011 2 udp 986 rquotad
100011 1 tcp 989 rquotad
100011 2 tcp 989 rquotad
100005 1 udp 32769 mountd
100005 1 tcp 32769 mountd
100005 2 udp 32769 mountd
100005 2 tcp 32769 mountd
100005 3 udp 32769 mountd
100005 3 tcp 32769 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100021 1 udp 32770 nlockmgr
100021 3 udp 32770 nlockmgr
100021 4 udp 32770 nlockmgr
100024 1 udp 35165 status
100024 1 tcp 46514 status

and on "local"
[root@local nfs-utils-0.3.1]# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 1024 status
100024 1 tcp 1024 status
201000 3 udp 1036
200003 3 udp 1036
100021 1 udp 1037 nlockmgr
100021 3 udp 1037 nlockmgr
100021 4 udp 1037 nlockmgr

Pat

_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs


2002-03-19 15:38:48

by Trond Myklebust

Subject: Re: tunneling of nfs and nfs locks

>>>>> " " == Dumas Patrice <[email protected]> writes:

> I have a running rpc_psrv server which forwards the RPC
> requests with program numbers 201000 and 200003 to host
> "remote"; service 201000 is forwarded as 100005 on "remote"
> and 200003 as 100003. After mount /file_on_remote I can
> access the files which are on "remote" in /file_on_remote,
> but the locking mechanism doesn't work (I test it with mutt,
> which reports that it couldn't create a lock; there may be
> cleaner ways to test for this...).

> Is it normal that locking fails in this situation (NFS
> forwarded)? What could I do to make it work?

Locking uses a different protocol. It relies on the portmapper to look
up RPC service number 100021. For that reason, it will just find the
ordinary lock manager on localhost...
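For illustration, here is roughly what that portmapper lookup looks like on the wire. This is a sketch, not the kernel's code: it builds the raw ONC RPC v2 GETPORT call that asks portmap (program 100000) for the port of program 100021 (nlockmgr). Because the standard program number is baked into the request, a lockd tunnelled under a different number is never found. Sending the packet to UDP port 111 is left out, and assumes a reachable portmapper.

```python
import struct

PMAP_PROG, PMAP_VERS, PMAP_GETPORT = 100000, 2, 3
IPPROTO_UDP = 17

def getport_call(xid, prog, vers, proto=IPPROTO_UDP):
    """Build an ONC RPC v2 call asking portmap for prog/vers's port."""
    header = struct.pack(
        ">6I", xid, 0, 2, PMAP_PROG, PMAP_VERS, PMAP_GETPORT
    )  # xid, msg type CALL(0), RPC version 2, program, version, procedure
    auth = struct.pack(">4I", 0, 0, 0, 0)  # AUTH_NONE credential + verifier
    args = struct.pack(">4I", prog, vers, proto, 0)  # mapping to look up
    return header + auth + args

pkt = getport_call(0x1234, 100021, 4)  # look up NLM v4, as lockd does
```

The hardcoded 100021 in the last line is exactly the point Trond makes: there is no mount option to substitute a forwarded program number here.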

> This may or may not be related: something strange happens
> when I do, on "local": umount /file_on_remote It responds:
> Cannot MOUNTPROG RPC: RPC: Programm not registered

Yeah. umount has a hack that sends a MOUNTPROC_UMNT call to the
server. Unfortunately it ignores all options such as 'mountport='.

Cheers,
Trond


2002-03-19 15:38:39

by Tavis Barr

Subject: Re: tunneling of nfs and nfs locks


See the draft version of the HOWTO that I recently emailed out for
details on using SSH to forward NFS. Only mountd and nfsd are
forwarded; lockd and statd are separate modules and would be difficult
to tunnel, so file locking won't work. Your best bet for encrypted
traffic in a production environment would probably be IPsec
(FreeS/WAN). You could also use Samba.

Good luck,
Tavis


--
---------------------------------------------------------

Tavis Barr ,-~~-.___.
Assistant Professor of Economics / | ' \
Long Island University ( ) 0
202 Hoxie Hall \_/-, ,----'
C.W. Post Campus, 720 Northern Blvd. ==== //
Brookville, NY 11548 / \-'~; /~~~(O)
516-299-2321 / __/~| / |
[email protected] =( _____| (_________|

---------------------------------------------------------


2002-03-22 14:55:55

by Dumas Patrice

Subject: Re: tunneling of nfs and nfs locks

Hi,

> Locking uses a different protocol. It relies on the portmapper
> to look up RPC service number 100021. For that reason, it will
> just find the ordinary lock manager on localhost...

I got it to work by forwarding 100021 on the server. However, this is not a
definitive solution, because I cannot have locks with different servers.

I find it a bit strange that it is possible to specify another service number
for mountd and nfs, but not for lockd. Is that a design choice, or could it be
changed?

I have read a little of the statd code, and a somewhat old document on the web
about lockd/statd which I hope is still up to date. It seems to me that making
statd work with RPC forwarding would be more complicated, or even impossible,
because everything is done on a per-host basis. Since every lock operation will
appear to come from/go to the same host, lockd won't even bother with the status.

Does anybody have an idea about that?

Pat


2002-03-22 15:29:48

by Trond Myklebust

Subject: Re: tunneling of nfs and nfs locks

>>>>> " " == Dumas Patrice <[email protected]> writes:

> Hi,
>> Locking uses a different protocol. It relies on the
>> portmapper to look up RPC service number 100021. For that
>> reason, it will just find the ordinary lock manager on
>> localhost...

> I got it to work by forwarding 100021 on the server. However,
> this is not a definitive solution, because I cannot have locks
> with different servers.

Bug: statd will be monitoring the wrong machine. It will be told to
monitor 'localhost' by the kernel.

> I find it a bit strange that it is possible to specify
> another service number for mountd and nfs, but not for
> lockd. Is that a design choice, or could it be changed?

Not really. Unlike NFS itself, the NLM (NFS Lock Manager) protocol is
stateful. In order to get things like F_SETLKW to work, the client and
server sometimes have to swap roles: the server has to send an RPC
call back to the client in order to tell it "the other process that
was holding the lock has now released it, so I've granted the lock to
you".

This means that the server and the client both have to know about each
other's NLM service number. There is nothing in the protocol that
allows them to exchange this information, so it needs to be hardcoded.
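The F_SETLKW behaviour in question can be demonstrated locally (a sketch in Python, not NLM itself): a blocking lock request waits until the current holder releases, which is exactly the wake-up that the server-to-client NLM_GRANTED callback has to deliver over the network.

```python
import fcntl
import os
import tempfile
import time

def setlkw_demo(path):
    """Parent holds the lock; child blocks in a waiting lock request."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    fcntl.lockf(fd, fcntl.LOCK_EX)        # parent takes the lock
    pid = os.fork()
    if pid == 0:                          # child: a separate process,
        cfd = os.open(path, os.O_RDWR)    # hence a separate lock owner
        fcntl.lockf(cfd, fcntl.LOCK_EX)   # blocking request (F_SETLKW)
        os._exit(0)                       # reached only once granted
    time.sleep(0.2)
    fcntl.lockf(fd, fcntl.LOCK_UN)        # release -> child is "granted"
    _, status = os.waitpid(pid, 0)
    os.close(fd)
    return os.WEXITSTATUS(status)

with tempfile.NamedTemporaryFile() as f:
    print(setlkw_demo(f.name))  # 0: the child got the lock after release
```

Locally the kernel delivers that wake-up; over NLM it is the server that must reach the client, which is why both sides need to know each other's program number.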


Cheers,
Trond


2002-03-26 17:22:12

by Dumas Patrice

Subject: Re: tunneling of nfs and nfs locks

Hi,

> Bug: statd will be monitoring the wrong machine. It will be told to
> monitor 'localhost' by the kernel.

There is an even worse bug: if the lock is taken with F_SETLKW, then the server
will make the RPC callback saying the lock is released to itself...

However, I have thought of a (maybe complicated) workaround to enable working
lock management with RPC redirection. (I haven't yet thought about something
similar for statd.)

Imagine that it is possible to change the RPC program the client and the
server make callbacks to (let's say it is 200021). There is still a regular
lockd running which has registered the 100021 program number.

1) The client lockd needs a lock. It calls 200021 on the client.
2) A proxy RPC server forwards 200021 -> 100021 on the server. It remembers
that this request may need a response callback from the server, and records
the client name. It also registers itself to accept callbacks to 200021 on
the server.
3) Once it can grant the lock, the server's lockd makes a callback to 200021
on the server.
4) The RPC forwarder gets the server's callback. It finds which client it has
to be forwarded to (thanks to the recording done in 2)). Now it forwards
200021 -> 100021 on the client. The client lockd knows the lock is granted.
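The bookkeeping in steps 2) and 4) could be sketched as a small routing table keyed by something recoverable from the server's callback, for instance the NLM cookie. This is only an illustration: the class and method names are hypothetical, not taken from lockd or sec_rpc.

```python
class CallbackRouter:
    """Map a forwarded request's NLM cookie to the client it came from,
    so the server's grant callback can later be routed back."""

    def __init__(self):
        self._pending = {}  # cookie -> (client_host, client_port)

    def note_request(self, cookie, client_addr):
        # Step 2: remember who asked before forwarding to the server.
        self._pending[cookie] = client_addr

    def route_callback(self, cookie):
        # Step 4: look up (and forget) the client a grant belongs to.
        return self._pending.pop(cookie, None)

router = CallbackRouter()
router.note_request(b"\x00\x01", ("local", 1037))
print(router.route_callback(b"\x00\x01"))  # the recorded client address
```

Whether the cookie actually survives the round trip unchanged is exactly the open question about the lockd callback arguments below.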

Do you think this could be implemented? The part that doesn't seem obvious to
me is "It finds which client it has to be forwarded to (thanks to the
recording done in 2))". I began to read the lockd code, to find out whether
there is enough information to retrieve the client name. I suspect I will have
to dig into the callback arguments and use the lock or cookie field.

Does it seem feasible?

Pat
