There is a case where the NFSd file system can be mounted in a network
namespace different from the socket's one, as below:

"ip netns exec" creates new network and mount namespaces, which duplicates the
NFSd mount point created in the init_net context. Thus, stopping the NFS
server in the nested network context leads to destruction of the RPCBIND
client in init_net.
Then, on NFSd start in the nested network context, the rpc.nfsd process
creates a socket in the nested net and passes it into "write_ports", which
leads to creation of RPCBIND sockets in the init_net context for the same
reason (the NFSd mount point was created in the init_net context). An attempt
to register the passed socket in the nested net then leads to a panic, because
no RPCBIND client is present in the nested network namespace.

This patch adds a check that the passed socket's net matches the NFSd
superblock's one, and returns -EINVAL to user space otherwise.

Reported-by: Weng Meiling <[email protected]>
Signed-off-by: Stanislav Kinsbursky <[email protected]>
Cc: [email protected]
---
 fs/nfsd/nfsctl.c               |    5 +++++
 include/linux/sunrpc/svcsock.h |    1 +
 net/sunrpc/svcsock.c           |   11 +++++++++++
 3 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 7f55517..f34d9de 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -699,6 +699,11 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net)
 	if (err != 0 || fd < 0)
 		return -EINVAL;
 
+	if (svc_alien_sock(net, fd)) {
+		printk(KERN_ERR "%s: socket net is different to NFSd's one\n", __func__);
+		return -EINVAL;
+	}
+
 	err = nfsd_create_serv(net);
 	if (err != 0)
 		return err;
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 62fd1b7..947009e 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -56,6 +56,7 @@ int		svc_recv(struct svc_rqst *, long);
 int		svc_send(struct svc_rqst *);
 void		svc_drop(struct svc_rqst *);
 void		svc_sock_update_bufs(struct svc_serv *serv);
+bool		svc_alien_sock(struct net *net, int fd);
 int		svc_addsock(struct svc_serv *serv, const int fd,
 					char *name_return, const size_t len);
 void		svc_init_xprt_sock(void);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index b6e59f0..3ba5b87 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1397,6 +1397,17 @@ static struct svc_sock *svc_setup_socket(struct svc_serv *serv,
 	return svsk;
 }
 
+bool svc_alien_sock(struct net *net, int fd)
+{
+	int err;
+	struct socket *sock = sockfd_lookup(fd, &err);
+
+	if (sock && (sock_net(sock->sk) != net))
+		return true;
+	return false;
+}
+EXPORT_SYMBOL_GPL(svc_alien_sock);
+
 /**
  * svc_addsock - add a listener socket to an RPC service
  * @serv: pointer to RPC service to which to add a new listener
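
A side note on svc_alien_sock() as proposed: sockfd_lookup() takes a reference
on the socket's file, and the helper returns without dropping it, so each call
leaks a file reference. A minimal leak-free sketch, assuming the standard
sockfd_put() helper from include/linux/net.h (which does fput(sock->file)):

bool svc_alien_sock(struct net *net, int fd)
{
	int err;
	bool ret = false;
	struct socket *sock = sockfd_lookup(fd, &err);

	if (sock) {
		/* Compare the socket's namespace with NFSd's one. */
		if (sock_net(sock->sk) != net)
			ret = true;
		/* Drop the reference taken by sockfd_lookup(). */
		sockfd_put(sock);
	}
	return ret;
}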
On 04.01.2014 02:22, J. Bruce Fields wrote:
> On Mon, Dec 30, 2013 at 05:23:59PM +0300, Stanislav Kinsbursky wrote:
>> [patch description snipped]
>
> So it's the attempt to use a NULL ->rpcb_local_clnt4?
>
Correct.
--
Best regards,
Stanislav Kinsbursky
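
For context on the exchange above: each network namespace carries its own
per-net SUNRPC state (struct sunrpc_net), which holds the local rpcbind
clients rpcb_local_clnt and rpcb_local_clnt4. The sketch below illustrates
only the shape of the failure; it is not the actual net/sunrpc/rpcb_clnt.c
code, and rpcbind_set_msg is a hypothetical stand-in for the real rpcbind
SET call arguments.

static int register_with_local_rpcbind(struct net *net)
{
	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
	/* Hypothetical rpcbind SET request; the real code builds this
	 * from the service's program, version, netid and address. */
	struct rpc_message rpcbind_set_msg = { };

	/*
	 * In the scenario from the patch description, the rpcbind clients
	 * were created (and destroyed) in init_net, so in the nested
	 * namespace sn->rpcb_local_clnt4 is still NULL here. Passing a
	 * NULL client into rpc_call_sync() is the oops Bruce asks about.
	 */
	return rpc_call_sync(sn->rpcb_local_clnt4, &rpcbind_set_msg, 0);
}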
On Mon, Dec 30, 2013 at 05:23:59PM +0300, Stanislav Kinsbursky wrote:
> [...]
> An attempt to register the passed socket in the nested net then leads to a
> panic, because no RPCBIND client is present in the nested network
> namespace.
So it's the attempt to use a NULL ->rpcb_local_clnt4?
Interesting, thanks--applying with a minor fix to logged message.
--b.
> [rest of quoted patch snipped]
Hi Bruce,
Your for-3.14 git tree has been merged upstream, but this patch is not in it.
Did you forget this patch?
Thanks!
Weng Meiling
On 2014/1/4 6:22, J. Bruce Fields wrote:
> On Mon, Dec 30, 2013 at 05:23:59PM +0300, Stanislav Kinsbursky wrote:
>> [patch description snipped]
>
> So it's the attempt to use a NULL ->rpcb_local_clnt4?
>
> Interesting, thanks--applying with a minor fix to logged message.
>
> --b.
>
On Sat, Feb 15, 2014 at 09:51:20AM +0800, Weng Meiling wrote:
> Hi Bruce,
>
> Your for-3.14 git tree has been merged upstream, but this patch is not in
> it. Did you forget this patch?
Apologies, I'm not sure what happened.
Looking back at it.... The patch causes all my pynfs reboot recovery
tests to fail. They're just doing a "systemctl restart
nfs-server.service", and "systemctl status nfs-server.service" shows in
part
ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT (code=exited, status=1/FAILURE)
So the patch is causing rpc.nfsd to fail? No network namespaces should
be involved.
I haven't investigated any further.
--b.
> [rest of quoted thread snipped]