2021-09-28 13:31:48

by Trond Myklebust

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> When use-gss-proxy is set to 1, write_gssp() creates a rpc client in
> gssp_rpc_create(), this increases the netns refcount by 2, these
> refcounts are supposed to be released in rpcsec_gss_exit_net(), but
> it
> will never happen because rpcsec_gss_exit_net() is triggered only
> when
> the netns refcount gets to 0, specifically:
>     refcount=0 -> cleanup_net() -> ops_exit_list ->
> rpcsec_gss_exit_net
> It is a deadlock situation here, refcount will never get to 0 unless
> rpcsec_gss_exit_net() is called. So, in this case, the netns refcount
> should not be increased.
>
> In this case, xprt will take a netns refcount which is not supposed
> to be taken. Add a new flag to rpc_create_args called
> RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns refcount.
>
> It is safe not to hold the netns refcount, because when
> cleanup_net(), it
> will hold the gssp_lock and then shut down the rpc client
> synchronously.
>
>
I don't like this solution at all. Adding this kind of flag is going to
lead to problems down the road.

Is there any reason whatsoever why we need this RPC client to exist
when there is no active knfsd server? IOW: Is there any reason why we
shouldn't defer creating this RPC client for when knfsd starts up in
this net namespace, and why we can't shut it down when knfsd shuts
down?

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]



2021-09-28 13:50:37

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, Sep 28, 2021 at 01:30:17PM +0000, Trond Myklebust wrote:
> On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> > When use-gss-proxy is set to 1, write_gssp() creates a rpc client in
> > gssp_rpc_create(), this increases the netns refcount by 2, these
> > refcounts are supposed to be released in rpcsec_gss_exit_net(), but
> > it
> > will never happen because rpcsec_gss_exit_net() is triggered only
> > when
> > the netns refcount gets to 0, specifically:
> >     refcount=0 -> cleanup_net() -> ops_exit_list ->
> > rpcsec_gss_exit_net
> > It is a deadlock situation here, refcount will never get to 0 unless
> > rpcsec_gss_exit_net() is called. So, in this case, the netns refcount
> > should not be increased.
> >
> > In this case, xprt will take a netns refcount which is not supposed
> > to be taken. Add a new flag to rpc_create_args called
> > RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns refcount.
> >
> > It is safe not to hold the netns refcount, because when
> > cleanup_net(), it
> > will hold the gssp_lock and then shut down the rpc client
> > synchronously.
> >
> >
> I don't like this solution at all. Adding this kind of flag is going to
> lead to problems down the road.
>
> Is there any reason whatsoever why we need this RPC client to exist
> when there is no active knfsd server? IOW: Is there any reason why we
> shouldn't defer creating this RPC client for when knfsd starts up in
> this net namespace, and why we can't shut it down when knfsd shuts
> down?

The rpc create is done in the context of the process that writes to
/proc/net/rpc/use-gss-proxy to get the right namespaces. I don't know
how hard it would be to capture that information for a later create.

--b.
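
The namespace capture happens in the proc write handler itself. A condensed
sketch of write_gssp() as it stood at the time of this thread, from
net/sunrpc/auth_gss/svcauth_gss.c (input parsing elided, so treat the
details as approximate):

	static ssize_t write_gssp(struct file *file, const char __user *buf,
				  size_t count, loff_t *ppos)
	{
		struct net *net = PDE_DATA(file_inode(file));	/* the writer's netns */
		int res;

		/* parsing of the "1" written by userspace is elided */
		res = set_gssp_clnt(net);	/* creates the gssp rpc client now */
		if (res)
			return res;
		res = set_gss_proxy(net, 1);
		if (res)
			return res;
		return count;
	}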

2021-09-28 14:05:32

by Trond Myklebust

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, 2021-09-28 at 09:49 -0400, [email protected] wrote:
> On Tue, Sep 28, 2021 at 01:30:17PM +0000, Trond Myklebust wrote:
> > On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> > > When use-gss-proxy is set to 1, write_gssp() creates a rpc client
> > > in
> > > gssp_rpc_create(), this increases the netns refcount by 2, these
> > > refcounts are supposed to be released in rpcsec_gss_exit_net(),
> > > but
> > > it
> > > will never happen because rpcsec_gss_exit_net() is triggered only
> > > when
> > > the netns refcount gets to 0, specifically:
> > >     refcount=0 -> cleanup_net() -> ops_exit_list ->
> > > rpcsec_gss_exit_net
> > > It is a deadlock situation here, refcount will never get to 0
> > > unless
> > > rpcsec_gss_exit_net() is called. So, in this case, the netns
> > > refcount
> > > should not be increased.
> > >
> > > In this case, xprt will take a netns refcount which is not
> > > supposed
> > > to be taken. Add a new flag to rpc_create_args called
> > > RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns refcount.
> > >
> > > It is safe not to hold the netns refcount, because when
> > > cleanup_net(), it
> > > will hold the gssp_lock and then shut down the rpc client
> > > synchronously.
> > >
> > >
> > I don't like this solution at all. Adding this kind of flag is
> > going to
> > lead to problems down the road.
> >
> > Is there any reason whatsoever why we need this RPC client to exist
> > when there is no active knfsd server? IOW: Is there any reason why
> > we
> > shouldn't defer creating this RPC client for when knfsd starts up
> > in
> > this net namespace, and why we can't shut it down when knfsd shuts
> > down?
>
> The rpc create is done in the context of the process that writes to
> /proc/net/rpc/use-gss-proxy to get the right namespaces.  I don't
> know
> how hard it would be to capture that information for a later create.
>

svcauth_gss_proxy_init() uses the net namespace SVC_NET(rqstp) (i.e.
the knfsd namespace) in the call to gssp_accept_sec_context_upcall().

IOW: the net namespace used in the call to find the RPC client is the
one set up by knfsd, and so if use-gss-proxy was set in a different
namespace than the one used by knfsd, then it won't be found.

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2021-09-28 14:17:43

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, Sep 28, 2021 at 02:04:49PM +0000, Trond Myklebust wrote:
> On Tue, 2021-09-28 at 09:49 -0400, [email protected] wrote:
> > On Tue, Sep 28, 2021 at 01:30:17PM +0000, Trond Myklebust wrote:
> > > On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> > > > When use-gss-proxy is set to 1, write_gssp() creates a rpc client
> > > > in
> > > > gssp_rpc_create(), this increases the netns refcount by 2, these
> > > > refcounts are supposed to be released in rpcsec_gss_exit_net(),
> > > > but
> > > > it
> > > > will never happen because rpcsec_gss_exit_net() is triggered only
> > > > when
> > > > the netns refcount gets to 0, specifically:
> > > >     refcount=0 -> cleanup_net() -> ops_exit_list ->
> > > > rpcsec_gss_exit_net
> > > > It is a deadlock situation here, refcount will never get to 0
> > > > unless
> > > > rpcsec_gss_exit_net() is called. So, in this case, the netns
> > > > refcount
> > > > should not be increased.
> > > >
> > > > In this case, xprt will take a netns refcount which is not
> > > > supposed
> > > > to be taken. Add a new flag to rpc_create_args called
> > > > RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns refcount.
> > > >
> > > > It is safe not to hold the netns refcount, because when
> > > > cleanup_net(), it
> > > > will hold the gssp_lock and then shut down the rpc client
> > > > synchronously.
> > > >
> > > >
> > > I don't like this solution at all. Adding this kind of flag is
> > > going to
> > > lead to problems down the road.
> > >
> > > Is there any reason whatsoever why we need this RPC client to exist
> > > when there is no active knfsd server? IOW: Is there any reason why
> > > we
> > > shouldn't defer creating this RPC client for when knfsd starts up
> > > in
> > > this net namespace, and why we can't shut it down when knfsd shuts
> > > down?
> >
> > The rpc create is done in the context of the process that writes to
> > /proc/net/rpc/use-gss-proxy to get the right namespaces.  I don't
> > know
> > how hard it would be to capture that information for a later create.
> >
>
> svcauth_gss_proxy_init() uses the net namespace SVC_NET(rqstp) (i.e.
> the knfsd namespace) in the call to gssp_accept_sec_context_upcall().
>
> IOW: the net namespace used in the call to find the RPC client is the
> one set up by knfsd, and so if use-gss-proxy was set in a different
> namespace than the one used by knfsd, then it won't be found.

Right. If you've got multiple containers, you don't want to find a
gss-proxy from a different container.

--b.

2021-09-28 14:28:19

by Trond Myklebust

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, 2021-09-28 at 10:17 -0400, [email protected] wrote:
> On Tue, Sep 28, 2021 at 02:04:49PM +0000, Trond Myklebust wrote:
> > On Tue, 2021-09-28 at 09:49 -0400, [email protected] wrote:
> > > On Tue, Sep 28, 2021 at 01:30:17PM +0000, Trond Myklebust wrote:
> > > > On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> > > > > When use-gss-proxy is set to 1, write_gssp() creates a rpc
> > > > > client
> > > > > in
> > > > > gssp_rpc_create(), this increases the netns refcount by 2,
> > > > > these
> > > > > refcounts are supposed to be released in
> > > > > rpcsec_gss_exit_net(),
> > > > > but
> > > > > it
> > > > > will never happen because rpcsec_gss_exit_net() is triggered
> > > > > only
> > > > > when
> > > > > the netns refcount gets to 0, specifically:
> > > > >     refcount=0 -> cleanup_net() -> ops_exit_list ->
> > > > > rpcsec_gss_exit_net
> > > > > It is a deadlock situation here, refcount will never get to 0
> > > > > unless
> > > > > rpcsec_gss_exit_net() is called. So, in this case, the netns
> > > > > refcount
> > > > > should not be increased.
> > > > >
> > > > > In this case, xprt will take a netns refcount which is not
> > > > > supposed
> > > > > to be taken. Add a new flag to rpc_create_args called
> > > > > RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns
> > > > > refcount.
> > > > >
> > > > > It is safe not to hold the netns refcount, because when
> > > > > cleanup_net(), it
> > > > > will hold the gssp_lock and then shut down the rpc client
> > > > > synchronously.
> > > > >
> > > > >
> > > > I don't like this solution at all. Adding this kind of flag is
> > > > going to
> > > > lead to problems down the road.
> > > >
> > > > Is there any reason whatsoever why we need this RPC client to
> > > > exist
> > > > when there is no active knfsd server? IOW: Is there any reason
> > > > why
> > > > we
> > > > shouldn't defer creating this RPC client for when knfsd starts
> > > > up
> > > > in
> > > > this net namespace, and why we can't shut it down when knfsd
> > > > shuts
> > > > down?
> > >
> > > The rpc create is done in the context of the process that writes
> > > to
> > > /proc/net/rpc/use-gss-proxy to get the right namespaces.  I don't
> > > know
> > > how hard it would be to capture that information for a later create.
> > >
> >
> > svcauth_gss_proxy_init() uses the net namespace SVC_NET(rqstp)
> > (i.e.
> > the knfsd namespace) in the call to
> > gssp_accept_sec_context_upcall().
> >
> > IOW: the net namespace used in the call to find the RPC client is
> > the
> > one set up by knfsd, and so if use-gss-proxy was set in a different
> > namespace than the one used by knfsd, then it won't be found.
>
> Right.  If you've got multiple containers, you don't want to find a
> gss-proxy from a different container.
>

Exactly. So there is no namespace context to capture in the RPC client
other than what's already in knfsd.

The RPC client doesn't capture any other process context. It can cache
a user cred in order to capture the user namespace, but that
information appears to be unused by this gssd RPC client.

So I'll repeat my question: Why can't we set this gssd RPC client up at
knfsd startup time, and tear it down when knfsd is shut down?

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2021-09-28 14:58:18

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, Sep 28, 2021 at 02:27:33PM +0000, Trond Myklebust wrote:
> On Tue, 2021-09-28 at 10:17 -0400, [email protected] wrote:
> > On Tue, Sep 28, 2021 at 02:04:49PM +0000, Trond Myklebust wrote:
> > > On Tue, 2021-09-28 at 09:49 -0400, [email protected] wrote:
> > > > On Tue, Sep 28, 2021 at 01:30:17PM +0000, Trond Myklebust wrote:
> > > > > On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> > > > > > When use-gss-proxy is set to 1, write_gssp() creates a rpc
> > > > > > client
> > > > > > in
> > > > > > gssp_rpc_create(), this increases the netns refcount by 2,
> > > > > > these
> > > > > > refcounts are supposed to be released in
> > > > > > rpcsec_gss_exit_net(),
> > > > > > but
> > > > > > it
> > > > > > will never happen because rpcsec_gss_exit_net() is triggered
> > > > > > only
> > > > > > when
> > > > > > the netns refcount gets to 0, specifically:
> > > > > >     refcount=0 -> cleanup_net() -> ops_exit_list ->
> > > > > > rpcsec_gss_exit_net
> > > > > > It is a deadlock situation here, refcount will never get to 0
> > > > > > unless
> > > > > > rpcsec_gss_exit_net() is called. So, in this case, the netns
> > > > > > refcount
> > > > > > should not be increased.
> > > > > >
> > > > > > In this case, xprt will take a netns refcount which is not
> > > > > > supposed
> > > > > > to be taken. Add a new flag to rpc_create_args called
> > > > > > RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns
> > > > > > refcount.
> > > > > >
> > > > > > It is safe not to hold the netns refcount, because when
> > > > > > cleanup_net(), it
> > > > > > will hold the gssp_lock and then shut down the rpc client
> > > > > > synchronously.
> > > > > >
> > > > > >
> > > > > I don't like this solution at all. Adding this kind of flag is
> > > > > going to
> > > > > lead to problems down the road.
> > > > >
> > > > > Is there any reason whatsoever why we need this RPC client to
> > > > > exist
> > > > > when there is no active knfsd server? IOW: Is there any reason
> > > > > why
> > > > > we
> > > > > shouldn't defer creating this RPC client for when knfsd starts
> > > > > up
> > > > > in
> > > > > this net namespace, and why we can't shut it down when knfsd
> > > > > shuts
> > > > > down?
> > > >
> > > > The rpc create is done in the context of the process that writes
> > > > to
> > > > /proc/net/rpc/use-gss-proxy to get the right namespaces.  I don't
> > > > know
> > > > how hard it would be capture that information for a later create.
> > > >
> > >
> > > svcauth_gss_proxy_init() uses the net namespace SVC_NET(rqstp)
> > > (i.e.
> > > the knfsd namespace) in the call to
> > > gssp_accept_sec_context_upcall().
> > >
> > > IOW: the net namespace used in the call to find the RPC client is
> > > the
> > > one set up by knfsd, and so if use-gss-proxy was set in a different
> > > namespace than the one used by knfsd, then it won't be found.
> >
> > Right.  If you've got multiple containers, you don't want to find a
> > gss-proxy from a different container.
> >
>
> Exactly. So there is no namespace context to capture in the RPC client
> other than what's already in knfsd.
>
> The RPC client doesn't capture any other process context. It can cache
> a user cred in order to capture the user namespace, but that
> information appears to be unused by this gssd RPC client.

OK, that's good to know, thanks.

It's doing a path lookup (it uses an AF_LOCAL socket), and I'm not
assuming that will get the same result across containers. Is there an
easy way to do just that path lookup here and delay the rest till knfsd
startup?

--b.

>
> So I'll repeat my question: Why can't we set this gssd RPC client up at
> knfsd startup time, and tear it down when knfsd is shut down?
>
> --
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> [email protected]
>
>
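
The path lookup in question: the gssp client is created over an AF_LOCAL
transport, roughly as below (condensed from gssp_rpc_create() in
net/sunrpc/auth_gss/gss_rpc_upcall.c; treat the field details as
approximate):

	static int gssp_rpc_create(struct net *net, struct rpc_clnt **_clnt)
	{
		static const struct sockaddr_un gssp_localaddr = {
			.sun_family	= AF_LOCAL,
			.sun_path	= "/var/run/gssproxy.sock",
		};
		struct rpc_create_args args = {
			.net		= net,
			.protocol	= XPRT_TRANSPORT_LOCAL,
			.address	= (struct sockaddr *)&gssp_localaddr,
			.addrsize	= sizeof(gssp_localaddr),
			.servername	= "localhost",
			.program	= &gssp_program,
			.version	= GSSPROXY_VERS_1,
			.authflavor	= RPC_AUTH_NULL,
			.flags		= RPC_CLNT_CREATE_NOPING,
		};
		struct rpc_clnt *clnt = rpc_create(&args);

		if (IS_ERR(clnt))
			return PTR_ERR(clnt);
		*_clnt = clnt;
		return 0;
	}

The sun_path above is resolved relative to the creating task's filesystem
root, which is why it matters where this runs.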

2021-09-28 15:37:38

by Trond Myklebust

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, 2021-09-28 at 10:57 -0400, [email protected] wrote:
> On Tue, Sep 28, 2021 at 02:27:33PM +0000, Trond Myklebust wrote:
> > On Tue, 2021-09-28 at 10:17 -0400, [email protected] wrote:
> > > On Tue, Sep 28, 2021 at 02:04:49PM +0000, Trond Myklebust wrote:
> > > > On Tue, 2021-09-28 at 09:49 -0400, [email protected] wrote:
> > > > > On Tue, Sep 28, 2021 at 01:30:17PM +0000, Trond Myklebust
> > > > > wrote:
> > > > > > On Tue, 2021-09-28 at 11:14 +0800, Wang Hai wrote:
> > > > > > > When use-gss-proxy is set to 1, write_gssp() creates a
> > > > > > > rpc
> > > > > > > client
> > > > > > > in
> > > > > > > gssp_rpc_create(), this increases the netns refcount by
> > > > > > > 2,
> > > > > > > these
> > > > > > > refcounts are supposed to be released in
> > > > > > > rpcsec_gss_exit_net(),
> > > > > > > but
> > > > > > > it
> > > > > > > will never happen because rpcsec_gss_exit_net() is
> > > > > > > triggered
> > > > > > > only
> > > > > > > when
> > > > > > > the netns refcount gets to 0, specifically:
> > > > > > >     refcount=0 -> cleanup_net() -> ops_exit_list ->
> > > > > > > rpcsec_gss_exit_net
> > > > > > > It is a deadlock situation here, refcount will never get
> > > > > > > to 0
> > > > > > > unless
> > > > > > > rpcsec_gss_exit_net() is called. So, in this case, the
> > > > > > > netns
> > > > > > > refcount
> > > > > > > should not be increased.
> > > > > > >
> > > > > > > In this case, xprt will take a netns refcount which is
> > > > > > > not
> > > > > > > supposed
> > > > > > > to be taken. Add a new flag to rpc_create_args called
> > > > > > > RPC_CLNT_CREATE_NO_NET_REF for not increasing the netns
> > > > > > > refcount.
> > > > > > >
> > > > > > > It is safe not to hold the netns refcount, because when
> > > > > > > cleanup_net(), it
> > > > > > > will hold the gssp_lock and then shut down the rpc client
> > > > > > > synchronously.
> > > > > > >
> > > > > > >
> > > > > > I don't like this solution at all. Adding this kind of flag
> > > > > > is
> > > > > > going to
> > > > > > lead to problems down the road.
> > > > > >
> > > > > > Is there any reason whatsoever why we need this RPC client
> > > > > > to
> > > > > > exist
> > > > > > when there is no active knfsd server? IOW: Is there any
> > > > > > reason
> > > > > > why
> > > > > > we
> > > > > > shouldn't defer creating this RPC client for when knfsd
> > > > > > starts
> > > > > > up
> > > > > > in
> > > > > > this net namespace, and why we can't shut it down when
> > > > > > knfsd
> > > > > > shuts
> > > > > > down?
> > > > >
> > > > > The rpc create is done in the context of the process that
> > > > > writes
> > > > > to
> > > > > /proc/net/rpc/use-gss-proxy to get the right namespaces.  I
> > > > > don't
> > > > > know
> > > > > how hard it would be to capture that information for a later
> > > > > create.
> > > > >
> > > >
> > > > svcauth_gss_proxy_init() uses the net namespace SVC_NET(rqstp)
> > > > (i.e.
> > > > the knfsd namespace) in the call to
> > > > gssp_accept_sec_context_upcall().
> > > >
> > > > IOW: the net namespace used in the call to find the RPC client
> > > > is
> > > > the
> > > > one set up by knfsd, and so if use-gss-proxy was set in a
> > > > different
> > > > namespace than the one used by knfsd, then it won't be found.
> > >
> > > Right.  If you've got multiple containers, you don't want to find
> > > a
> > > gss-proxy from a different container.
> > >
> >
> > Exactly. So there is no namespace context to capture in the RPC
> > client
> > other than what's already in knfsd.
> >
> > The RPC client doesn't capture any other process context. It can
> > cache
> > a user cred in order to capture the user namespace, but that
> > information appears to be unused by this gssd RPC client.
>
> OK, that's good to know, thanks.
>
> It's doing a path lookup (it uses an AF_LOCAL socket), and I'm not
> assuming that will get the same result across containers.  Is there
> an
> easy way to do just that path lookup here and delay the rest till
> knfsd
> startup?
>

What is the use case here? Starting the gssd daemon or knfsd in
separate chrooted environments? We already know that they have to be
started in the same net namespace, which pretty much ensures it has to
be the same container.

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2021-09-28 15:43:44

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, Sep 28, 2021 at 03:36:58PM +0000, Trond Myklebust wrote:
> What is the use case here? Starting the gssd daemon or knfsd in
> separate chrooted environments? We already know that they have to be
> started in the same net namespace, which pretty much ensures it has to
> be the same container.

Somehow I forgot that knfsd startup is happening in some real process's
context too (not just a kthread).

OK, great, I agree, that sounds like it should work.

--b.

2021-09-29 21:33:20

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, Sep 28, 2021 at 11:43:00AM -0400, [email protected] wrote:
> On Tue, Sep 28, 2021 at 03:36:58PM +0000, Trond Myklebust wrote:
> > What is the use case here? Starting the gssd daemon or knfsd in
> > separate chrooted environments? We already know that they have to be
> > started in the same net namespace, which pretty much ensures it has to
> > be the same container.
>
> Somehow I forgot that knfsd startup is happening in some real process's
> context too (not just a kthread).
>
> OK, great, I agree, that sounds like it should work.

Wang Hai, do you want to try that, or should I?

--b.

2021-09-30 02:09:38

by Wang Hai

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1


On 2021/9/30 5:12, [email protected] wrote:
> On Tue, Sep 28, 2021 at 11:43:00AM -0400, [email protected] wrote:
>> On Tue, Sep 28, 2021 at 03:36:58PM +0000, Trond Myklebust wrote:
>>> What is the use case here? Starting the gssd daemon or knfsd in
>>> separate chrooted environments? We already know that they have to be
>>> started in the same net namespace, which pretty much ensures it has to
>>> be the same container.
>> Somehow I forgot that knfsd startup is happening in some real process's
>> context too (not just a kthread).
>>
>> OK, great, I agree, that sounds like it should work.
> Wang Hai, do you want to try that, or should I?
>
> --b.
Thank you, I'd be glad to. I tried the solution Myklebust suggested
yesterday, but I couldn't get it working very well. I'd appreciate it
if you could help finish it, and I can help test it once it's done.

--
Wang Hai

>

2021-11-09 22:38:22

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Thu, Sep 30, 2021 at 09:56:03AM +0800, wanghai (M) wrote:
>
> On 2021/9/30 5:12, [email protected] wrote:
> >On Tue, Sep 28, 2021 at 11:43:00AM -0400, [email protected] wrote:
> >>On Tue, Sep 28, 2021 at 03:36:58PM +0000, Trond Myklebust wrote:
> >>>What is the use case here? Starting the gssd daemon or knfsd in
> >>>separate chrooted environments? We already know that they have to be
> >>>started in the same net namespace, which pretty much ensures it has to
> >>>be the same container.
> >>Somehow I forgot that knfsd startup is happening in some real process's
> >>context too (not just a kthread).
> >>
> >>OK, great, I agree, that sounds like it should work.

Ugh, took me a while to get back to this and I went down a couple dead
ends.

The result from selinux's point of view is that rpc.nfsd is doing things
it previously only expected gssproxy to do. Fixable with an update to
selinux policy. And easily fixed in the meantime by cut-and-pasting the
suggestions from the logs.

Still, the result's that mounts fail when you update the kernel, which
seems a violation of our usual rules about regressions. I'd like to do
better.

--b.

2021-11-17 19:19:10

by J. Bruce Fields

Subject: Re: [PATCH net 2/2] auth_gss: Fix deadlock that blocks rpcsec_gss_exit_net when use-gss-proxy==1

On Tue, Nov 09, 2021 at 12:21:11PM -0500, [email protected] wrote:
> On Thu, Sep 30, 2021 at 09:56:03AM +0800, wanghai (M) wrote:
> >
> > On 2021/9/30 5:12, [email protected] wrote:
> > >On Tue, Sep 28, 2021 at 11:43:00AM -0400, [email protected] wrote:
> > >>On Tue, Sep 28, 2021 at 03:36:58PM +0000, Trond Myklebust wrote:
> > >>>What is the use case here? Starting the gssd daemon or knfsd in
> > >>>separate chrooted environments? We already know that they have to be
> > >>>started in the same net namespace, which pretty much ensures it has to
> > >>>be the same container.
> > >>Somehow I forgot that knfsd startup is happening in some real process's
> > >>context too (not just a kthread).
> > >>
> > >>OK, great, I agree, that sounds like it should work.
>
> Ugh, took me a while to get back to this and I went down a couple dead
> ends.
>
> The result from selinux's point of view is that rpc.nfsd is doing things
> it previously only expected gssproxy to do. Fixable with an update to
> selinux policy. And easily fixed in the meantime by cut-and-pasting the
> suggestions from the logs.
>
> Still, the result's that mounts fail when you update the kernel, which
> seems a violation of our usual rules about regressions. I'd like to do
> better.

So, I'm not applying this, but here's the patch, for what it's worth.

I'm not sure what the alternative is. Do we get a chance to do
something when gssproxy closes the connection, maybe?

--b.

commit 9fc4ae28ec95
Author: J. Bruce Fields <[email protected]>
Date: Thu Nov 4 16:55:28 2021 -0400

    nfsd: connect to gss-proxy on nfsd start

    We communicate with gss-proxy by rpc over a local unix socket.
    The rpc client is set up in the context of the process that writes to
    /proc/net/rpc/use-gss-proxy. Unfortunately that leaves us with no clear
    place to shut down that client; we've been trying to shut it down when
    the network namespace is destroyed, but the rpc client holds a reference
    on the network namespace, so that never happens.

    We do need to create the client in the context of a process that's
    likely to be in the correct namespace. We can use rpc.nfsd instead. In
    particular, we create the rpc client when sockets are added to an
    rpc server.

    Signed-off-by: J. Bruce Fields <[email protected]>

diff --git a/include/linux/sunrpc/svcauth.h b/include/linux/sunrpc/svcauth.h
index 6d9cc9080aca..b60ecb51511d 100644
--- a/include/linux/sunrpc/svcauth.h
+++ b/include/linux/sunrpc/svcauth.h
@@ -183,4 +183,12 @@ static inline unsigned long hash_mem(char const *buf, int length, int bits)
 	return full_name_hash(NULL, buf, length) >> (32 - bits);
 }
 
+struct gssp_clnt_ops {
+	struct module *owner;
+	int (*constructor)(struct net *);
+	void (*destructor)(struct net *);
+};
+
+extern struct gssp_clnt_ops gpops;
+
 #endif /* _LINUX_SUNRPC_SVCAUTH_H_ */
diff --git a/net/sunrpc/auth_gss/gss_rpc_upcall.c b/net/sunrpc/auth_gss/gss_rpc_upcall.c
index 61c276bddaf2..04a311305b26 100644
--- a/net/sunrpc/auth_gss/gss_rpc_upcall.c
+++ b/net/sunrpc/auth_gss/gss_rpc_upcall.c
@@ -122,7 +122,6 @@ static int gssp_rpc_create(struct net *net, struct rpc_clnt **_clnt)
 
 void init_gssp_clnt(struct sunrpc_net *sn)
 {
-	mutex_init(&sn->gssp_lock);
 	sn->gssp_clnt = NULL;
 }
 
@@ -132,25 +131,23 @@ int set_gssp_clnt(struct net *net)
 	struct rpc_clnt *clnt;
 	int ret;
 
-	mutex_lock(&sn->gssp_lock);
-	ret = gssp_rpc_create(net, &clnt);
-	if (!ret) {
-		if (sn->gssp_clnt)
-			rpc_shutdown_client(sn->gssp_clnt);
+	if (!sn->gssp_clnt) {
+		ret = gssp_rpc_create(net, &clnt);
+		if (ret)
+			return ret;
 		sn->gssp_clnt = clnt;
 	}
-	mutex_unlock(&sn->gssp_lock);
-	return ret;
+	return 0;
 }
 
-void clear_gssp_clnt(struct sunrpc_net *sn)
+void clear_gssp_clnt(struct net *net)
 {
-	mutex_lock(&sn->gssp_lock);
+	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+
 	if (sn->gssp_clnt) {
 		rpc_shutdown_client(sn->gssp_clnt);
 		sn->gssp_clnt = NULL;
 	}
-	mutex_unlock(&sn->gssp_lock);
 }
 
 static struct rpc_clnt *get_gssp_clnt(struct sunrpc_net *sn)
diff --git a/net/sunrpc/auth_gss/gss_rpc_upcall.h b/net/sunrpc/auth_gss/gss_rpc_upcall.h
index 31e96344167e..fd70f4fb56a9 100644
--- a/net/sunrpc/auth_gss/gss_rpc_upcall.h
+++ b/net/sunrpc/auth_gss/gss_rpc_upcall.h
@@ -31,6 +31,6 @@ void gssp_free_upcall_data(struct gssp_upcall_data *data);
 
 void init_gssp_clnt(struct sunrpc_net *);
 int set_gssp_clnt(struct net *);
-void clear_gssp_clnt(struct sunrpc_net *);
+void clear_gssp_clnt(struct net *);
 
 #endif /* _GSS_RPC_UPCALL_H */
diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
index 7dba6a9c213a..79212437558f 100644
--- a/net/sunrpc/auth_gss/svcauth_gss.c
+++ b/net/sunrpc/auth_gss/svcauth_gss.c
@@ -1449,9 +1449,6 @@ static ssize_t write_gssp(struct file *file, const char __user *buf,
 		return res;
 	if (i != 1)
 		return -EINVAL;
-	res = set_gssp_clnt(net);
-	if (res)
-		return res;
 	res = set_gss_proxy(net, 1);
 	if (res)
 		return res;
@@ -1505,10 +1502,8 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net)
 {
 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
 
-	if (sn->use_gssp_proc) {
+	if (sn->use_gssp_proc)
 		remove_proc_entry("use-gss-proxy", sn->proc_net_rpc);
-		clear_gssp_clnt(sn);
-	}
 }
 
 #else /* CONFIG_PROC_FS */
@@ -1999,9 +1994,16 @@ gss_svc_shutdown_net(struct net *net)
 	rsc_cache_destroy_net(net);
 }
 
+struct gssp_clnt_ops mygpops = {
+	.owner = THIS_MODULE,
+	.constructor = set_gssp_clnt,
+	.destructor = clear_gssp_clnt,
+};
+
 int
 gss_svc_init(void)
 {
+	gpops = mygpops;
 	return svc_auth_register(RPC_AUTH_GSS, &svcauthops_gss);
 }
 
@@ -2009,4 +2011,5 @@ void
 gss_svc_shutdown(void)
 {
 	svc_auth_unregister(RPC_AUTH_GSS);
+	gpops = (struct gssp_clnt_ops){ };	/* drop the callbacks on unload */
 }
diff --git a/net/sunrpc/netns.h b/net/sunrpc/netns.h
index 7ec10b92bea1..7d8653b3d81b 100644
--- a/net/sunrpc/netns.h
+++ b/net/sunrpc/netns.h
@@ -28,6 +28,7 @@ struct sunrpc_net {
 	unsigned int rpcb_is_af_local : 1;
 
 	struct mutex gssp_lock;
+	int gssp_users;
 	struct rpc_clnt *gssp_clnt;
 	int use_gss_proxy;
 	int pipe_version;
diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
index 691c0000e9ea..463c975151d7 100644
--- a/net/sunrpc/sunrpc_syms.c
+++ b/net/sunrpc/sunrpc_syms.c
@@ -54,6 +54,8 @@ static __net_init int sunrpc_init_net(struct net *net)
 	INIT_LIST_HEAD(&sn->all_clients);
 	spin_lock_init(&sn->rpc_client_lock);
 	spin_lock_init(&sn->rpcb_clnt_lock);
+	mutex_init(&sn->gssp_lock);
+	sn->gssp_users = 0;
 	return 0;
 
 err_pipefs:
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 1e99ba1b9d72..30d9a9779093 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -19,6 +19,7 @@
 #include <linux/module.h>
 #include <linux/netdevice.h>
 #include <trace/events/sunrpc.h>
+#include "netns.h"
 
 #define RPCDBG_FACILITY	RPCDBG_SVCXPRT
 
@@ -322,6 +323,37 @@ static int _svc_create_xprt(struct svc_serv *serv, const char *xprt_name,
 	return -EPROTONOSUPPORT;
 }
 
+struct gssp_clnt_ops gpops = {};
+EXPORT_SYMBOL_GPL(gpops);
+
+int get_gssp_clnt(struct net *net)
+{
+	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+	int ret = 0;
+
+	mutex_lock(&sn->gssp_lock);
+
+	if (gpops.constructor && try_module_get(gpops.owner)) {
+		ret = gpops.constructor(net);
+		module_put(gpops.owner);
+	}
+	sn->gssp_users++;
+	mutex_unlock(&sn->gssp_lock);
+
+	return ret;
+}
+
+void put_gssp_clnt(struct net *net)
+{
+	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+
+	mutex_lock(&sn->gssp_lock);
+	sn->gssp_users--;
+	if (!sn->gssp_users && sn->gssp_clnt)
+		gpops.destructor(net);
+	mutex_unlock(&sn->gssp_lock);
+}
+
 int svc_create_xprt(struct svc_serv *serv, const char *xprt_name,
 		struct net *net, const int family,
 		const unsigned short port, int flags,
@@ -329,11 +361,15 @@ int svc_create_xprt(struct svc_serv *serv, const char *xprt_name,
 {
 	int err;
 
+	get_gssp_clnt(net);
+
 	err = _svc_create_xprt(serv, xprt_name, net, family, port, flags, cred);
 	if (err == -EPROTONOSUPPORT) {
 		request_module("svc%s", xprt_name);
 		err = _svc_create_xprt(serv, xprt_name, net, family, port, flags, cred);
 	}
+	if (err < 0)
+		put_gssp_clnt(net);
 	return err;
 }
 EXPORT_SYMBOL_GPL(svc_create_xprt);
@@ -1038,6 +1074,7 @@ static void svc_delete_xprt(struct svc_xprt *xprt)
 {
 	struct svc_serv *serv = xprt->xpt_server;
 	struct svc_deferred_req *dr;
+	struct net *net = xprt->xpt_net;
 
 	if (test_and_set_bit(XPT_DEAD, &xprt->xpt_flags))
 		return;
@@ -1058,6 +1095,8 @@ static void svc_delete_xprt(struct svc_xprt *xprt)
 		kfree(dr);
 
 	call_xpt_users(xprt);
+	if (!test_bit(XPT_TEMP, &xprt->xpt_flags))
+		put_gssp_clnt(net);
 	svc_xprt_put(xprt);
 }
 