2013-10-29 06:42:16

by NeilBrown

Subject: [PATCH/RFC] - hard-to-hit race in xprtsock.


We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
but the relevant code doesn't seem to have changed much).

The thread that crashed was in
xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.

'sock' in this last function is NULL.

The only way I can imagine this happening is if some other thread called

xs_close -> xs_reset_transport -> sock_release -> inet_release

in a very small window a moment earlier.

As far as I can tell, xs_close is only called with XPRT_LOCKED set.

xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set, which would
exclude the two from running at the same time.


However xs_tcp_schedule_linger_timeout can schedule the thread which runs
xs_tcp_setup_socket without first claiming XPRT_LOCKED.
So I assume that is what is happening.

I imagine some race between the client closing the socket, and getting
TCP_FIN_WAIT1 from the server and somehow the two threads racing.
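
Roughly, the interleaving I have in mind is something like this (just a
sketch of the hypothesis; the exact ordering on the connecting side is a
guess):

  rpciod thread A (no XPRT_LOCKED)      rpciod thread B (XPRT_LOCKED held)
  --------------------------------      -----------------------------------
  xs_tcp_setup_socket()
    picks up transport->sock
                                        xs_close()
                                          xs_reset_transport()
                                            transport->sock = NULL
                                            sock_release(sock)
    inet_stream_connect(sock, ...)
      lock_sock_nested()    <-- socket already released, hence the oops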

I wonder if it might make sense to always abort 'connect_worker' in
xs_close()?
I think the connect_worker really mustn't be running or queued at this point,
so cancelling it is either a no-op, or vitally important.
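That would just be the usual "cancel the worker before tearing down what it
uses" pattern. A minimal sketch of that pattern, with made-up names
(demo_xprt and friends, not the real sunrpc structures):

#include <linux/jiffies.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_xprt {
	struct delayed_work connect_worker;
	void *sock;		/* stands in for whatever the worker dereferences */
};

static void demo_connect_worker(struct work_struct *work)
{
	struct demo_xprt *x = container_of(work, struct demo_xprt,
					   connect_worker.work);

	/* Only safe if teardown cannot run concurrently with this. */
	if (x->sock)
		pr_info("connecting via %p\n", x->sock);
}

static void demo_setup(struct demo_xprt *x)
{
	INIT_DELAYED_WORK(&x->connect_worker, demo_connect_worker);
	schedule_delayed_work(&x->connect_worker, HZ);
}

static void demo_close(struct demo_xprt *x)
{
	/*
	 * Removes the worker if it is still queued, or waits for it to
	 * return if it is already running, so nothing can touch x->sock
	 * after this line.  It must never be called from
	 * demo_connect_worker itself, or it would wait for its own
	 * completion.
	 */
	cancel_delayed_work_sync(&x->connect_worker);

	kfree(x->sock);		/* stand-in for sock_release() */
	x->sock = NULL;
}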

So: does the following patch seem reasonable? If so I'll submit it properly
with a coherent description etc.

Thanks,
NeilBrown



diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ee03d35..b19ba53 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -835,6 +835,8 @@ static void xs_close(struct rpc_xprt *xprt)

dprintk("RPC: xs_close xprt %p\n", xprt);

+ cancel_delayed_work_sync(&transport->connect_worker);
+
xs_reset_transport(transport);
xprt->reestablish_timeout = 0;

@@ -869,12 +871,8 @@ static void xs_local_destroy(struct rpc_xprt *xprt)
*/
static void xs_destroy(struct rpc_xprt *xprt)
{
- struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-
dprintk("RPC: xs_destroy xprt %p\n", xprt);

- cancel_delayed_work_sync(&transport->connect_worker);
-
xs_local_destroy(xprt);
}




2013-10-30 06:03:04

by NeilBrown

Subject: Re: [PATCH/RFC] - hard-to-hit race in xprtsock.

On Tue, 29 Oct 2013 15:02:36 +0000 "Myklebust, Trond"
<[email protected]> wrote:

> On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> > We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> > but the relevant code doesn't seem to have changed much).
> >
> > The thread that crashed was in
> > xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
> >
> > 'sock' in this last function is NULL.
> >
> > The only way I can imagine this happening is if some other thread called
> >
> > xs_close -> xs_reset_transport -> sock_release -> inet_release
> >
> > in a very small window a moment earlier.
> >
> > As far as I can tell, xs_close is only called with XPRT_LOCKED set.
> >
> > xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set, which would
> > exclude the two from running at the same time.
> >
> >
> > However xs_tcp_schedule_linger_timeout can schedule the thread which runs
> > xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> > So I assume that is what is happening.
> >
> > I imagine some race between the client closing the socket, and getting
> > TCP_FIN_WAIT1 from the server and somehow the two threads racing.
> >
> > I wonder if it might make sense to always abort 'connect_worker' in
> > xs_close()?
> > I think the connect_worker really mustn't be running or queued at this point,
> > so cancelling it is either a no-op, or vitally important.
> >
> > So: does the following patch seem reasonable? If so I'll submit it properly
> > with a coherent description etc.
>
> Hi Neil,
>
> Will that do the right thing if the connect_worker and close are running
> on the same rpciod thread? I think it should, but I never manage to keep
> 100% up to date with the ever changing semantics of
> cancel_delayed_work_sync() and friends...
>
> Cheers,
> Trond

Thanks for asking that! I had the exact same concern when I first conceived
the patch.

I managed to convince myself that there wasn't a problem as long as
xs_tcp_setup_socket never called into xs_close.
Otherwise the worst case is that one thread running xs_close could block
while some other thread runs xs_{tcp,udp}_setup_socket.
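
To spell out the case I was worried about, using the same made-up demo_ names
as in my previous mail (purely illustrative, not the sunrpc code): the only
way to deadlock is for the worker to call back into the close path, e.g.

	static void demo_connect_worker(struct work_struct *work)
	{
		struct demo_xprt *x = container_of(work, struct demo_xprt,
						   connect_worker.work);

		if (demo_try_connect(x) < 0)	/* hypothetical error path */
			demo_close(x);		/* cancel_delayed_work_sync()
						 * on this very work item --
						 * the thread waits for itself
						 */
	}

and xs_tcp_setup_socket never does that, so the cancel in xs_close is at
worst a brief block waiting for another rpciod thread to finish the worker.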

Thanks,
NeilBrown



2013-10-30 15:12:45

by Myklebust, Trond

Subject: Re: [PATCH/RFC] - hard-to-hit race in xprtsock.

On Wed, 2013-10-30 at 17:02 +1100, NeilBrown wrote:
> On Tue, 29 Oct 2013 15:02:36 +0000 "Myklebust, Trond"
> <Trond.Myklebust@netapp.com> wrote:
>
> > On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> > > We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> > > but the relevant code doesn't seem to have changed much).
> > >
> > > The thread that crashed was in
> > > xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
> > >
> > > 'sock' in this last function is NULL.
> > >
> > > The only way I can imagine this happening is if some other thread called
> > >
> > > xs_close -> xs_reset_transport -> sock_release -> inet_release
> > >
> > > in a very small window a moment earlier.
> > >
> > > As far as I can tell, xs_close is only called with XPRT_LOCKED set.
> > >
> > > xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set, which would
> > > exclude the two from running at the same time.
> > >
> > >
> > > However xs_tcp_schedule_linger_timeout can schedule the thread which runs
> > > xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> > > So I assume that is what is happening.
> > >
> > > I imagine some race between the client closing the socket, and getting
> > > TCP_FIN_WAIT1 from the server and somehow the two threads racing.
> > >
> > > I wonder if it might make sense to always abort 'connect_worker' in
> > > xs_close()?
> > > I think the connect_worker really mustn't be running or queued at this point,
> > > so cancelling it is either a no-op, or vitally important.
> > >
> > > So: does the following patch seem reasonable? If so I'll submit it properly
> > > with a coherent description etc.
> >
> > Hi Neil,
> >
> > Will that do the right thing if the connect_worker and close are running
> > on the same rpciod thread? I think it should, but I never manage to keep
> > 100% up to date with the ever changing semantics of
> > cancel_delayed_work_sync() and friends...
> >
> > Cheers,
> > Trond
>
> Thanks for asking that! I had the exact same concern when I first conceived
> the patch.
>
> I managed to convince myself that there wasn't a problem as long as
> xs_tcp_setup_socket never called into xs_close.
> Otherwise the worst case is that one thread running xs_close could block
> while some other thread runs xs_{tcp,udp}_setup_socket.

OK. Let's go with that then. Could you please resend as a formal patch?

Cheers,
Trond
--
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
http://www.netapp.com

2013-10-29 15:02:37

by Myklebust, Trond

Subject: Re: [PATCH/RFC] - hard-to-hit race in xprtsock.

On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> but the relevant code doesn't seem to have changed much).
>
> The thread that crashed was in
> xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
>
> 'sock' in this last function is NULL.
>
> The only way I can imagine this happening is if some other thread called
>
> xs_close -> xs_reset_transport -> sock_release -> inet_release
>
> in a very small window a moment earlier.
>
> As far as I can tell, xs_close is only called with XPRT_LOCKED set.
>
> xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set, which would
> exclude the two from running at the same time.
>
>
> However xs_tcp_schedule_linger_timeout can schedule the thread which runs
> xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> So I assume that is what is happening.
>
> I imagine some race between the client closing the socket, and getting
> TCP_FIN_WAIT1 from the server and somehow the two threads racing.
>
> I wonder if it might make sense to always abort 'connect_worker' in
> xs_close()?
> I think the connect_worker really mustn't be running or queued at this point,
> so cancelling it is either a no-op, or vitally important.
>
> So: does the following patch seem reasonable? If so I'll submit it properly
> with a coherent description etc.

Hi Neil,

Will that do the right thing if the connect_worker and close are running
on the same rpciod thread? I think it should, but I never manage to keep
100% up to date with the ever changing semantics of
cancel_delayed_work_sync() and friends...

Cheers,
Trond
--
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
http://www.netapp.com