2015-09-15 15:49:27

by Suzuki K Poulose

Subject: [PATCH] SUNRPC: Fix a race in xs_reset_transport

From: "Suzuki K. Poulose" <[email protected]>

Encountered the following BUG() with 4.3-rc1 on a fast model
for arm64 with NFS root filesystem.

------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!

Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
Hardware name: FVP Base (DT)
task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
sp : ffffc00076073790
x29: ffffc00076073790 x28: ffffc00076073b40
x27: 00000000000003e8 x26: ffffc00076955000
x25: 000000000000000c x24: ffffc00076637200
x23: ffffc00076073930 x22: ffffc000769b8180
x21: ffffc000740500a8 x20: ffffc00074050158
x19: ffffc00074050030 x18: 000000009fcef6bf
x17: 00000000593e3df5 x16: 00000000b597f71d
x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 00000000000001c9 x10: 0000000000000750
x9 : ffffc00076073670 x8 : ffffc000760b07b0
x7 : 0000000000000001 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000ffffffff
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffffc00076070000 x0 : 0000000000000060

Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
Stack: (0xffffc00076073790 to 0xffffc00076074000)

[ stack contents stripped ]

Call trace:
[<ffffc000001b4920>] iput+0x144/0x170
[<ffffc000004d1970>] sock_release+0xb8/0xdc
[<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
[<ffffc00000578e60>] xs_close+0x50/0x6c
[<ffffc00000578e9c>] xs_destroy+0x20/0x5c
[<ffffc00000575f70>] xprt_destroy+0x68/0x8c
[<ffffc0000057777c>] xprt_put+0x24/0x30
[<ffffc000005726c4>] rpc_free_client+0x78/0xd8
[<ffffc0000057288c>] rpc_release_client+0x94/0xec
[<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
[<ffffc00000278588>] nfs_mount+0x100/0x234
[<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
[<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
[<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
[<ffffc0000019f1a0>] mount_fs+0x38/0x158
[<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
[<ffffc000001bb390>] do_mount+0x208/0xc04
[<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
[<ffffc000007f0fa8>] mount_root+0x80/0x148
[<ffffc000007f11a8>] prepare_namespace+0x138/0x184
[<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffffc000005a2914>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 02951451f1831f54 ]---

With rpc_debug enabled here is the log :

RPC: shutting down mount client for your.nfs.server
RPC: rpc_release_client(ffffc00076637800)
RPC: destroying UNIX authenticator ffffc000008f48c8
RPC: destroying mount client for your.nfs.server
RPC: destroying transport ffffc00076226000
RPC: xs_destroy xprt ffffc00076226000
RPC: xs_close xprt ffffc00076226000
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: xs_tcp_data_ready...
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: wake_up_first(ffffc00076226170 "xprt_sending")

So it looks like, just before we take the callback lock in xs_reset_transport,
a few of the callbacks got through and performed the reset before we could
lock it. We then repeat the cleanup, which triggers the BUG() above via a
second sock_release().
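
To illustrate, a simplified sketch of one possible interleaving ("A" being
the destroy path, "B" whichever other path ends up in xs_reset_transport(),
e.g. via the socket callbacks); this is an illustration, not lifted from
the trace above:

  A: sock = transport->sock;               /* non-NULL */
  B: sock = transport->sock;               /* same socket */
  B: write_lock_bh(&sk->sk_callback_lock);
  B: transport->sock = NULL; ...
  B: write_unlock_bh(&sk->sk_callback_lock);
  B: sock_release(sock);                   /* first release */
  A: write_lock_bh(&sk->sk_callback_lock);
  A: transport->sock = NULL; ...
  A: write_unlock_bh(&sk->sk_callback_lock);
  A: sock_release(sock);                   /* second release -> BUG() in iput() */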

This patch fixes the race by checking that somebody else hasn't already
performed the reset while we were waiting for the lock. Also,
kernel_sock_shutdown() is called only if the sock is non-NULL, to avoid a
possible race.

Signed-off-by: Suzuki K. Poulose <[email protected]>
---
net/sunrpc/xprtsock.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc..6f4789d 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
 
-	kernel_sock_shutdown(sock, SHUT_RDWR);
+	if (sock)
+		kernel_sock_shutdown(sock, SHUT_RDWR);
 
 	write_lock_bh(&sk->sk_callback_lock);
+	/* Check someone has already done the job, while we were waiting */
+	if (!transport->inet) {
+		write_unlock_bh(&sk->sk_callback_lock);
+		return;
+	}
+
 	transport->inet = NULL;
 	transport->sock = NULL;
 
-- 
1.7.9.5



2015-09-15 18:52:38

by Jeff Layton

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On Tue, 15 Sep 2015 16:49:23 +0100
"Suzuki K. Poulose" <[email protected]> wrote:

> From: "Suzuki K. Poulose" <[email protected]>
>
> Encountered the following BUG() with 4.3-rc1 on a fast model
> for arm64 with NFS root filesystem.
>
> ------------[ cut here ]------------
> kernel BUG at fs/inode.c:1493!
>
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
> Hardware name: FVP Base (DT)
> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
> PC is at iput+0x144/0x170
> LR is at sock_release+0xbc/0xdc
> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
> sp : ffffc00076073790
> x29: ffffc00076073790 x28: ffffc00076073b40
> x27: 00000000000003e8 x26: ffffc00076955000
> x25: 000000000000000c x24: ffffc00076637200
> x23: ffffc00076073930 x22: ffffc000769b8180
> x21: ffffc000740500a8 x20: ffffc00074050158
> x19: ffffc00074050030 x18: 000000009fcef6bf
> x17: 00000000593e3df5 x16: 00000000b597f71d
> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
> x13: 0000000000000020 x12: 0101010101010101
> x11: 00000000000001c9 x10: 0000000000000750
> x9 : ffffc00076073670 x8 : ffffc000760b07b0
> x7 : 0000000000000001 x6 : 0000000000000001
> x5 : 0000000000000000 x4 : 00000000ffffffff
> x3 : 0000000000000000 x2 : ffffffffffffffff
> x1 : ffffc00076070000 x0 : 0000000000000060
>
> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
> Stack: (0xffffc00076073790 to 0xffffc00076074000)
>
> [ stack contents stripped ]
>
> Call trace:
> [<ffffc000001b4920>] iput+0x144/0x170
> [<ffffc000004d1970>] sock_release+0xb8/0xdc
> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
> [<ffffc00000578e60>] xs_close+0x50/0x6c
> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
> [<ffffc0000057777c>] xprt_put+0x24/0x30
> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
> [<ffffc00000278588>] nfs_mount+0x100/0x234
> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
> [<ffffc000001bb390>] do_mount+0x208/0xc04
> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
> [<ffffc000007f0fa8>] mount_root+0x80/0x148
> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
> [<ffffc000005a2914>] kernel_init+0xc/0xd8
> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
> ---[ end trace 02951451f1831f54 ]---
>
> With rpc_debug enabled here is the log :
>
> RPC: shutting down mount client for your.nfs.server
> RPC: rpc_release_client(ffffc00076637800)
> RPC: destroying UNIX authenticator ffffc000008f48c8
> RPC: destroying mount client for your.nfs.server
> RPC: destroying transport ffffc00076226000
> RPC: xs_destroy xprt ffffc00076226000
> RPC: xs_close xprt ffffc00076226000
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: xs_tcp_data_ready...
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: wake_up_first(ffffc00076226170 "xprt_sending")
>
> So it looks like just before we lock the callbacks in xs_reset_transport,
> a few of the callbacks got through and issued the reset before we could
> lock it. And we end up repeating the cleanups, ending up in the above
> BUG() due to multiple sock_release().
>
> This patch fixes the race by confirming that somebody else hasn't performed
> the reset while we were waiting for the lock. Also, the kernel_shutdown()
> is performed only if the sock is non-NULL to avoid a possible race.
>
> Signed-off-by: Suzuki K. Poulose <[email protected]>
> ---
> net/sunrpc/xprtsock.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc..6f4789d 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
> if (atomic_read(&transport->xprt.swapper))
> sk_clear_memalloc(sk);
>
> - kernel_sock_shutdown(sock, SHUT_RDWR);
> + if (sock)
> + kernel_sock_shutdown(sock, SHUT_RDWR);
>

Good catch, but...isn't this still racy? What prevents transport->sock
being set to NULL after you assign it to "sock" but before calling
kernel_sock_shutdown? You might end up calling that on a socket that
has already had sock_release called on it. I believe that would be a bad
thing.

What might be better is to move the assignment to sock inside the
spinlock and then call kernel socket shutdown after dropping the
spinlock. Something like:

write_lock_bh(&sk->sk_callback_lock);
...
sock = transport->sock;
transport->sock = NULL;
...
write_unlock_bh(&sk->sk_callback_lock);
if (sock)
kernel_sock_shutdown(sock, SHUT_RDWR);

I think that'd be safe...

> write_lock_bh(&sk->sk_callback_lock);
> + /* Check someone has already done the job, while we were waiting */
> + if (!transport->inet) {
> + write_unlock_bh(&sk->sk_callback_lock);
> + return;
> + }
> +
> transport->inet = NULL;
> transport->sock = NULL;
>


--
Jeff Layton <[email protected]>

2015-09-16 08:08:12

by Suzuki K Poulose

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On 15/09/15 19:52, Jeff Layton wrote:
> On Tue, 15 Sep 2015 16:49:23 +0100
> "Suzuki K. Poulose" <[email protected]> wrote:
>
>> From: "Suzuki K. Poulose" <[email protected]>
>>
>> Encountered the following BUG() with 4.3-rc1 on a fast model
>> for arm64 with NFS root filesystem.
>>
>> ------------[ cut here ]------------
>> kernel BUG at fs/inode.c:1493!
>>
>> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
>> Modules linked in:
>> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
>> Hardware name: FVP Base (DT)
>> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
>> PC is at iput+0x144/0x170
>> LR is at sock_release+0xbc/0xdc
>> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
>> sp : ffffc00076073790
>> x29: ffffc00076073790 x28: ffffc00076073b40
>> x27: 00000000000003e8 x26: ffffc00076955000
>> x25: 000000000000000c x24: ffffc00076637200
>> x23: ffffc00076073930 x22: ffffc000769b8180
>> x21: ffffc000740500a8 x20: ffffc00074050158
>> x19: ffffc00074050030 x18: 000000009fcef6bf
>> x17: 00000000593e3df5 x16: 00000000b597f71d
>> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
>> x13: 0000000000000020 x12: 0101010101010101
>> x11: 00000000000001c9 x10: 0000000000000750
>> x9 : ffffc00076073670 x8 : ffffc000760b07b0
>> x7 : 0000000000000001 x6 : 0000000000000001
>> x5 : 0000000000000000 x4 : 00000000ffffffff
>> x3 : 0000000000000000 x2 : ffffffffffffffff
>> x1 : ffffc00076070000 x0 : 0000000000000060
>>
>> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
>> Stack: (0xffffc00076073790 to 0xffffc00076074000)
>>
>> [ stack contents stripped ]
>>
>> Call trace:
>> [<ffffc000001b4920>] iput+0x144/0x170
>> [<ffffc000004d1970>] sock_release+0xb8/0xdc
>> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
>> [<ffffc00000578e60>] xs_close+0x50/0x6c
>> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
>> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
>> [<ffffc0000057777c>] xprt_put+0x24/0x30
>> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
>> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
>> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
>> [<ffffc00000278588>] nfs_mount+0x100/0x234
>> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
>> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
>> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
>> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
>> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
>> [<ffffc000001bb390>] do_mount+0x208/0xc04
>> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
>> [<ffffc000007f0fa8>] mount_root+0x80/0x148
>> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
>> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
>> [<ffffc000005a2914>] kernel_init+0xc/0xd8
>> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
>> ---[ end trace 02951451f1831f54 ]---
>>
>> With rpc_debug enabled here is the log :
>>
>> RPC: shutting down mount client for your.nfs.server
>> RPC: rpc_release_client(ffffc00076637800)
>> RPC: destroying UNIX authenticator ffffc000008f48c8
>> RPC: destroying mount client for your.nfs.server
>> RPC: destroying transport ffffc00076226000
>> RPC: xs_destroy xprt ffffc00076226000
>> RPC: xs_close xprt ffffc00076226000
>> RPC: xs_tcp_state_change client ffffc00076226000...
>> RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
>> RPC: xs_tcp_state_change client ffffc00076226000...
>> RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC: xs_tcp_state_change client ffffc00076226000...
>> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC: disconnected transport ffffc00076226000
>> RPC: xs_tcp_state_change client ffffc00076226000...
>> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC: disconnected transport ffffc00076226000
>> RPC: xs_tcp_data_ready...
>> RPC: xs_tcp_state_change client ffffc00076226000...
>> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
>> RPC: disconnected transport ffffc00076226000
>> RPC: wake_up_first(ffffc00076226170 "xprt_sending")
>>
>> So it looks like just before we lock the callbacks in xs_reset_transport,
>> a few of the callbacks got through and issued the reset before we could
>> lock it. And we end up repeating the cleanups, ending up in the above
>> BUG() due to multiple sock_release().
>>
>> This patch fixes the race by confirming that somebody else hasn't performed
>> the reset while we were waiting for the lock. Also, the kernel_shutdown()
>> is performed only if the sock is non-NULL to avoid a possible race.
>>
>> Signed-off-by: Suzuki K. Poulose <[email protected]>
>> ---
>> net/sunrpc/xprtsock.c | 9 ++++++++-
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>> index 7be90bc..6f4789d 100644
>> --- a/net/sunrpc/xprtsock.c
>> +++ b/net/sunrpc/xprtsock.c
>> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
>> if (atomic_read(&transport->xprt.swapper))
>> sk_clear_memalloc(sk);
>>
>> - kernel_sock_shutdown(sock, SHUT_RDWR);
>> + if (sock)
>> + kernel_sock_shutdown(sock, SHUT_RDWR);
>>
>
> Good catch, but...isn't this still racy? What prevents transport->sock
> being set to NULL after you assign it to "sock" but before calling
> kernel_sock_shutdown? You might end up calling that on a socket that
> has already had sock_release called on it. I believe that would be a bad
> thing.

You are right. I did try moving the kernel_sock_shutdown() under the lock, but
that caused lockups, so I left it where it was. I should have paid more attention
to the kernel_sock_shutdown() call, which I assumed would be safe :). Thanks for
spotting it. I will send an updated version.


Thanks
Suzuki


2015-09-16 09:04:46

by Suzuki K Poulose

Subject: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

From: "Suzuki K. Poulose" <[email protected]>

Encountered the following BUG() with 4.3-rc1 on a fast model
for arm64 with NFS root filesystem.

------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!

Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
Hardware name: FVP Base (DT)
task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
sp : ffffc00076073790
x29: ffffc00076073790 x28: ffffc00076073b40
x27: 00000000000003e8 x26: ffffc00076955000
x25: 000000000000000c x24: ffffc00076637200
x23: ffffc00076073930 x22: ffffc000769b8180
x21: ffffc000740500a8 x20: ffffc00074050158
x19: ffffc00074050030 x18: 000000009fcef6bf
x17: 00000000593e3df5 x16: 00000000b597f71d
x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 00000000000001c9 x10: 0000000000000750
x9 : ffffc00076073670 x8 : ffffc000760b07b0
x7 : 0000000000000001 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000ffffffff
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffffc00076070000 x0 : 0000000000000060

Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
Stack: (0xffffc00076073790 to 0xffffc00076074000)

[ stack contents stripped ]

Call trace:
[<ffffc000001b4920>] iput+0x144/0x170
[<ffffc000004d1970>] sock_release+0xb8/0xdc
[<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
[<ffffc00000578e60>] xs_close+0x50/0x6c
[<ffffc00000578e9c>] xs_destroy+0x20/0x5c
[<ffffc00000575f70>] xprt_destroy+0x68/0x8c
[<ffffc0000057777c>] xprt_put+0x24/0x30
[<ffffc000005726c4>] rpc_free_client+0x78/0xd8
[<ffffc0000057288c>] rpc_release_client+0x94/0xec
[<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
[<ffffc00000278588>] nfs_mount+0x100/0x234
[<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
[<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
[<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
[<ffffc0000019f1a0>] mount_fs+0x38/0x158
[<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
[<ffffc000001bb390>] do_mount+0x208/0xc04
[<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
[<ffffc000007f0fa8>] mount_root+0x80/0x148
[<ffffc000007f11a8>] prepare_namespace+0x138/0x184
[<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffffc000005a2914>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 02951451f1831f54 ]---

With rpc_debug enabled here is the log :

RPC: shutting down mount client for your.nfs.server
RPC: rpc_release_client(ffffc00076637800)
RPC: destroying UNIX authenticator ffffc000008f48c8
RPC: destroying mount client for your.nfs.server
RPC: destroying transport ffffc00076226000
RPC: xs_destroy xprt ffffc00076226000
RPC: xs_close xprt ffffc00076226000
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: xs_tcp_data_ready...
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: wake_up_first(ffffc00076226170 "xprt_sending")

So it looks like, just before we take the callback lock in xs_reset_transport,
a few of the callbacks got through and performed the reset before we could
lock it. We then repeat the cleanup, which triggers the BUG() above via a
second sock_release().

This patch fixes the race by checking that somebody else hasn't already
performed the reset while we were waiting for the lock. Also,
kernel_sock_shutdown() is called only if the sock is non-NULL, to avoid a
possible race.

Signed-off-by: Suzuki K. Poulose <[email protected]>
---
net/sunrpc/xprtsock.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc..6f4789d 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
 
-	kernel_sock_shutdown(sock, SHUT_RDWR);
+	if (sock)
+		kernel_sock_shutdown(sock, SHUT_RDWR);
 
 	write_lock_bh(&sk->sk_callback_lock);
+	/* Check someone has already done the job, while we were waiting */
+	if (!transport->inet) {
+		write_unlock_bh(&sk->sk_callback_lock);
+		return;
+	}
+
 	transport->inet = NULL;
 	transport->sock = NULL;
 
-- 
1.7.9.5


2015-09-16 09:35:58

by Suzuki K Poulose

Subject: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

From: "Suzuki K. Poulose" <[email protected]>

Ignore the previous patch, which was really v1.

---

Encountered the following BUG() with 4.3-rc1 on a fast model
for arm64 with NFS root filesystem.

------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!

Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
Hardware name: FVP Base (DT)
task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
sp : ffffc00076073790
x29: ffffc00076073790 x28: ffffc00076073b40
x27: 00000000000003e8 x26: ffffc00076955000
x25: 000000000000000c x24: ffffc00076637200
x23: ffffc00076073930 x22: ffffc000769b8180
x21: ffffc000740500a8 x20: ffffc00074050158
x19: ffffc00074050030 x18: 000000009fcef6bf
x17: 00000000593e3df5 x16: 00000000b597f71d
x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 00000000000001c9 x10: 0000000000000750
x9 : ffffc00076073670 x8 : ffffc000760b07b0
x7 : 0000000000000001 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000ffffffff
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffffc00076070000 x0 : 0000000000000060

Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
Stack: (0xffffc00076073790 to 0xffffc00076074000)

[ stack contents stripped ]

Call trace:
[<ffffc000001b4920>] iput+0x144/0x170
[<ffffc000004d1970>] sock_release+0xb8/0xdc
[<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
[<ffffc00000578e60>] xs_close+0x50/0x6c
[<ffffc00000578e9c>] xs_destroy+0x20/0x5c
[<ffffc00000575f70>] xprt_destroy+0x68/0x8c
[<ffffc0000057777c>] xprt_put+0x24/0x30
[<ffffc000005726c4>] rpc_free_client+0x78/0xd8
[<ffffc0000057288c>] rpc_release_client+0x94/0xec
[<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
[<ffffc00000278588>] nfs_mount+0x100/0x234
[<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
[<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
[<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
[<ffffc0000019f1a0>] mount_fs+0x38/0x158
[<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
[<ffffc000001bb390>] do_mount+0x208/0xc04
[<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
[<ffffc000007f0fa8>] mount_root+0x80/0x148
[<ffffc000007f11a8>] prepare_namespace+0x138/0x184
[<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffffc000005a2914>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 02951451f1831f54 ]---

With rpc_debug enabled here is the log :

RPC: shutting down mount client for your.nfs.server
RPC: rpc_release_client(ffffc00076637800)
RPC: destroying UNIX authenticator ffffc000008f48c8
RPC: destroying mount client for your.nfs.server
RPC: destroying transport ffffc00076226000
RPC: xs_destroy xprt ffffc00076226000
RPC: xs_close xprt ffffc00076226000
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: xs_tcp_data_ready...
RPC: xs_tcp_state_change client ffffc00076226000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffffc00076226000
RPC: wake_up_first(ffffc00076226170 "xprt_sending")

So it looks like, just before we take the callback lock in xs_reset_transport,
a few of the callbacks got through and performed the reset before we could
lock it. We then repeat the cleanup, which triggers the BUG() above via a
second sock_release().

This patch fixes the race by checking that somebody else hasn't already
performed the reset while we were waiting for the lock. Also, transport->sock
is now only accessed within the lock, to prevent using a stale sock for
kernel_sock_shutdown().

Signed-off-by: Suzuki K. Poulose <[email protected]>
---
Changes since V1:
- Prevent race in accessing sock.
---
net/sunrpc/xprtsock.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc..b95f5d0 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -812,7 +812,7 @@ static void xs_error_report(struct sock *sk)
 
 static void xs_reset_transport(struct sock_xprt *transport)
 {
-	struct socket *sock = transport->sock;
+	struct socket *sock;
 	struct sock *sk = transport->inet;
 	struct rpc_xprt *xprt = &transport->xprt;
 
@@ -822,9 +822,14 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
 
-	kernel_sock_shutdown(sock, SHUT_RDWR);
-
 	write_lock_bh(&sk->sk_callback_lock);
+	/* Check someone has already done the job, while we were waiting */
+	if (!transport->inet) {
+		write_unlock_bh(&sk->sk_callback_lock);
+		return;
+	}
+	sock = transport->sock;
+
 	transport->inet = NULL;
 	transport->sock = NULL;
 
@@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	xs_restore_old_callbacks(transport, sk);
 	xprt_clear_connected(xprt);
 	write_unlock_bh(&sk->sk_callback_lock);
+
+	if (sock)
+		kernel_sock_shutdown(sock, SHUT_RDWR);
+
 	xs_sock_reset_connection_flags(xprt);
 
 	trace_rpc_socket_close(xprt, sock);
-- 
1.7.9.5


2015-09-16 09:48:13

by Marc Zyngier

Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On 16/09/15 10:35, Suzuki K. Poulose wrote:
> From: "Suzuki K. Poulose" <[email protected]>
>
> Ignore the previous patch, which was really v1.
>
> ---
>
> Encountered the following BUG() with 4.3-rc1 on a fast model
> for arm64 with NFS root filesystem.
>
> ------------[ cut here ]------------
> kernel BUG at fs/inode.c:1493!
>
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
> Hardware name: FVP Base (DT)
> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
> PC is at iput+0x144/0x170
> LR is at sock_release+0xbc/0xdc
> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
> sp : ffffc00076073790
> x29: ffffc00076073790 x28: ffffc00076073b40
> x27: 00000000000003e8 x26: ffffc00076955000
> x25: 000000000000000c x24: ffffc00076637200
> x23: ffffc00076073930 x22: ffffc000769b8180
> x21: ffffc000740500a8 x20: ffffc00074050158
> x19: ffffc00074050030 x18: 000000009fcef6bf
> x17: 00000000593e3df5 x16: 00000000b597f71d
> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
> x13: 0000000000000020 x12: 0101010101010101
> x11: 00000000000001c9 x10: 0000000000000750
> x9 : ffffc00076073670 x8 : ffffc000760b07b0
> x7 : 0000000000000001 x6 : 0000000000000001
> x5 : 0000000000000000 x4 : 00000000ffffffff
> x3 : 0000000000000000 x2 : ffffffffffffffff
> x1 : ffffc00076070000 x0 : 0000000000000060
>
> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
> Stack: (0xffffc00076073790 to 0xffffc00076074000)
>
> [ stack contents stripped ]
>
> Call trace:
> [<ffffc000001b4920>] iput+0x144/0x170
> [<ffffc000004d1970>] sock_release+0xb8/0xdc
> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
> [<ffffc00000578e60>] xs_close+0x50/0x6c
> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
> [<ffffc0000057777c>] xprt_put+0x24/0x30
> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
> [<ffffc00000278588>] nfs_mount+0x100/0x234
> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
> [<ffffc000001bb390>] do_mount+0x208/0xc04
> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
> [<ffffc000007f0fa8>] mount_root+0x80/0x148
> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
> [<ffffc000005a2914>] kernel_init+0xc/0xd8
> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
> ---[ end trace 02951451f1831f54 ]---
>
> With rpc_debug enabled here is the log :
>
> RPC: shutting down mount client for your.nfs.server
> RPC: rpc_release_client(ffffc00076637800)
> RPC: destroying UNIX authenticator ffffc000008f48c8
> RPC: destroying mount client for your.nfs.server
> RPC: destroying transport ffffc00076226000
> RPC: xs_destroy xprt ffffc00076226000
> RPC: xs_close xprt ffffc00076226000
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: xs_tcp_data_ready...
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: wake_up_first(ffffc00076226170 "xprt_sending")
>
> So it looks like just before we lock the callbacks in xs_reset_transport,
> a few of the callbacks got through and issued the reset before we could
> lock it. And we end up repeating the cleanups, ending up in the above
> BUG() due to multiple sock_release().
>
> This patch fixes the race by confirming that somebody else hasn't performed
> the reset while we were waiting for the lock. Also, the transport->sock is
> only accessed within the lock to prevent using a stale sock for kernel_sock_shutdown().
>
> Signed-off-by: Suzuki K. Poulose <[email protected]>
> ---
> Changes since V1:
> - Prevent race in accessing sock.
> ---
> net/sunrpc/xprtsock.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc..b95f5d0 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -812,7 +812,7 @@ static void xs_error_report(struct sock *sk)
>
> static void xs_reset_transport(struct sock_xprt *transport)
> {
> - struct socket *sock = transport->sock;
> + struct socket *sock;
> struct sock *sk = transport->inet;
> struct rpc_xprt *xprt = &transport->xprt;
>
> @@ -822,9 +822,14 @@ static void xs_reset_transport(struct sock_xprt *transport)
> if (atomic_read(&transport->xprt.swapper))
> sk_clear_memalloc(sk);
>
> - kernel_sock_shutdown(sock, SHUT_RDWR);
> -
> write_lock_bh(&sk->sk_callback_lock);
> + /* Check someone has already done the job, while we were waiting */
> + if (!transport->inet) {
> + write_unlock_bh(&sk->sk_callback_lock);
> + return;
> + }
> + sock = transport->sock;
> +
> transport->inet = NULL;
> transport->sock = NULL;
>
> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
> xs_restore_old_callbacks(transport, sk);
> xprt_clear_connected(xprt);
> write_unlock_bh(&sk->sk_callback_lock);
> +
> + if (sock)
> + kernel_sock_shutdown(sock, SHUT_RDWR);
> +
> xs_sock_reset_connection_flags(xprt);
>
> trace_rpc_socket_close(xprt, sock);
>

This patch resurrects NFS on my arm64 model, so thanks for that!

Tested-by: Marc Zyngier <[email protected]>

M.
--
Jazz is not dead. It just smells funny...

2015-09-16 11:17:35

by Jeff Layton

Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On Wed, 16 Sep 2015 10:35:49 +0100
"Suzuki K. Poulose" <[email protected]> wrote:

> From: "Suzuki K. Poulose" <[email protected]>
>
> Ignore the previous patch, which was really v1.
>
> ---
>
> Encountered the following BUG() with 4.3-rc1 on a fast model
> for arm64 with NFS root filesystem.
>
> ------------[ cut here ]------------
> kernel BUG at fs/inode.c:1493!
>
> Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> Modules linked in:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 4.3.0-rc1+ #855
> Hardware name: FVP Base (DT)
> task: ffffc000760b0000 ti: ffffc00076070000 task.ti: ffffc00076070000
> PC is at iput+0x144/0x170
> LR is at sock_release+0xbc/0xdc
> pc : [<ffffc000001b4920>] lr : [<ffffc000004d1974>] pstate: 40000045
> sp : ffffc00076073790
> x29: ffffc00076073790 x28: ffffc00076073b40
> x27: 00000000000003e8 x26: ffffc00076955000
> x25: 000000000000000c x24: ffffc00076637200
> x23: ffffc00076073930 x22: ffffc000769b8180
> x21: ffffc000740500a8 x20: ffffc00074050158
> x19: ffffc00074050030 x18: 000000009fcef6bf
> x17: 00000000593e3df5 x16: 00000000b597f71d
> x15: 00000000e2f9d3f6 x14: 0ffffffffffffffe
> x13: 0000000000000020 x12: 0101010101010101
> x11: 00000000000001c9 x10: 0000000000000750
> x9 : ffffc00076073670 x8 : ffffc000760b07b0
> x7 : 0000000000000001 x6 : 0000000000000001
> x5 : 0000000000000000 x4 : 00000000ffffffff
> x3 : 0000000000000000 x2 : ffffffffffffffff
> x1 : ffffc00076070000 x0 : 0000000000000060
>
> Process swapper/0 (pid: 1, stack limit = 0xffffc00076070020)
> Stack: (0xffffc00076073790 to 0xffffc00076074000)
>
> [ stack contents stripped ]
>
> Call trace:
> [<ffffc000001b4920>] iput+0x144/0x170
> [<ffffc000004d1970>] sock_release+0xb8/0xdc
> [<ffffc00000578df0>] xs_reset_transport+0x8c/0xac
> [<ffffc00000578e60>] xs_close+0x50/0x6c
> [<ffffc00000578e9c>] xs_destroy+0x20/0x5c
> [<ffffc00000575f70>] xprt_destroy+0x68/0x8c
> [<ffffc0000057777c>] xprt_put+0x24/0x30
> [<ffffc000005726c4>] rpc_free_client+0x78/0xd8
> [<ffffc0000057288c>] rpc_release_client+0x94/0xec
> [<ffffc00000572aac>] rpc_shutdown_client+0x58/0x118
> [<ffffc00000278588>] nfs_mount+0x100/0x234
> [<ffffc0000026cc88>] nfs_request_mount+0xa8/0x12c
> [<ffffc0000026e564>] nfs_try_mount+0x54/0x2b4
> [<ffffc0000026f140>] nfs_fs_mount+0x5cc/0xac0
> [<ffffc0000019f1a0>] mount_fs+0x38/0x158
> [<ffffc000001b81a8>] vfs_kern_mount+0x48/0x11c
> [<ffffc000001bb390>] do_mount+0x208/0xc04
> [<ffffc000001bc0b0>] SyS_mount+0x78/0xd0
> [<ffffc000007f0fa8>] mount_root+0x80/0x148
> [<ffffc000007f11a8>] prepare_namespace+0x138/0x184
> [<ffffc000007f0b20>] kernel_init_freeable+0x1cc/0x1f4
> [<ffffc000005a2914>] kernel_init+0xc/0xd8
> Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
> ---[ end trace 02951451f1831f54 ]---
>
> With rpc_debug enabled here is the log :
>
> RPC: shutting down mount client for your.nfs.server
> RPC: rpc_release_client(ffffc00076637800)
> RPC: destroying UNIX authenticator ffffc000008f48c8
> RPC: destroying mount client for your.nfs.server
> RPC: destroying transport ffffc00076226000
> RPC: xs_destroy xprt ffffc00076226000
> RPC: xs_close xprt ffffc00076226000
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: xs_tcp_data_ready...
> RPC: xs_tcp_state_change client ffffc00076226000...
> RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
> RPC: disconnected transport ffffc00076226000
> RPC: wake_up_first(ffffc00076226170 "xprt_sending")
>
> So it looks like just before we lock the callbacks in xs_reset_transport,
> a few of the callbacks got through and issued the reset before we could
> lock it. And we end up repeating the cleanups, ending up in the above
> BUG() due to multiple sock_release().
>
> This patch fixes the race by confirming that somebody else hasn't performed
> the reset while we were waiting for the lock. Also, the transport->sock is
> only accessed within the lock to prevent using a stale sock for kernel_sock_shutdown().
>
> Signed-off-by: Suzuki K. Poulose <[email protected]>
> ---
> Changes since V1:
> - Prevent race in accessing sock.
> ---
> net/sunrpc/xprtsock.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc..b95f5d0 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -812,7 +812,7 @@ static void xs_error_report(struct sock *sk)
>
> static void xs_reset_transport(struct sock_xprt *transport)
> {
> - struct socket *sock = transport->sock;
> + struct socket *sock;
> struct sock *sk = transport->inet;
> struct rpc_xprt *xprt = &transport->xprt;
>
> @@ -822,9 +822,14 @@ static void xs_reset_transport(struct sock_xprt *transport)
> if (atomic_read(&transport->xprt.swapper))
> sk_clear_memalloc(sk);
>
> - kernel_sock_shutdown(sock, SHUT_RDWR);
> -
> write_lock_bh(&sk->sk_callback_lock);
> + /* Check someone has already done the job, while we were waiting */
> + if (!transport->inet) {
> + write_unlock_bh(&sk->sk_callback_lock);
> + return;
> + }
> + sock = transport->sock;
> +
> transport->inet = NULL;
> transport->sock = NULL;
>
> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
> xs_restore_old_callbacks(transport, sk);
> xprt_clear_connected(xprt);
> write_unlock_bh(&sk->sk_callback_lock);
> +
> + if (sock)
> + kernel_sock_shutdown(sock, SHUT_RDWR);
> +
> xs_sock_reset_connection_flags(xprt);
>
> trace_rpc_socket_close(xprt, sock);

Better, but now I'm wondering...is it problematic to restore the old
callbacks before calling kernel_sock_shutdown? I can't quite tell
whether it matters in all cases.

It might be best to just go ahead and take the spinlock twice here. Do
it once to clear the transport->sock pointer, call
kernel_sock_shutdown, and then take it again to restore the old
callbacks, etc.
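
Roughly, an untested sketch of that two-pass idea (same function body, just
reordered):

	write_lock_bh(&sk->sk_callback_lock);
	sock = transport->sock;
	transport->sock = NULL;
	write_unlock_bh(&sk->sk_callback_lock);

	if (sock)
		kernel_sock_shutdown(sock, SHUT_RDWR);

	write_lock_bh(&sk->sk_callback_lock);
	transport->inet = NULL;
	sk->sk_user_data = NULL;
	xs_restore_old_callbacks(transport, sk);
	xprt_clear_connected(xprt);
	write_unlock_bh(&sk->sk_callback_lock);
	...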

I don't know though...I get the feeling there are races all over the
place in this code. It seems like there's a similar one wrt the
transport->inet pointer. It seems a little silly that we clear it under
the sk->sk_callback_lock. You have to dereference that pointer
in order to get to the lock.

Maybe the right solution is to use an xchg to swap the inet pointer
with NULL so it can act as a gatekeeper. Whoever gets there first does
the rest of the shutdown.

Something like this maybe? Would this also fix the original problem?
Note that this patch is untested...

[PATCH] sunrpc: use xchg to fetch and clear the transport->inet pointer in xs_reset_transport

Reported-by: "Suzuki K. Poulose" <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
---
net/sunrpc/xprtsock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 7be90bc1a7c2..57f79dcab493 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
static void xs_reset_transport(struct sock_xprt *transport)
{
struct socket *sock = transport->sock;
- struct sock *sk = transport->inet;
+ struct sock *sk;
struct rpc_xprt *xprt = &transport->xprt;

+ sk = xchg(&transport->inet, NULL);
if (sk == NULL)
return;

@@ -825,7 +826,6 @@ static void xs_reset_transport(struct sock_xprt *transport)
kernel_sock_shutdown(sock, SHUT_RDWR);

write_lock_bh(&sk->sk_callback_lock);
- transport->inet = NULL;
transport->sock = NULL;

sk->sk_user_data = NULL;
--
2.4.3


2015-09-17 13:38:33

by Trond Myklebust

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <[email protected]> wrote:
> On Tue, 15 Sep 2015 16:49:23 +0100
> "Suzuki K. Poulose" <[email protected]> wrote:
>
>> net/sunrpc/xprtsock.c | 9 ++++++++-
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>> index 7be90bc..6f4789d 100644
>> --- a/net/sunrpc/xprtsock.c
>> +++ b/net/sunrpc/xprtsock.c
>> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
>> if (atomic_read(&transport->xprt.swapper))
>> sk_clear_memalloc(sk);
>>
>> - kernel_sock_shutdown(sock, SHUT_RDWR);
>> + if (sock)
>> + kernel_sock_shutdown(sock, SHUT_RDWR);
>>
>
> Good catch, but...isn't this still racy? What prevents transport->sock
> being set to NULL after you assign it to "sock" but before calling
> kernel_sock_shutdown?

The XPRT_LOCKED state.

Cheers
Trond

2015-09-17 14:18:51

by Jeff Layton

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On Thu, 17 Sep 2015 09:38:33 -0400
Trond Myklebust <[email protected]> wrote:

> On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <[email protected]> wrote:
> > On Tue, 15 Sep 2015 16:49:23 +0100
> > "Suzuki K. Poulose" <[email protected]> wrote:
> >
> >> net/sunrpc/xprtsock.c | 9 ++++++++-
> >> 1 file changed, 8 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> >> index 7be90bc..6f4789d 100644
> >> --- a/net/sunrpc/xprtsock.c
> >> +++ b/net/sunrpc/xprtsock.c
> >> @@ -822,9 +822,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
> >> if (atomic_read(&transport->xprt.swapper))
> >> sk_clear_memalloc(sk);
> >>
> >> - kernel_sock_shutdown(sock, SHUT_RDWR);
> >> + if (sock)
> >> + kernel_sock_shutdown(sock, SHUT_RDWR);
> >>
> >
> > Good catch, but...isn't this still racy? What prevents transport->sock
> > being set to NULL after you assign it to "sock" but before calling
> > kernel_sock_shutdown?
>
> The XPRT_LOCKED state.
>

IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
how could you hit the original race? There should be no concurrent
callers to xs_reset_transport on the same xprt, right?

AFAICT, that bit is not set in the xprt_destroy codepath, which may be
the root cause of the problem. How would we take it there anyway?
xprt_destroy is void return, and may not be called in the context of a
rpc_task. If it's contended, what do we do? Sleep until it's cleared?
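
For illustration, a purely hypothetical and untested sketch of the "sleep
until it's cleared" idea in the destroy path, using the generic
wait_on_bit_lock() helper:

	/* Sleep until XPRT_LOCKED is free, then take it. The xprt is
	 * about to be freed, so the bit is never released again.
	 */
	wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
	xs_close(xprt);
	xs_xprt_free(xprt);
	module_put(THIS_MODULE);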

--
Jeff Layton <[email protected]>

2015-09-17 14:50:25

by Trond Myklebust

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On Thu, 2015-09-17 at 10:18 -0400, Jeff Layton wrote:
> On Thu, 17 Sep 2015 09:38:33 -0400
> Trond Myklebust <[email protected]> wrote:
>
> > On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <
> > [email protected]> wrote:
> > > On Tue, 15 Sep 2015 16:49:23 +0100
> > > "Suzuki K. Poulose" <[email protected]> wrote:
> > >
> > > > net/sunrpc/xprtsock.c | 9 ++++++++-
> > > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > > > index 7be90bc..6f4789d 100644
> > > > --- a/net/sunrpc/xprtsock.c
> > > > +++ b/net/sunrpc/xprtsock.c
> > > > @@ -822,9 +822,16 @@ static void xs_reset_transport(struct
> > > > sock_xprt *transport)
> > > > if (atomic_read(&transport->xprt.swapper))
> > > > sk_clear_memalloc(sk);
> > > >
> > > > - kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > + if (sock)
> > > > + kernel_sock_shutdown(sock, SHUT_RDWR);
> > > >
> > >
> > > Good catch, but...isn't this still racy? What prevents transport
> > > ->sock
> > > being set to NULL after you assign it to "sock" but before
> > > calling
> > > kernel_sock_shutdown?
> >
> > The XPRT_LOCKED state.
> >
>
> IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
> how could you hit the original race? There should be no concurrent
> callers to xs_reset_transport on the same xprt, right?

Correct. The only exception is xs_destroy.

> AFAICT, that bit is not set in the xprt_destroy codepath, which may
> be
> the root cause of the problem. How would we take it there anyway?
> xprt_destroy is void return, and may not be called in the context of
> a
> rpc_task. If it's contended, what do we do? Sleep until it's
> cleared?
>

How about the following.

8<-----------------------------------------------------------------

2015-09-17 14:59:13

by Jeff Layton

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On Thu, 17 Sep 2015 10:50:01 -0400
Trond Myklebust <[email protected]> wrote:

> On Thu, 2015-09-17 at 10:18 -0400, Jeff Layton wrote:
> > On Thu, 17 Sep 2015 09:38:33 -0400
> > Trond Myklebust <[email protected]> wrote:
> >
> > > On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <
> > > [email protected]> wrote:
> > > > On Tue, 15 Sep 2015 16:49:23 +0100
> > > > "Suzuki K. Poulose" <[email protected]> wrote:
> > > >
> > > > > net/sunrpc/xprtsock.c | 9 ++++++++-
> > > > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > > > > index 7be90bc..6f4789d 100644
> > > > > --- a/net/sunrpc/xprtsock.c
> > > > > +++ b/net/sunrpc/xprtsock.c
> > > > > @@ -822,9 +822,16 @@ static void xs_reset_transport(struct
> > > > > sock_xprt *transport)
> > > > > if (atomic_read(&transport->xprt.swapper))
> > > > > sk_clear_memalloc(sk);
> > > > >
> > > > > - kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > > + if (sock)
> > > > > + kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > >
> > > >
> > > > Good catch, but...isn't this still racy? What prevents transport
> > > > ->sock
> > > > being set to NULL after you assign it to "sock" but before
> > > > calling
> > > > kernel_sock_shutdown?
> > >
> > > The XPRT_LOCKED state.
> > >
> >
> > IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
> > how could you hit the original race? There should be no concurrent
> > callers to xs_reset_transport on the same xprt, right?
>
> Correct. The only exception is xs_destroy.
>
> > AFAICT, that bit is not set in the xprt_destroy codepath, which may
> > be
> > the root cause of the problem. How would we take it there anyway?
> > xprt_destroy is void return, and may not be called in the context of
> > a
> > rpc_task. If it's contended, what do we do? Sleep until it's
> > cleared?
> >
>
> How about the following.
>
> 8<-----------------------------------------------------------------
> From e2e68218e66c6b0715fd6b8f1b3092694a7c0e62 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <[email protected]>
> Date: Thu, 17 Sep 2015 10:42:27 -0400
> Subject: [PATCH] SUNRPC: Fix races between socket connection and destroy code
>
> When we're destroying the socket transport, we need to ensure that
> we cancel any existing delayed connection attempts, and order them
> w.r.t. the call to xs_close().
>
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprtsock.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc1a7c2..d2dfbd043bea 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -881,8 +881,11 @@ static void xs_xprt_free(struct rpc_xprt *xprt)
> */
> static void xs_destroy(struct rpc_xprt *xprt)
> {
> + struct sock_xprt *transport = container_of(xprt,
> + struct sock_xprt, xprt);
> dprintk("RPC: xs_destroy xprt %p\n", xprt);
>
> + cancel_delayed_work_sync(&transport->connect_worker);
> xs_close(xprt);
> xs_xprt_free(xprt);
> module_put(THIS_MODULE);

Yeah, that looks like it might do it. The only other xs_destroy callers
are in the connect codepath so canceling the work should prevent the
race. So...

Acked-by: Jeff Layton <[email protected]>

It wouldn't hurt to update the comments over xs_close too for
posterity. They currently say:

* The caller _must_ be holding XPRT_LOCKED in order to avoid issues with
* xs_reset_transport() zeroing the socket from underneath a writer.

...but that rule is clearly broken here.

2015-09-18 11:16:53

by Suzuki K Poulose

Subject: Re: [PATCH] SUNRPC: Fix a race in xs_reset_transport

On 17/09/15 15:50, Trond Myklebust wrote:
> On Thu, 2015-09-17 at 10:18 -0400, Jeff Layton wrote:
>> On Thu, 17 Sep 2015 09:38:33 -0400
>> Trond Myklebust <[email protected]> wrote:
>>
>>> On Tue, Sep 15, 2015 at 2:52 PM, Jeff Layton <
>>> [email protected]> wrote:
>>>> On Tue, 15 Sep 2015 16:49:23 +0100
>>>> "Suzuki K. Poulose" <[email protected]> wrote:
>>>>
>>>>> net/sunrpc/xprtsock.c | 9 ++++++++-
>>>>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>>>>

...

>>
>> IDGI -- if the XPRT_LOCKED bit was supposed to prevent that, then
>> how could you hit the original race? There should be no concurrent
>> callers to xs_reset_transport on the same xprt, right?
>
> Correct. The only exception is xs_destroy.
>
>> AFAICT, that bit is not set in the xprt_destroy codepath, which may
>> be
>> the root cause of the problem. How would we take it there anyway?
>> xprt_destroy is void return, and may not be called in the context of
>> a
>> rpc_task. If it's contended, what do we do? Sleep until it's
>> cleared?
>>
>
> How about the following.
>
> 8<-----------------------------------------------------------------
> From e2e68218e66c6b0715fd6b8f1b3092694a7c0e62 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <[email protected]>
> Date: Thu, 17 Sep 2015 10:42:27 -0400
> Subject: [PATCH] SUNRPC: Fix races between socket connection and destroy code
>
> When we're destroying the socket transport, we need to ensure that
> we cancel any existing delayed connection attempts, and order them
> w.r.t. the call to xs_close().
>
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprtsock.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc1a7c2..d2dfbd043bea 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -881,8 +881,11 @@ static void xs_xprt_free(struct rpc_xprt *xprt)
> */
> static void xs_destroy(struct rpc_xprt *xprt)
> {
> + struct sock_xprt *transport = container_of(xprt,
> + struct sock_xprt, xprt);
> dprintk("RPC: xs_destroy xprt %p\n", xprt);
>
> + cancel_delayed_work_sync(&transport->connect_worker);
> xs_close(xprt);
> xs_xprt_free(xprt);
> module_put(THIS_MODULE);


That doesn't fix it for me:

Root-NFS: nfsroot=/work/local/data/rootfs/arm64-rootfs-fvp,tcp,vers=2
NFS: nfs mount opts='vers=2,udp,rsize=4096,wsize=4096,tcp,vers=2,nolock,addr=your.nfs.server'
NFS: parsing nfs mount option 'vers=2'
NFS: parsing nfs mount option 'udp'
NFS: parsing nfs mount option 'rsize=4096'
NFS: parsing nfs mount option 'wsize=4096'
NFS: parsing nfs mount option 'tcp'
NFS: parsing nfs mount option 'vers=2'
NFS: parsing nfs mount option 'nolock'
NFS: parsing nfs mount option 'addr=your.nfs.server'
NFS: MNTPATH: '/work/local/data/rootfs/arm64-rootfs-fvp'
NFS: sending MNT request for your.nfs.server:/work/local/data/rootfs/arm64-rootfs-fvp
RPC: set up xprt to your.nfs.server (autobind) via tcp
RPC: created transport ffff800077e9d000 with 65536 slots
RPC: creating mount client for your.nfs.server (xprt ffff800077e9d000)
RPC: creating UNIX authenticator for client ffff8000787a7800
RPC: new task initialized, procpid 1
RPC: allocated task ffff800077f63600
RPC: 1 __rpc_execute flags=0x680
RPC: 1 call_start mount1 proc NULL (sync)
RPC: 1 call_reserve (status 0)
RPC: 1 reserved req ffff8000787a7600 xid b188dcac
RPC: wake_up_first(ffff800077e9d170 "xprt_sending")
RPC: 1 call_reserveresult (status 0)
RPC: 1 call_refresh (status 0)
RPC: 1 holding NULL cred ffff8000008e2e68
RPC: 1 refreshing NULL cred ffff8000008e2e68
RPC: 1 call_refreshresult (status 0)
RPC: 1 call_allocate (status 0)
RPC: 1 allocated buffer of size 96 at ffff800077e9d800
RPC: 1 call_bind (status 0)
RPC: 1 rpcb_getport_async(your.nfs.server, 100005, 1, 6)
RPC: 1 sleep_on(queue "xprt_binding" time 4294937393)
RPC: 1 added to queue ffff800077e9d0c8 "xprt_binding"
RPC: 1 setting alarm for 60000 ms
RPC: 1 rpcb_getport_async: trying rpcbind version 2
RPC: set up xprt to your.nfs.server (port 111) via tcp
RPC: created transport ffff800077e9e000 with 65536 slots
RPC: creating rpcbind client for your.nfs.server (xprt ffff800077e9e000)
RPC: creating UNIX authenticator for client ffff800078688000
RPC: new task initialized, procpid 1
RPC: allocated task ffff800077f63800
RPC: rpc_release_client(ffff800078688000)
RPC: 2 __rpc_execute flags=0x681
RPC: 2 call_start rpcbind2 proc GETPORT (async)
RPC: 2 call_reserve (status 0)
RPC: 2 reserved req ffff8000787a7e00 xid 591ea4ad
RPC: wake_up_first(ffff800077e9e170 "xprt_sending")
RPC: 2 call_reserveresult (status 0)
RPC: 2 call_refresh (status 0)
RPC: 2 looking up UNIX cred
RPC: looking up UNIX cred
RPC: allocating UNIX cred for uid 0 gid 0
RPC: 2 refreshing UNIX cred ffff800077ee5e40
RPC: 2 call_refreshresult (status 0)
RPC: 2 call_allocate (status 0)
RPC: 2 allocated buffer of size 512 at ffff800077e9e800
RPC: 2 call_bind (status 0)
RPC: 2 call_connect xprt ffff800077e9e000 is not connected
RPC: 2 xprt_connect xprt ffff800077e9e000 is not connected
RPC: 2 sleep_on(queue "xprt_pending" time 4294937393)
RPC: 2 added to queue ffff800077e9e218 "xprt_pending"
RPC: 2 setting alarm for 60000 ms
RPC: xs_connect scheduled xprt ffff800077e9e000
RPC: worker connecting xprt ffff800077e9e000 via tcp to your.nfs.server (port 111)
RPC: xs_tcp_state_change client ffff800077e9e000...
RPC: state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
RPC: 2 __rpc_wake_up_task (now 4294937393)
RPC: 2 disabling timer
RPC: 2 removed from queue ffff800077e9e218 "xprt_pending"
RPC: __rpc_wake_up_task done
RPC: ffff800077e9e000 connect status 115 connected 1 sock state 1
RPC: wake_up_first(ffff800077e9e170 "xprt_sending")
RPC: 2 __rpc_execute flags=0x681
RPC: 2 xprt_connect_status: retrying
RPC: 2 call_connect_status (status -11)
RPC: 2 call_timeout (minor)
RPC: 2 call_bind (status 0)
RPC: 2 call_connect xprt ffff800077e9e000 is connected
RPC: 2 call_transmit (status 0)
RPC: 2 xprt_prepare_transmit
RPC: 2 rpc_xdr_encode (status 0)
RPC: 2 marshaling UNIX cred ffff800077ee5e40
RPC: 2 using AUTH_UNIX cred ffff800077ee5e40 to wrap rpc data
RPC: 2 encoding PMAP_GETPORT call (100005, 1, 6, 0)
RPC: 2 xprt_transmit(92)
RPC: xs_tcp_data_ready...
RPC: xs_tcp_data_recv started
RPC: reading TCP record fragment of length 28
RPC: reading XID (4 bytes)
RPC: reading request with XID 591ea4ad
RPC: reading CALL/REPLY flag (4 bytes)
RPC: read reply XID 591ea4ad
RPC: XID 591ea4ad read 20 bytes
RPC: xprt = ffff800077e9e000, tcp_copied = 28, tcp_offset = 28, tcp_reclen = 28
RPC: 2 xid 591ea4ad complete (28 bytes received)
RPC: xs_tcp_data_recv done
RPC: xs_tcp_send_request(92) = 0
RPC: 2 xmit complete
RPC: wake_up_first(ffff800077e9e170 "xprt_sending")
RPC: 2 call_status (status 28)
RPC: 2 call_decode (status 28)
RPC: 2 validating UNIX cred ffff800077ee5e40
RPC: 2 using AUTH_UNIX cred ffff800077ee5e40 to unwrap rpc data
RPC: 2 PMAP_GETPORT result: 58401
RPC: 2 call_decode result 0
RPC: setting port for xprt ffff800077e9d000 to 58401
RPC: 2 rpcb_getport_done(status 0, port 58401)
RPC: 2 return 0, status 0
RPC: 2 release task
RPC: freeing buffer of size 512 at ffff800077e9e800
RPC: 2 release request ffff8000787a7e00
RPC: wake_up_first(ffff800077e9e2c0 "xprt_backlog")
RPC: rpc_release_client(ffff800078688000)
RPC: destroying rpcbind client for your.nfs.server
RPC: destroying transport ffff800077e9e000
RPC: xs_destroy xprt ffff800077e9e000
RPC: xs_close xprt ffff800077e9e000
RPC: xs_reset xprt ffff800077e9e000
RPC: xs_tcp_state_change client ffff800077e9e000...
RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffff800077e9e000...
RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: xs_tcp_state_change client ffff800077e9e000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffff800077e9e000
RPC: xs_tcp_state_change client ffff800077e9e000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffff800077e9e000
RPC: xs_tcp_data_ready...
RPC: xs_reset xprt ffff800077e9e000
RPC: xs_tcp_state_change client ffff800077e9e000...
RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
RPC: disconnected transport ffff800077e9e000
RPC: wake_up_first(ffff800077e9e170 "xprt_sending")
------------[ cut here ]------------
kernel BUG at fs/inode.c:1493!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 2 PID: 1 Comm: swapper/0 Tainted: G S 4.3.0-rc1+ #885
Hardware name: FVP Base (DT)
task: ffff800078478000 ti: ffff800078454000 task.ti: ffff800078454000
PC is at iput+0x144/0x170
LR is at sock_release+0xbc/0xdc
pc : [<ffff8000001b38a8>] lr : [<ffff8000004d0a0c>] pstate: 40400045
sp : ffff8000784574e0
x29: ffff8000784574e0 x28: 0000000000000002
x27: ffff800078688000 x26: ffff800077ee7cc0
x25: ffff80000062c000 x24: ffff800077e9d388
x23: ffff800077e9d0c8 x22: ffff800077f08180
x21: ffff80007808e628 x20: ffff80007808e6d8
x19: ffff80007808e5b0 x18: 0000000000000007
x17: 000000000000000e x16: ffff000000294fff
x15: ffffffffffffffff x14: 0ffffffffffffffe
x13: 0000000000000020 x12: 0101010101010101
x11: 0000000000000167 x10: 0000000000000750
x9 : ffff8000784573c0 x8 : ffff8000784787b0
x7 : ffff80000087a610 x6 : 0000000000000001
x5 : 0000000000000000 x4 : 00000000fffffffe
x3 : 0000000000000000 x2 : ffffffffffffffff
x1 : ffff800078454000 x0 : 0000000000000060

Call trace:
[<ffff8000001b38a8>] iput+0x144/0x170
[<ffff8000004d0a08>] sock_release+0xb8/0xdc
[<ffff800000577e88>] xs_reset_transport+0x98/0xcc
[<ffff800000577f0c>] xs_close+0x50/0x6c
[<ffff800000577f50>] xs_destroy+0x28/0x64
[<ffff800000575000>] xprt_destroy+0x68/0x8c
[<ffff80000057680c>] xprt_put+0x24/0x30
[<ffff800000571754>] rpc_free_client+0x78/0xd8
[<ffff80000057191c>] rpc_release_client+0x94/0xec
[<ffff800000584aa4>] rpcb_getport_async+0x2d8/0x490
[<ffff800000571218>] call_bind+0x58/0x88
[<ffff80000057b540>] __rpc_execute+0x64/0x338
[<ffff80000057bb20>] rpc_execute+0x5c/0x6c
[<ffff800000573eac>] rpc_run_task+0x8c/0xb0
[<ffff800000573f14>] rpc_call_sync+0x44/0xb0
[<ffff800000573fd0>] rpc_ping+0x50/0x70
[<ffff8000005740a0>] rpc_create_xprt+0xb0/0xcc
[<ffff80000057415c>] rpc_create+0xa0/0x150
[<ffff8000002774e0>] nfs_mount+0xcc/0x234
[<ffff80000026bc14>] nfs_request_mount+0xa8/0x12c
[<ffff80000026d4f0>] nfs_try_mount+0x54/0x2b4
[<ffff80000026e0cc>] nfs_fs_mount+0x5cc/0xac0
[<ffff80000019e124>] mount_fs+0x38/0x158
[<ffff8000001b7130>] vfs_kern_mount+0x48/0x11c
[<ffff8000001ba314>] do_mount+0x204/0xc00
[<ffff8000001bb034>] SyS_mount+0x78/0xd0
[<ffff8000007e9fac>] mount_root+0x80/0x148
[<ffff8000007ea1ac>] prepare_namespace+0x138/0x184
[<ffff8000007e9b20>] kernel_init_freeable+0x1cc/0x1f4
[<ffff8000005a1a18>] kernel_init+0xc/0xd8
Code: b5fffc00 17ffffed d4210000 17ffffd7 (d4210000)
---[ end trace 9b8c1c0ef92dab57 ]---
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b


Thanks
Suzuki


2015-09-18 11:19:11

by Suzuki K Poulose

[permalink] [raw]
Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On 16/09/15 12:17, Jeff Layton wrote:
> On Wed, 16 Sep 2015 10:35:49 +0100
> "Suzuki K. Poulose" <[email protected]> wrote:
>
>> From: "Suzuki K. Poulose" <[email protected]>
>>

...

>> + write_unlock_bh(&sk->sk_callback_lock);
>> + return;
>> + }
>> + sock = transport->sock;
>> +
>> transport->inet = NULL;
>> transport->sock = NULL;
>>
>> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct sock_xprt *transport)
>> xs_restore_old_callbacks(transport, sk);
>> xprt_clear_connected(xprt);
>> write_unlock_bh(&sk->sk_callback_lock);
>> +
>> + if (sock)
>> + kernel_sock_shutdown(sock, SHUT_RDWR);
>> +
>> xs_sock_reset_connection_flags(xprt);
>>
>> trace_rpc_socket_close(xprt, sock);
>
> Better, but now I'm wondering...is it problematic to restore the old
> callbacks before calling kernel_sock_shutdown? I can't quite tell
> whether it matters in all cases.
>
> It might be best to just go ahead and take the spinlock twice here. Do
> it once to clear the transport->sock pointer, call
> kernel_sock_shutdown, and then take it again to restore the old
> callbacks, etc.
>
> I don't know though...I get the feeling there are races all over the
> place in this code. It seems like there's a similar one wrt the
> transport->inet pointer. It seems a little silly that we clear it under
> the sk->sk_callback_lock. You have to dereference that pointer
> in order to get to the lock.
>
> Maybe the right solution is to use an xchg to swap the inet pointer
> with NULL so it can act as a gatekeeper. Whoever gets there first does
> the rest of the shutdown.
>
> Something like this maybe? Would this also fix the original problem?
> Note that this patch is untested...
>
> [PATCH] sunrpc: use xchg to fetch and clear the transport->inet pointer in xs_reset_transport
>
> Reported-by: "Suzuki K. Poulose" <[email protected]>
> Signed-off-by: Jeff Layton <[email protected]>
> ---
> net/sunrpc/xprtsock.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 7be90bc1a7c2..57f79dcab493 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
> static void xs_reset_transport(struct sock_xprt *transport)
> {
> struct socket *sock = transport->sock;
> - struct sock *sk = transport->inet;
> + struct sock *sk;
> struct rpc_xprt *xprt = &transport->xprt;
>
> + sk = xchg(&transport->inet, NULL);
> if (sk == NULL)
> return;
>
> @@ -825,7 +826,6 @@ static void xs_reset_transport(struct sock_xprt *transport)
> kernel_sock_shutdown(sock, SHUT_RDWR);
>
> write_lock_bh(&sk->sk_callback_lock);
> - transport->inet = NULL;
> transport->sock = NULL;
>
> sk->sk_user_data = NULL;
>


This one seemed to fix it, so if it matters:

Tested-by: Suzuki K. Poulose <[email protected]>

Suzuki


2015-09-18 16:51:58

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On Fri, 2015-09-18 at 12:19 +0100, Suzuki K. Poulose wrote:
> On 16/09/15 12:17, Jeff Layton wrote:
> > On Wed, 16 Sep 2015 10:35:49 +0100
> > "Suzuki K. Poulose" <[email protected]> wrote:
> >
> > > From: "Suzuki K. Poulose" <[email protected]>
> > >
>
> ...
>
> > > + write_unlock_bh(&sk->sk_callback_lock);
> > > + return;
> > > + }
> > > + sock = transport->sock;
> > > +
> > > transport->inet = NULL;
> > > transport->sock = NULL;
> > >
> > > @@ -833,6 +838,10 @@ static void xs_reset_transport(struct
> > > sock_xprt *transport)
> > > xs_restore_old_callbacks(transport, sk);
> > > xprt_clear_connected(xprt);
> > > write_unlock_bh(&sk->sk_callback_lock);
> > > +
> > > + if (sock)
> > > + kernel_sock_shutdown(sock, SHUT_RDWR);
> > > +
> > > xs_sock_reset_connection_flags(xprt);
> > >
> > > trace_rpc_socket_close(xprt, sock);
> >
> > Better, but now I'm wondering...is it problematic to restore the
> > old
> > callbacks before calling kernel_sock_shutdown? I can't quite tell
> > whether it matters in all cases.
> >
> > It might be best to just go ahead and take the spinlock twice here.
> > Do
> > it once to clear the transport->sock pointer, call
> > kernel_sock_shutdown, and then take it again to restore the old
> > callbacks, etc.
> >
> > I don't know though...I get the feeling there are races all over
> > the
> > place in this code. It seems like there's a similar one wrt the
> > transport->inet pointer. It seems a little silly that we clear it
> > under
> > the sk->sk_callback_lock. You have to dereference that pointer
> > in order to get to the lock.
> >
> > Maybe the right solution is to use an xchg to swap the inet pointer
> > with NULL so it can act as a gatekeeper. Whoever gets there first
> > does
> > the rest of the shutdown.
> >
> > Something like this maybe? Would this also fix the original
> > problem?
> > Note that this patch is untested...
> >
> > [PATCH] sunrpc: use xchg to fetch and clear the transport->inet
> > pointer in xs_reset_transport
> >
> > Reported-by: "Suzuki K. Poulose" <[email protected]>
> > Signed-off-by: Jeff Layton <[email protected]>
> > ---
> > net/sunrpc/xprtsock.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > index 7be90bc1a7c2..57f79dcab493 100644
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
> > static void xs_reset_transport(struct sock_xprt *transport)
> > {
> > struct socket *sock = transport->sock;
> > - struct sock *sk = transport->inet;
> > + struct sock *sk;
> > struct rpc_xprt *xprt = &transport->xprt;
> >
> > + sk = xchg(&transport->inet, NULL);
> > if (sk == NULL)
> > return;
> >
> > @@ -825,7 +826,6 @@ static void xs_reset_transport(struct sock_xprt
> > *transport)
> > kernel_sock_shutdown(sock, SHUT_RDWR);
> >
> > write_lock_bh(&sk->sk_callback_lock);
> > - transport->inet = NULL;
> > transport->sock = NULL;
> >
> > sk->sk_user_data = NULL;
> >
>
>
> This one seemed to fix it, so if it matters:
>
> Tested-by: Suzuki K. Poulose <[email protected]>


I don't think it does. It addresses a symptom, but the actual problem
is that we're running 2 parallel close() calls on the same socket
during a shutdown. That must not happen because it means we have
something else trying to use the socket while it is being freed.

I think what is happening is that we're triggering the socket autoclose
mechanism from the state change callback. You're seeing the problem
more frequently because we added the call to kernel_sock_shutdown() as
part of the socket shutdown, but AFAICS, it could still be triggered
from some external event such as a server-initiated shutdown that
happened at the same time.
In fact, looking at the code, it could even be triggered from the data
receive side of things.
Both of these things are bad, because autoclose puts the transport
struct that is being freed onto a workqueue. That again can lead to a
really bad use-after-free situation if the timing is just a little
different.
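
For reference, the sequence I have in mind is roughly the one below. It is
only a sketch: the function names are the real ones from net/sunrpc/xprt.c
and net/sunrpc/xprtsock.c, but the exact call chain is reconstructed here
rather than quoted from the source.

  destroy path                          socket callback (shutdown / server FIN)
  ------------                          ----------------------------------------
  xprt_destroy(xprt)
    xs_destroy -> xs_close
      xs_reset_transport
        kernel_sock_shutdown(sock) -->  xs_tcp_state_change(sk)
                                          xs_tcp_force_close(xprt)
                                            xprt_force_disconnect(xprt)
                                              queue_work(rpciod_workqueue,
                                                         &xprt->task_cleanup)
    ... transport struct is freed ...
                                        xprt_autoclose(work) runs afterwards
                                          xprt->ops->close(xprt)  <-- use after free

Either the explicit kernel_sock_shutdown() during teardown or a
server-initiated close can fire the callback; once the work item is
queued, nothing stops the transport from being freed underneath it.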

So how about the following patch? It should apply cleanly on top of the
first one (which is still needed, btw).

8<--------------------------------------------------------------------

2015-09-18 22:00:30

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On Fri, 2015-09-18 at 12:51 -0400, Trond Myklebust wrote:
> On Fri, 2015-09-18 at 12:19 +0100, Suzuki K. Poulose wrote:
> > On 16/09/15 12:17, Jeff Layton wrote:
> > > On Wed, 16 Sep 2015 10:35:49 +0100
> > > "Suzuki K. Poulose" <[email protected]> wrote:
> > >
> > > > From: "Suzuki K. Poulose" <[email protected]>
> > > >
> >
> > ...
> >
> > > > + write_unlock_bh(&sk->sk_callback_lock);
> > > > + return;
> > > > + }
> > > > + sock = transport->sock;
> > > > +
> > > > transport->inet = NULL;
> > > > transport->sock = NULL;
> > > >
> > > > @@ -833,6 +838,10 @@ static void xs_reset_transport(struct
> > > > sock_xprt *transport)
> > > > xs_restore_old_callbacks(transport, sk);
> > > > xprt_clear_connected(xprt);
> > > > write_unlock_bh(&sk->sk_callback_lock);
> > > > +
> > > > + if (sock)
> > > > + kernel_sock_shutdown(sock, SHUT_RDWR);
> > > > +
> > > > xs_sock_reset_connection_flags(xprt);
> > > >
> > > > trace_rpc_socket_close(xprt, sock);
> > >
> > > Better, but now I'm wondering...is it problematic to restore the
> > > old
> > > callbacks before calling kernel_sock_shutdown? I can't quite tell
> > > whether it matters in all cases.
> > >
> > > It might be best to just go ahead and take the spinlock twice
> > > here.
> > > Do
> > > it once to clear the transport->sock pointer, call
> > > kernel_sock_shutdown, and then take it again to restore the old
> > > callbacks, etc.
> > >
> > > I don't know though...I get the feeling there are races all over
> > > the
> > > place in this code. It seems like there's a similar one wrt
> > > the
> > > transport->inet pointer. It seems a little silly that we clear it
> > > under
> > > the sk->sk_callback_lock. You have to dereference that pointer
> > > in order to get to the lock.
> > >
> > > Maybe the right solution is to use an xchg to swap the inet
> > > pointer
> > > with NULL so it can act as a gatekeeper. Whoever gets there first
> > > does
> > > the rest of the shutdown.
> > >
> > > Something like this maybe? Would this also fix the original
> > > problem?
> > > Note that this patch is untested...
> > >
> > > [PATCH] sunrpc: use xchg to fetch and clear the transport->inet
> > > pointer in xs_reset_transport
> > >
> > > Reported-by: "Suzuki K. Poulose" <[email protected]>
> > > Signed-off-by: Jeff Layton <[email protected]>
> > > ---
> > > net/sunrpc/xprtsock.c | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > > index 7be90bc1a7c2..57f79dcab493 100644
> > > --- a/net/sunrpc/xprtsock.c
> > > +++ b/net/sunrpc/xprtsock.c
> > > @@ -813,9 +813,10 @@ static void xs_error_report(struct sock *sk)
> > > static void xs_reset_transport(struct sock_xprt *transport)
> > > {
> > > struct socket *sock = transport->sock;
> > > - struct sock *sk = transport->inet;
> > > + struct sock *sk;
> > > struct rpc_xprt *xprt = &transport->xprt;
> > >
> > > + sk = xchg(&transport->inet, NULL);
> > > if (sk == NULL)
> > > return;
> > >
> > > @@ -825,7 +826,6 @@ static void xs_reset_transport(struct
> > > sock_xprt
> > > *transport)
> > > kernel_sock_shutdown(sock, SHUT_RDWR);
> > >
> > > write_lock_bh(&sk->sk_callback_lock);
> > > - transport->inet = NULL;
> > > transport->sock = NULL;
> > >
> > > sk->sk_user_data = NULL;
> > >
> >
> >
> > This one seemed to fix it, so if it matters:
> >
> > Tested-by: Suzuki K. Poulose <[email protected]>
>
>
> I don't think it does. It addresses a symptom, but the actual problem
> is that we're running 2 parallel close() calls on the same socket
> during a shutdown. That must not happen because it means we have
> something else trying to use the socket while it is being freed.
>
> I think what is happening is that we're triggering the socket
> autoclose
> mechanism from the state change callback. You're seeing the problem
> more frequently because we added the call to kernel_sock_shutdown()
> as
> part of the socket shutdown, but AFAICS, it could still be triggered
> from some external event such as a server-initiated shutdown that
> happened at the same time.
> In fact, looking at the code, it could even be triggered from the
> data
> receive side of things.
> Both of these things are bad, because autoclose puts the transport
> struct that is being freed onto a workqueue. That again can lead to a
> really bad use-after-free situation if the timing is just a little
> different.
>
> So how about the following patch? It should apply cleanly on top of
> the
> first one (which is still needed, btw).

Having thought some more about this, I think the safest thing in order
to avoid races is simply to have the shutdown set XPRT_LOCKED. That way
we can keep the current desirable behaviour of closing the socket
automatically any time the server initiates a close, while still
preventing it during shutdown.

8<-------------------------------------------------------------------

2015-09-19 15:07:47

by Trond Myklebust

[permalink] [raw]
Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On Sat, Sep 19, 2015 at 8:08 AM, Jeff Layton <[email protected]> wrote:
> On Fri, 18 Sep 2015 18:00:07 -0400
> Trond Myklebust <[email protected]> wrote:
>
>> 8<-------------------------------------------------------------------
>> From 3e1c9d8092e2fa4509d84a00fcf21e7e0c581fe2 Mon Sep 17 00:00:00 2001
>> From: Trond Myklebust <[email protected]>
>> Date: Fri, 18 Sep 2015 15:53:24 -0400
>> Subject: [PATCH] SUNRPC: Lock the transport layer on shutdown
>>
>> Avoid all races with the connect/disconnect handlers by taking the
>> transport lock.
>>
>> Signed-off-by: Trond Myklebust <[email protected]>
>> ---
>> net/sunrpc/xprt.c | 6 ++++++
>> 1 file changed, 6 insertions(+)
>>
>> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
>> index ab5dd621ae0c..2e98f4a243e5 100644
>> --- a/net/sunrpc/xprt.c
>> +++ b/net/sunrpc/xprt.c
>> @@ -614,6 +614,7 @@ static void xprt_autoclose(struct work_struct *work)
>> clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
>> xprt->ops->close(xprt);
>> xprt_release_write(xprt, NULL);
>> + wake_up_bit(&xprt->state, XPRT_LOCKED);
>> }
>>
>> /**
>> @@ -723,6 +724,7 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
>> xprt->ops->release_xprt(xprt, NULL);
>> out:
>> spin_unlock_bh(&xprt->transport_lock);
>> + wake_up_bit(&xprt->state, XPRT_LOCKED);
>> }
>>
>> /**
>> @@ -1394,6 +1396,10 @@ out:
>> static void xprt_destroy(struct rpc_xprt *xprt)
>> {
>> dprintk("RPC: destroying transport %p\n", xprt);
>> +
>> + /* Exclude transport connect/disconnect handlers */
>> + wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
>> +
>> del_timer_sync(&xprt->timer);
>>
>> rpc_xprt_debugfs_unregister(xprt);
>
>
> Yeah, that does seem cleaner and better follows the xs_close locking
> rules. Are those wake_up_bit calls sufficient though? Would it be
> better to just add it to xprt_clear_locked? In principle we should only
> sleep on this bit if there are no other users of the xprt, but I'm
> having a hard time following the code to ensure that that's the case.

If we're in xprt_destroy, then the refcount on the xprt has to be
zero. There should be no struct rpc_clnt referencing it, and hence
also no rpc_tasks to set/clear the lock state.

> Also, it seems like part of the problem is that we're ending up with
> these workqueue jobs being requeued after we've cancelled them. Perhaps
> we ought to have some way to ensure that once the xprt is on its way to
> destruction, the workqueue jobs cannot be requeued?

The workqueue jobs are required to own the lock before being enqueued,
since they need exclusive access to the socket until completed.
Since xprt_destroy now owns the lock, that means the socket callbacks
etc. can no longer enqueue any new jobs.
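
For reference, the gate on the enqueue side is roughly the following,
paraphrased from net/sunrpc/xprt.c around this kernel version (a sketch,
not a verbatim quote):

	void xprt_force_disconnect(struct rpc_xprt *xprt)
	{
		spin_lock_bh(&xprt->transport_lock);
		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
		/* Only queue the autoclose work if we can grab XPRT_LOCKED */
		if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
			queue_work(rpciod_workqueue, &xprt->task_cleanup);
		xprt_wake_pending_tasks(xprt, -EAGAIN);
		spin_unlock_bh(&xprt->transport_lock);
	}

With xprt_destroy() holding XPRT_LOCKED via wait_on_bit_lock(), the
test_and_set_bit() above fails, so the socket callbacks can no longer
queue xprt->task_cleanup against a transport that is on its way out.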

2015-09-21 13:48:55

by Suzuki K Poulose

[permalink] [raw]
Subject: Re: [PATCHv2] SUNRPC: Fix a race in xs_reset_transport

On 18/09/15 23:00, Trond Myklebust wrote:
> On Fri, 2015-09-18 at 12:51 -0400, Trond Myklebust wrote:
>> On Fri, 2015-09-18 at 12:19 +0100, Suzuki K. Poulose wrote:
>>> On 16/09/15 12:17, Jeff Layton wrote:
>>>> On Wed, 16 Sep 2015 10:35:49 +0100
>>>> "Suzuki K. Poulose" <[email protected]> wrote:
>>>>
>>>>> From: "Suzuki K. Poulose" <[email protected]>
>>>>>
>>>
>>> ...
>>>
>>>>> + write_unlock_bh(&sk->sk_callback_lock);
>>>>> + return;
>>>>> + }
>>>>> + sock = transport->sock;
>>>>> +
>>>>> transport->inet = NULL;
>>>>> transport->sock = NULL;
>>>>>
>>>>> @@ -833,6 +838,10 @@ static void xs_reset_transport(struct
>>>>> sock_xprt *transport)
>>>>> xs_restore_old_callbacks(transport, sk);
>>>>> xprt_clear_connected(xprt);
>>>>> write_unlock_bh(&sk->sk_callback_lock);
>>>>> +

...

>>
>> So how about the following patch? It should apply cleanly on top of
>> the
>> first one (which is still needed, btw).
>
> Having thought some more about this, I think the safest thing in order
> to avoid races is simply to have the shutdown set XPRT_LOCKED. That way
> we can keep the current desirable behaviour of closing the socket
> automatically any time the server initiates a close, while still
> preventing it during shutdown.
>
> 8<-------------------------------------------------------------------
> From 3e1c9d8092e2fa4509d84a00fcf21e7e0c581fe2 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <[email protected]>
> Date: Fri, 18 Sep 2015 15:53:24 -0400
> Subject: [PATCH] SUNRPC: Lock the transport layer on shutdown
>
> Avoid all races with the connect/disconnect handlers by taking the
> transport lock.
>
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprt.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index ab5dd621ae0c..2e98f4a243e5 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -614,6 +614,7 @@ static void xprt_autoclose(struct work_struct *work)
> clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> xprt->ops->close(xprt);
> xprt_release_write(xprt, NULL);
> + wake_up_bit(&xprt->state, XPRT_LOCKED);
> }
>
> /**
> @@ -723,6 +724,7 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie)
> xprt->ops->release_xprt(xprt, NULL);
> out:
> spin_unlock_bh(&xprt->transport_lock);
> + wake_up_bit(&xprt->state, XPRT_LOCKED);
> }
>
> /**
> @@ -1394,6 +1396,10 @@ out:
> static void xprt_destroy(struct rpc_xprt *xprt)
> {
> dprintk("RPC: destroying transport %p\n", xprt);
> +
> + /* Exclude transport connect/disconnect handlers */
> + wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
> +
> del_timer_sync(&xprt->timer);
>
> rpc_xprt_debugfs_unregister(xprt);
>


That works for me; please feel free to add:

Reported-by: Suzuki K. Poulose <[email protected]>
Tested-by: Suzuki K. Poulose <[email protected]>



Thanks
Suzuki