I recently upgraded my main server to 4.0.4 from 3.19.5 and rkhunter
started reporting a hidden port on my box.
Running unhide-tcp I see this:
# unhide-tcp
Unhide-tcp 20121229
Copyright © 2012 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options:
[*]Starting TCP checking
Found Hidden port that not appears in ss: 946
[*]Starting UDP checking
This scared the hell out of me, as I was thinking I had some kind of
NSA backdoor hooked into my server, monitoring my plans to smuggle
Kinder Überraschung into the USA from Germany. I panicked!
Well, I wasted the day writing modules: the first looks at all the
sockets opened by all processes (via their file descriptors) and prints
their port numbers.
http://rostedt.homelinux.com/private/tasklist.c
But this port wasn't there either.
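For reference, here is a userspace sketch of the same idea (a hedged
analogue, not the tasklist.c module itself): join each process's socket
file descriptors under /proc/<pid>/fd against the kernel's TCP table in
/proc/net/tcp by socket inode, the way netstat does. A port that shows
up in the TCP table but maps to no process's fd is exactly the kind of
"hidden" port rkhunter complains about. Assumes Linux; without root it
cannot see other users' fds, so orphaned_ports() would over-report.

```python
import os
import re

def fd_socket_inodes():
    """Set of socket inodes reachable through some process's fds."""
    inodes = set()
    for pid in filter(str.isdigit, os.listdir('/proc')):
        fd_dir = f'/proc/{pid}/fd'
        try:
            fds = os.listdir(fd_dir)
        except OSError:            # process exited or permission denied
            continue
        for fd in fds:
            try:
                target = os.readlink(f'{fd_dir}/{fd}')
            except OSError:
                continue
            m = re.match(r'socket:\[(\d+)\]', target)
            if m:
                inodes.add(int(m.group(1)))
    return inodes

def tcp_ports_by_inode():
    """Parse /proc/net/tcp: map socket inode -> local port."""
    ports = {}
    with open('/proc/net/tcp') as f:
        next(f)                    # skip the header line
        for line in f:
            fields = line.split()
            # field 1 is "hexaddr:hexport", field 9 is the inode
            local_port = int(fields[1].split(':')[1], 16)
            ports[int(fields[9])] = local_port
    return ports

def orphaned_ports():
    """Ports in the TCP table owned by no visible process."""
    owned = fd_socket_inodes()
    return sorted(port for inode, port in tcp_ports_by_inode().items()
                  if inode != 0 and inode not in owned)
```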
Then I decided to look at the ports in tcp_hashinfo.
http://rostedt.homelinux.com/private/portlist.c
This found the port but no file was connected to it, and worse yet,
when I first ran it without using probe_kernel_read(), it crashed my
kernel, because sk->sk_socket pointed to a freed socket!
Note, each boot, the hidden port is different.
Finally, I decided to bring in the big guns, and inserted a
trace_printk() into the bind logic, to see if I could find the culprit.
After fiddling with it a few times, I found a suspect:
kworker/3:1H-123 [003] ..s. 96.696213: inet_bind_hash: add 946
Bah, it's a kernel thread doing it, via a work queue. I then added a
trace_dump_stack() to find what was calling this, and here it is:
kworker/3:1H-123 [003] ..s. 96.696222: <stack trace>
=> inet_csk_get_port
=> inet_addr_type
=> inet_bind
=> xs_bind
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.18
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> kthread
=> ret_from_fork
=> kthread
I rebooted, and examined what happens. I see the kworker binding that
port, and all seems well:
# netstat -tapn |grep 946
tcp 0 0 192.168.23.9:946 192.168.23.22:55201 ESTABLISHED -
But after waiting a bit, the connection goes into TIME_WAIT and then
just disappears. The bind to the port, however, never gets released,
and from then on that port stays taken.
This never happened with my 3.19 kernels. I would bisect it, but this
is happening on my main server box, which I usually reboot only every
other month for upgrades. Taking it down causes too much disturbance
for me (and my family): when this box is offline, the rest of my
machines basically are too.
I figured this may be enough information for you to fix it. Otherwise I
can try to do the bisect, but that's not going to happen any time soon.
I may just go back to 3.19 for now, so that rkhunter stops complaining
about the hidden port.
If you need any more information, let me know.
-- Steve
On Thu, Jun 11, 2015 at 11:49 PM, Steven Rostedt <[email protected]> wrote:
The only new thing that we're doing with 4.0 is to set SO_REUSEPORT on
the socket before binding the port (commit 4dda9c8a5e34: "SUNRPC: Set
SO_REUSEPORT socket option for TCP connections"). Perhaps there is an
issue with that?
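A minimal demonstration of what that commit enables (a sketch of the
SO_REUSEPORT option's semantics on Linux >= 3.9, not of the SUNRPC code
itself): with the option set before bind(), two sockets may bind the
same address/port.

```python
import socket

# Both sockets opt in to SO_REUSEPORT *before* binding.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s1.bind(('127.0.0.1', 0))
port = s1.getsockname()[1]

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s2.bind(('127.0.0.1', port))   # succeeds only because both set the option
```

A socket that does not set SO_REUSEPORT still gets EADDRINUSE on the
same port.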
Cheers
Trond
On Fri, 2015-06-12 at 10:10 -0400, Trond Myklebust wrote:
> The only new thing that we're doing with 4.0 is to set SO_REUSEPORT on
> the socket before binding the port (commit 4dda9c8a5e34: "SUNRPC: Set
> SO_REUSEPORT socket option for TCP connections"). Perhaps there is an
> issue with that?
Strange, because the usual way to avoid TIME_WAIT is to use SO_LINGER
with linger=0.
And apparently xs_tcp_finish_connecting() has this:
sock_reset_flag(sk, SOCK_LINGER);
tcp_sk(sk)->linger2 = 0;
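For illustration, a small sketch of the SO_LINGER/linger=0 behaviour
referred to here (a userspace demo on Linux, not the kernel code):
closing a socket with linger enabled and a zero timeout aborts the
connection with an RST instead of a FIN, so neither end passes through
TIME_WAIT.

```python
import socket
import struct

srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)

cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()
port = cli.getsockname()[1]

# struct linger { int l_onoff; int l_linger; } -- onoff=1, linger=0
cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
cli.close()                    # aborts the connection with RST

# No TIME_WAIT entry (state 06 in /proc/net/tcp) remains for that port.
```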
Are you sure SO_REUSEADDR was not the thing you wanted?
Steven, have you tried kmemleak?
On Fri, Jun 12, 2015 at 10:40 AM, Eric Dumazet <[email protected]> wrote:
>
> Strange, because the usual way to not have time-wait is to use SO_LINGER
> with linger=0
>
> And apparently xs_tcp_finish_connecting() has this :
>
> sock_reset_flag(sk, SOCK_LINGER);
> tcp_sk(sk)->linger2 = 0;
Are you sure? I thought that SO_LINGER is more about controlling how
the socket behaves w.r.t. waiting for the TCP_CLOSE state to be
achieved (i.e. about aborting the FIN state negotiation early). I've
never observed an effect on the TCP time-wait states.
> Are you sure SO_REUSEADDR was not the thing you wanted ?
Yes. SO_REUSEADDR has the problem that it requires you bind to
something other than 0.0.0.0, so it is less appropriate for outgoing
connections; the RPC code really should not have to worry about
routing and routability of a particular source address.
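One aspect of those SO_REUSEADDR semantics can be shown in a short
userspace sketch (an illustration on Linux, not the RPC code): the
option lets a bind() succeed over a leftover TIME_WAIT entry, but it
does not let you bind over a port that another socket still actively
holds.

```python
import socket

# First socket holds the port without SO_REUSEADDR.
s1 = socket.socket()
s1.bind(('127.0.0.1', 0))
s1.listen(1)
port = s1.getsockname()[1]

# Second socket sets SO_REUSEADDR, but the port is still actively
# bound (not in TIME_WAIT), so the bind fails with EADDRINUSE.
s2 = socket.socket()
s2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
    s2.bind(('127.0.0.1', port))
    conflict = None
except OSError as e:
    conflict = e.errno
```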
Cheers
Trond
On Fri, 12 Jun 2015 07:40:35 -0700
Eric Dumazet <[email protected]> wrote:
> Strange, because the usual way to not have time-wait is to use SO_LINGER
> with linger=0
>
> And apparently xs_tcp_finish_connecting() has this :
>
> sock_reset_flag(sk, SOCK_LINGER);
> tcp_sk(sk)->linger2 = 0;
>
> Are you sure SO_REUSEADDR was not the thing you wanted ?
>
> Steven, have you tried kmemleak ?
Nope, and again, I'm hesitant to add too much debugging. This is my main
server (build server, ssh server, web server, mail server, proxy
server, irc server, etc).
I did, however, convert the dprintk() calls in xprtsock.c and xprt.c
into trace_printk(), and reran it. Here's the output:
(port 684 was the bad one this time)
# tracer: nop
#
# entries-in-buffer/entries-written: 396/396 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
rpc.nfsd-4710 [002] .... 48.615382: xs_local_setup_socket: RPC: worker connecting xprt ffff8800d9018000 via AF_LOCAL to /var/run/rpcbind.sock
rpc.nfsd-4710 [002] .... 48.615393: xs_local_setup_socket: RPC: xprt ffff8800d9018000 connected to /var/run/rpcbind.sock
rpc.nfsd-4710 [002] .... 48.615394: xs_setup_local: RPC: set up xprt to /var/run/rpcbind.sock via AF_LOCAL
rpc.nfsd-4710 [002] .... 48.615399: xprt_create_transport: RPC: created transport ffff8800d9018000 with 65536 slots
rpc.nfsd-4710 [002] .... 48.615416: xprt_alloc_slot: RPC: 1 reserved req ffff8800db829600 xid cb06d5e8
rpc.nfsd-4710 [002] .... 48.615419: xprt_prepare_transmit: RPC: 1 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.615420: xprt_transmit: RPC: 1 xprt_transmit(44)
rpc.nfsd-4710 [002] .... 48.615424: xs_local_send_request: RPC: xs_local_send_request(44) = 0
rpc.nfsd-4710 [002] .... 48.615425: xprt_transmit: RPC: 1 xmit complete
rpcbind-1829 [003] ..s. 48.615503: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [003] ..s. 48.615506: xprt_complete_rqst: RPC: 1 xid cb06d5e8 complete (24 bytes received)
rpc.nfsd-4710 [002] .... 48.615556: xprt_release: RPC: 1 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.615568: xprt_alloc_slot: RPC: 2 reserved req ffff8800db829600 xid cc06d5e8
rpc.nfsd-4710 [002] .... 48.615569: xprt_prepare_transmit: RPC: 2 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.615569: xprt_transmit: RPC: 2 xprt_transmit(44)
rpc.nfsd-4710 [002] .... 48.615578: xs_local_send_request: RPC: xs_local_send_request(44) = 0
rpc.nfsd-4710 [002] .... 48.615578: xprt_transmit: RPC: 2 xmit complete
rpcbind-1829 [003] ..s. 48.615643: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [003] ..s. 48.615645: xprt_complete_rqst: RPC: 2 xid cc06d5e8 complete (24 bytes received)
rpc.nfsd-4710 [002] .... 48.615695: xprt_release: RPC: 2 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.615698: xprt_alloc_slot: RPC: 3 reserved req ffff8800db829600 xid cd06d5e8
rpc.nfsd-4710 [002] .... 48.615699: xprt_prepare_transmit: RPC: 3 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.615700: xprt_transmit: RPC: 3 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.615706: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.615707: xprt_transmit: RPC: 3 xmit complete
rpcbind-1829 [003] ..s. 48.615784: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [003] ..s. 48.615785: xprt_complete_rqst: RPC: 3 xid cd06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.615830: xprt_release: RPC: 3 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.615833: xprt_alloc_slot: RPC: 4 reserved req ffff8800db829600 xid ce06d5e8
rpc.nfsd-4710 [002] .... 48.615834: xprt_prepare_transmit: RPC: 4 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.615835: xprt_transmit: RPC: 4 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.615841: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.615841: xprt_transmit: RPC: 4 xmit complete
rpcbind-1829 [003] ..s. 48.615892: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [003] ..s. 48.615894: xprt_complete_rqst: RPC: 4 xid ce06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.615958: xprt_release: RPC: 4 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.615961: xprt_alloc_slot: RPC: 5 reserved req ffff8800db829600 xid cf06d5e8
rpc.nfsd-4710 [002] .... 48.615962: xprt_prepare_transmit: RPC: 5 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.615966: xprt_transmit: RPC: 5 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.615971: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.615972: xprt_transmit: RPC: 5 xmit complete
rpcbind-1829 [003] ..s. 48.616011: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [003] ..s. 48.616012: xprt_complete_rqst: RPC: 5 xid cf06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616057: xprt_release: RPC: 5 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616059: xprt_alloc_slot: RPC: 6 reserved req ffff8800db829600 xid d006d5e8
rpc.nfsd-4710 [002] .... 48.616060: xprt_prepare_transmit: RPC: 6 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616061: xprt_transmit: RPC: 6 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.616065: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.616066: xprt_transmit: RPC: 6 xmit complete
rpcbind-1829 [003] ..s. 48.616117: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [003] ..s. 48.616119: xprt_complete_rqst: RPC: 6 xid d006d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616163: xprt_release: RPC: 6 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616165: xprt_alloc_slot: RPC: 7 reserved req ffff8800db829600 xid d106d5e8
rpc.nfsd-4710 [002] .... 48.616166: xprt_prepare_transmit: RPC: 7 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616167: xprt_transmit: RPC: 7 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.616172: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.616172: xprt_transmit: RPC: 7 xmit complete
rpcbind-1829 [000] ..s. 48.616247: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616249: xprt_complete_rqst: RPC: 7 xid d106d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616289: xprt_release: RPC: 7 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616296: xprt_alloc_slot: RPC: 8 reserved req ffff8800db829600 xid d206d5e8
rpc.nfsd-4710 [002] .... 48.616297: xprt_prepare_transmit: RPC: 8 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616298: xprt_transmit: RPC: 8 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616302: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616302: xprt_transmit: RPC: 8 xmit complete
rpcbind-1829 [000] ..s. 48.616324: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616326: xprt_complete_rqst: RPC: 8 xid d206d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616340: xprt_release: RPC: 8 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616346: xprt_alloc_slot: RPC: 9 reserved req ffff8800db829600 xid d306d5e8
rpc.nfsd-4710 [002] .... 48.616347: xprt_prepare_transmit: RPC: 9 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616348: xprt_transmit: RPC: 9 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616355: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616355: xprt_transmit: RPC: 9 xmit complete
rpcbind-1829 [000] ..s. 48.616380: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616381: xprt_complete_rqst: RPC: 9 xid d306d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616392: xprt_release: RPC: 9 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616396: xprt_alloc_slot: RPC: 10 reserved req ffff8800db829600 xid d406d5e8
rpc.nfsd-4710 [002] .... 48.616396: xprt_prepare_transmit: RPC: 10 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616397: xprt_transmit: RPC: 10 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616401: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616402: xprt_transmit: RPC: 10 xmit complete
rpcbind-1829 [000] ..s. 48.616421: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616422: xprt_complete_rqst: RPC: 10 xid d406d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616433: xprt_release: RPC: 10 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616436: xprt_alloc_slot: RPC: 11 reserved req ffff8800db829600 xid d506d5e8
rpc.nfsd-4710 [002] .... 48.616437: xprt_prepare_transmit: RPC: 11 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616438: xprt_transmit: RPC: 11 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616442: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616442: xprt_transmit: RPC: 11 xmit complete
rpcbind-1829 [000] ..s. 48.616461: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616462: xprt_complete_rqst: RPC: 11 xid d506d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616473: xprt_release: RPC: 11 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616476: xprt_alloc_slot: RPC: 12 reserved req ffff8800db829600 xid d606d5e8
rpc.nfsd-4710 [002] .... 48.616477: xprt_prepare_transmit: RPC: 12 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616478: xprt_transmit: RPC: 12 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616482: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616482: xprt_transmit: RPC: 12 xmit complete
rpcbind-1829 [000] ..s. 48.616501: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616502: xprt_complete_rqst: RPC: 12 xid d606d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616511: xprt_release: RPC: 12 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616535: xprt_alloc_slot: RPC: 13 reserved req ffff8800db829600 xid d706d5e8
rpc.nfsd-4710 [002] .... 48.616536: xprt_prepare_transmit: RPC: 13 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616537: xprt_transmit: RPC: 13 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616541: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616541: xprt_transmit: RPC: 13 xmit complete
rpcbind-1829 [000] ..s. 48.616580: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616581: xprt_complete_rqst: RPC: 13 xid d706d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616589: xprt_release: RPC: 13 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616591: xprt_alloc_slot: RPC: 14 reserved req ffff8800db829600 xid d806d5e8
rpc.nfsd-4710 [002] .... 48.616591: xprt_prepare_transmit: RPC: 14 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616592: xprt_transmit: RPC: 14 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616594: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616595: xprt_transmit: RPC: 14 xmit complete
rpcbind-1829 [000] ..s. 48.616610: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616611: xprt_complete_rqst: RPC: 14 xid d806d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616618: xprt_release: RPC: 14 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616619: xprt_alloc_slot: RPC: 15 reserved req ffff8800db829600 xid d906d5e8
rpc.nfsd-4710 [002] .... 48.616620: xprt_prepare_transmit: RPC: 15 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616620: xprt_transmit: RPC: 15 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616623: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616623: xprt_transmit: RPC: 15 xmit complete
rpcbind-1829 [000] ..s. 48.616635: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616637: xprt_complete_rqst: RPC: 15 xid d906d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616643: xprt_release: RPC: 15 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616644: xprt_alloc_slot: RPC: 16 reserved req ffff8800db829600 xid da06d5e8
rpc.nfsd-4710 [002] .... 48.616645: xprt_prepare_transmit: RPC: 16 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616645: xprt_transmit: RPC: 16 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616648: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616648: xprt_transmit: RPC: 16 xmit complete
rpcbind-1829 [000] ..s. 48.616658: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616659: xprt_complete_rqst: RPC: 16 xid da06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616665: xprt_release: RPC: 16 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.616666: xprt_alloc_slot: RPC: 17 reserved req ffff8800db829600 xid db06d5e8
rpc.nfsd-4710 [002] .... 48.616667: xprt_prepare_transmit: RPC: 17 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.616667: xprt_transmit: RPC: 17 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.616670: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.616670: xprt_transmit: RPC: 17 xmit complete
rpcbind-1829 [000] ..s. 48.616680: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.616681: xprt_complete_rqst: RPC: 17 xid db06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.616687: xprt_release: RPC: 17 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617251: xprt_alloc_slot: RPC: 18 reserved req ffff8800db829600 xid dc06d5e8
rpc.nfsd-4710 [002] .... 48.617252: xprt_prepare_transmit: RPC: 18 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617252: xprt_transmit: RPC: 18 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617256: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617257: xprt_transmit: RPC: 18 xmit complete
rpcbind-1829 [000] ..s. 48.617265: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617265: xprt_complete_rqst: RPC: 18 xid dc06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617272: xprt_release: RPC: 18 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617274: xprt_alloc_slot: RPC: 19 reserved req ffff8800db829600 xid dd06d5e8
rpc.nfsd-4710 [002] .... 48.617274: xprt_prepare_transmit: RPC: 19 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617275: xprt_transmit: RPC: 19 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617277: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617277: xprt_transmit: RPC: 19 xmit complete
rpcbind-1829 [000] ..s. 48.617287: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617288: xprt_complete_rqst: RPC: 19 xid dd06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617293: xprt_release: RPC: 19 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617295: xprt_alloc_slot: RPC: 20 reserved req ffff8800db829600 xid de06d5e8
rpc.nfsd-4710 [002] .... 48.617295: xprt_prepare_transmit: RPC: 20 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617296: xprt_transmit: RPC: 20 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617298: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617298: xprt_transmit: RPC: 20 xmit complete
rpcbind-1829 [000] ..s. 48.617307: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617308: xprt_complete_rqst: RPC: 20 xid de06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617314: xprt_release: RPC: 20 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617315: xprt_alloc_slot: RPC: 21 reserved req ffff8800db829600 xid df06d5e8
rpc.nfsd-4710 [002] .... 48.617316: xprt_prepare_transmit: RPC: 21 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617316: xprt_transmit: RPC: 21 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617318: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617319: xprt_transmit: RPC: 21 xmit complete
rpcbind-1829 [000] ..s. 48.617328: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617329: xprt_complete_rqst: RPC: 21 xid df06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617334: xprt_release: RPC: 21 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617336: xprt_alloc_slot: RPC: 22 reserved req ffff8800db829600 xid e006d5e8
rpc.nfsd-4710 [002] .... 48.617336: xprt_prepare_transmit: RPC: 22 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617336: xprt_transmit: RPC: 22 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617339: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617339: xprt_transmit: RPC: 22 xmit complete
rpcbind-1829 [000] ..s. 48.617348: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617349: xprt_complete_rqst: RPC: 22 xid e006d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617354: xprt_release: RPC: 22 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617370: xprt_alloc_slot: RPC: 23 reserved req ffff8800db829600 xid e106d5e8
rpc.nfsd-4710 [002] .... 48.617371: xprt_prepare_transmit: RPC: 23 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617371: xprt_transmit: RPC: 23 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617374: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617374: xprt_transmit: RPC: 23 xmit complete
rpcbind-1829 [000] ..s. 48.617382: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617383: xprt_complete_rqst: RPC: 23 xid e106d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617389: xprt_release: RPC: 23 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617390: xprt_alloc_slot: RPC: 24 reserved req ffff8800db829600 xid e206d5e8
rpc.nfsd-4710 [002] .... 48.617391: xprt_prepare_transmit: RPC: 24 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617391: xprt_transmit: RPC: 24 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617394: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617394: xprt_transmit: RPC: 24 xmit complete
rpcbind-1829 [000] ..s. 48.617403: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617404: xprt_complete_rqst: RPC: 24 xid e206d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617410: xprt_release: RPC: 24 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617411: xprt_alloc_slot: RPC: 25 reserved req ffff8800db829600 xid e306d5e8
rpc.nfsd-4710 [002] .... 48.617412: xprt_prepare_transmit: RPC: 25 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617412: xprt_transmit: RPC: 25 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617414: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617414: xprt_transmit: RPC: 25 xmit complete
rpcbind-1829 [000] ..s. 48.617424: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617424: xprt_complete_rqst: RPC: 25 xid e306d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617430: xprt_release: RPC: 25 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617431: xprt_alloc_slot: RPC: 26 reserved req ffff8800db829600 xid e406d5e8
rpc.nfsd-4710 [002] .... 48.617432: xprt_prepare_transmit: RPC: 26 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617432: xprt_transmit: RPC: 26 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617434: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617434: xprt_transmit: RPC: 26 xmit complete
rpcbind-1829 [000] ..s. 48.617444: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617444: xprt_complete_rqst: RPC: 26 xid e406d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617450: xprt_release: RPC: 26 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617451: xprt_alloc_slot: RPC: 27 reserved req ffff8800db829600 xid e506d5e8
rpc.nfsd-4710 [002] .... 48.617452: xprt_prepare_transmit: RPC: 27 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617452: xprt_transmit: RPC: 27 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617454: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617455: xprt_transmit: RPC: 27 xmit complete
rpcbind-1829 [000] ..s. 48.617464: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617464: xprt_complete_rqst: RPC: 27 xid e506d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617470: xprt_release: RPC: 27 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617575: xprt_alloc_slot: RPC: 28 reserved req ffff8800db829600 xid e606d5e8
rpc.nfsd-4710 [002] .... 48.617576: xprt_prepare_transmit: RPC: 28 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617577: xprt_transmit: RPC: 28 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.617580: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.617580: xprt_transmit: RPC: 28 xmit complete
rpcbind-1829 [000] ..s. 48.617590: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617591: xprt_complete_rqst: RPC: 28 xid e606d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617598: xprt_release: RPC: 28 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617599: xprt_alloc_slot: RPC: 29 reserved req ffff8800db829600 xid e706d5e8
rpc.nfsd-4710 [002] .... 48.617599: xprt_prepare_transmit: RPC: 29 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617600: xprt_transmit: RPC: 29 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.617602: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.617602: xprt_transmit: RPC: 29 xmit complete
rpcbind-1829 [000] ..s. 48.617614: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617615: xprt_complete_rqst: RPC: 29 xid e706d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617621: xprt_release: RPC: 29 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617622: xprt_alloc_slot: RPC: 30 reserved req ffff8800db829600 xid e806d5e8
rpc.nfsd-4710 [002] .... 48.617622: xprt_prepare_transmit: RPC: 30 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617623: xprt_transmit: RPC: 30 xprt_transmit(68)
rpc.nfsd-4710 [002] .... 48.617625: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-4710 [002] .... 48.617625: xprt_transmit: RPC: 30 xmit complete
rpcbind-1829 [000] ..s. 48.617634: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617635: xprt_complete_rqst: RPC: 30 xid e806d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617640: xprt_release: RPC: 30 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617647: xprt_alloc_slot: RPC: 31 reserved req ffff8800db829600 xid e906d5e8
rpc.nfsd-4710 [002] .... 48.617647: xprt_prepare_transmit: RPC: 31 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617648: xprt_transmit: RPC: 31 xprt_transmit(88)
rpc.nfsd-4710 [002] .... 48.617650: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-4710 [002] .... 48.617650: xprt_transmit: RPC: 31 xmit complete
rpcbind-1829 [000] ..s. 48.617659: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617660: xprt_complete_rqst: RPC: 31 xid e906d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617666: xprt_release: RPC: 31 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617668: xprt_alloc_slot: RPC: 32 reserved req ffff8800db829600 xid ea06d5e8
rpc.nfsd-4710 [002] .... 48.617668: xprt_prepare_transmit: RPC: 32 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617669: xprt_transmit: RPC: 32 xprt_transmit(88)
rpc.nfsd-4710 [002] .... 48.617671: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-4710 [002] .... 48.617671: xprt_transmit: RPC: 32 xmit complete
rpcbind-1829 [000] ..s. 48.617681: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617681: xprt_complete_rqst: RPC: 32 xid ea06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617687: xprt_release: RPC: 32 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617688: xprt_alloc_slot: RPC: 33 reserved req ffff8800db829600 xid eb06d5e8
rpc.nfsd-4710 [002] .... 48.617689: xprt_prepare_transmit: RPC: 33 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617689: xprt_transmit: RPC: 33 xprt_transmit(88)
rpc.nfsd-4710 [002] .... 48.617692: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-4710 [002] .... 48.617692: xprt_transmit: RPC: 33 xmit complete
rpcbind-1829 [000] ..s. 48.617701: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617702: xprt_complete_rqst: RPC: 33 xid eb06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617707: xprt_release: RPC: 33 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617716: xprt_alloc_slot: RPC: 34 reserved req ffff8800db829600 xid ec06d5e8
rpc.nfsd-4710 [002] .... 48.617716: xprt_prepare_transmit: RPC: 34 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617717: xprt_transmit: RPC: 34 xprt_transmit(88)
rpc.nfsd-4710 [002] .... 48.617719: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-4710 [002] .... 48.617719: xprt_transmit: RPC: 34 xmit complete
rpcbind-1829 [000] ..s. 48.617728: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617729: xprt_complete_rqst: RPC: 34 xid ec06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617735: xprt_release: RPC: 34 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617737: xprt_alloc_slot: RPC: 35 reserved req ffff8800db829600 xid ed06d5e8
rpc.nfsd-4710 [002] .... 48.617737: xprt_prepare_transmit: RPC: 35 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617738: xprt_transmit: RPC: 35 xprt_transmit(88)
rpc.nfsd-4710 [002] .... 48.617740: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-4710 [002] .... 48.617740: xprt_transmit: RPC: 35 xmit complete
rpcbind-1829 [000] ..s. 48.617749: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617750: xprt_complete_rqst: RPC: 35 xid ed06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617756: xprt_release: RPC: 35 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617757: xprt_alloc_slot: RPC: 36 reserved req ffff8800db829600 xid ee06d5e8
rpc.nfsd-4710 [002] .... 48.617758: xprt_prepare_transmit: RPC: 36 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617758: xprt_transmit: RPC: 36 xprt_transmit(88)
rpc.nfsd-4710 [002] .... 48.617760: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-4710 [002] .... 48.617760: xprt_transmit: RPC: 36 xmit complete
rpcbind-1829 [000] ..s. 48.617770: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617770: xprt_complete_rqst: RPC: 36 xid ee06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617776: xprt_release: RPC: 36 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617782: xprt_alloc_slot: RPC: 37 reserved req ffff8800db829600 xid ef06d5e8
rpc.nfsd-4710 [002] .... 48.617782: xprt_prepare_transmit: RPC: 37 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617783: xprt_transmit: RPC: 37 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.617785: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.617785: xprt_transmit: RPC: 37 xmit complete
rpcbind-1829 [000] ..s. 48.617794: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617795: xprt_complete_rqst: RPC: 37 xid ef06d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617800: xprt_release: RPC: 37 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617802: xprt_alloc_slot: RPC: 38 reserved req ffff8800db829600 xid f006d5e8
rpc.nfsd-4710 [002] .... 48.617802: xprt_prepare_transmit: RPC: 38 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617803: xprt_transmit: RPC: 38 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.617805: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.617805: xprt_transmit: RPC: 38 xmit complete
rpcbind-1829 [000] ..s. 48.617814: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617815: xprt_complete_rqst: RPC: 38 xid f006d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617821: xprt_release: RPC: 38 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617822: xprt_alloc_slot: RPC: 39 reserved req ffff8800db829600 xid f106d5e8
rpc.nfsd-4710 [002] .... 48.617822: xprt_prepare_transmit: RPC: 39 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617823: xprt_transmit: RPC: 39 xprt_transmit(84)
rpc.nfsd-4710 [002] .... 48.617825: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-4710 [002] .... 48.617825: xprt_transmit: RPC: 39 xmit complete
rpcbind-1829 [000] ..s. 48.617834: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617835: xprt_complete_rqst: RPC: 39 xid f106d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617840: xprt_release: RPC: 39 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617848: xprt_alloc_slot: RPC: 40 reserved req ffff8800db829600 xid f206d5e8
rpc.nfsd-4710 [002] .... 48.617849: xprt_prepare_transmit: RPC: 40 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617849: xprt_transmit: RPC: 40 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617851: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617851: xprt_transmit: RPC: 40 xmit complete
rpcbind-1829 [000] ..s. 48.617860: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617861: xprt_complete_rqst: RPC: 40 xid f206d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617867: xprt_release: RPC: 40 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617868: xprt_alloc_slot: RPC: 41 reserved req ffff8800db829600 xid f306d5e8
rpc.nfsd-4710 [002] .... 48.617869: xprt_prepare_transmit: RPC: 41 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617869: xprt_transmit: RPC: 41 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617871: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617871: xprt_transmit: RPC: 41 xmit complete
rpcbind-1829 [000] ..s. 48.617881: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617881: xprt_complete_rqst: RPC: 41 xid f306d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617887: xprt_release: RPC: 41 release request ffff8800db829600
rpc.nfsd-4710 [002] .... 48.617888: xprt_alloc_slot: RPC: 42 reserved req ffff8800db829600 xid f406d5e8
rpc.nfsd-4710 [002] .... 48.617889: xprt_prepare_transmit: RPC: 42 xprt_prepare_transmit
rpc.nfsd-4710 [002] .... 48.617889: xprt_transmit: RPC: 42 xprt_transmit(80)
rpc.nfsd-4710 [002] .... 48.617891: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-4710 [002] .... 48.617891: xprt_transmit: RPC: 42 xmit complete
rpcbind-1829 [000] ..s. 48.617901: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [000] ..s. 48.617902: xprt_complete_rqst: RPC: 42 xid f406d5e8 complete (28 bytes received)
rpc.nfsd-4710 [002] .... 48.617907: xprt_release: RPC: 42 release request ffff8800db829600
<idle>-0 [003] ..s. 57.765235: inet_bind_hash: add 2049
<idle>-0 [003] ..s. 57.765278: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> csum_block_add_ext
=> __skb_gro_checksum_complete
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
kworker/u32:7-118 [000] .... 57.767716: xs_setup_tcp: RPC: set up xprt to 192.168.23.22 (port 55201) via tcp
kworker/u32:7-118 [000] .... 57.767726: xprt_create_transport: RPC: created transport ffff88040b251000 with 65536 slots
kworker/0:1H-128 [000] .... 57.767758: xprt_alloc_slot: RPC: 43 reserved req ffff8804033c3800 xid f4185658
kworker/0:1H-128 [000] .... 57.767764: xprt_connect: RPC: 43 xprt_connect xprt ffff88040b251000 is not connected
kworker/0:1H-128 [000] .... 57.767767: xs_connect: RPC: xs_connect scheduled xprt ffff88040b251000
kworker/0:1H-128 [000] ..s. 57.767780: inet_csk_get_port: snum 684
kworker/0:1H-128 [000] ..s. 57.767792: <stack trace>
=> inet_addr_type
=> inet_bind
=> xs_bind
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.18
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> kthread
=> ret_from_fork
=> kthread
kworker/0:1H-128 [000] ..s. 57.767793: inet_bind_hash: add 684
kworker/0:1H-128 [000] ..s. 57.767801: <stack trace>
=> inet_csk_get_port
=> inet_addr_type
=> inet_bind
=> xs_bind
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.18
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> kthread
=> ret_from_fork
=> kthread
kworker/0:1H-128 [000] .... 57.767803: xs_bind: RPC: xs_bind 4.136.255.255:684: ok (0)
kworker/0:1H-128 [000] .... 57.767805: xs_tcp_setup_socket: RPC: worker connecting xprt ffff88040b251000 via tcp to 192.168.23.22 (port 55201)
kworker/0:1H-128 [000] .... 57.767841: xs_tcp_setup_socket: RPC: ffff88040b251000 connect status 115 connected 0 sock state 2
<idle>-0 [003] ..s. 57.768178: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff88040b251000...
<idle>-0 [003] ..s. 57.768180: xs_tcp_state_change: RPC: state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
kworker/3:1H-127 [003] .... 57.768216: xprt_connect_status: RPC: 43 xprt_connect_status: retrying
kworker/3:1H-127 [003] .... 57.768218: xprt_prepare_transmit: RPC: 43 xprt_prepare_transmit
kworker/3:1H-127 [003] .... 57.768229: xprt_transmit: RPC: 43 xprt_transmit(72)
kworker/3:1H-127 [003] .... 57.768245: xs_tcp_send_request: RPC: xs_tcp_send_request(72) = 0
kworker/3:1H-127 [003] .... 57.768246: xprt_transmit: RPC: 43 xmit complete
<idle>-0 [003] ..s. 57.768621: xs_tcp_data_ready: RPC: xs_tcp_data_ready...
<idle>-0 [003] ..s. 57.768622: xs_tcp_data_recv: RPC: xs_tcp_data_recv started
<idle>-0 [003] ..s. 57.768624: xs_tcp_data_recv: RPC: reading TCP record fragment of length 24
<idle>-0 [003] ..s. 57.768625: xs_tcp_data_recv: RPC: reading XID (4 bytes)
<idle>-0 [003] ..s. 57.768626: xs_tcp_data_recv: RPC: reading request with XID f4185658
<idle>-0 [003] ..s. 57.768627: xs_tcp_data_recv: RPC: reading CALL/REPLY flag (4 bytes)
<idle>-0 [003] ..s. 57.768628: xs_tcp_data_recv: RPC: read reply XID f4185658
<idle>-0 [003] ..s. 57.768630: xs_tcp_data_recv: RPC: XID f4185658 read 16 bytes
<idle>-0 [003] ..s. 57.768631: xs_tcp_data_recv: RPC: xprt = ffff88040b251000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
<idle>-0 [003] ..s. 57.768632: xprt_complete_rqst: RPC: 43 xid f4185658 complete (24 bytes received)
<idle>-0 [003] .Ns. 57.768637: xs_tcp_data_recv: RPC: xs_tcp_data_recv done
kworker/3:1H-127 [003] .... 57.768656: xprt_release: RPC: 43 release request ffff8804033c3800
<idle>-0 [003] ..s. 96.518571: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 96.518612: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 138.868936: inet_bind_hash: add 22
<idle>-0 [003] ..s. 138.868978: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 144.103035: inet_bind_hash: add 22
<idle>-0 [003] ..s. 144.103078: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 174.758123: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 174.758151: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> try_to_wake_up
=> ipt_do_table
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 216.551651: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 216.551689: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
kworker/3:1H-127 [003] ..s. 358.800834: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff88040b251000...
kworker/3:1H-127 [003] ..s. 358.800837: xs_tcp_state_change: RPC: state 4 conn 1 dead 0 zapped 1 sk_shutdown 3
<idle>-0 [003] ..s. 358.801180: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff88040b251000...
<idle>-0 [003] ..s. 358.801182: xs_tcp_state_change: RPC: state 5 conn 0 dead 0 zapped 1 sk_shutdown 3
<idle>-0 [003] ..s. 358.801204: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff88040b251000...
<idle>-0 [003] ..s. 358.801205: xs_tcp_state_change: RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
<idle>-0 [003] ..s. 358.801206: xprt_disconnect_done: RPC: disconnected transport ffff88040b251000
<idle>-0 [003] ..s. 358.801207: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff88040b251000...
<idle>-0 [003] ..s. 358.801208: xs_tcp_state_change: RPC: state 7 conn 0 dead 0 zapped 1 sk_shutdown 3
<idle>-0 [003] ..s. 358.801208: xprt_disconnect_done: RPC: disconnected transport ffff88040b251000
<idle>-0 [003] ..s. 358.801209: xs_tcp_data_ready: RPC: xs_tcp_data_ready...
<idle>-0 [003] ..s. 476.855136: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 476.855172: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
-- Steve
On Fri, 2015-06-12 at 10:57 -0400, Trond Myklebust wrote:
> On Fri, Jun 12, 2015 at 10:40 AM, Eric Dumazet <[email protected]> wrote:
> > Strange, because the usual way to not have time-wait is to use SO_LINGER
> > with linger=0
> >
> > And apparently xs_tcp_finish_connecting() has this :
> >
> > sock_reset_flag(sk, SOCK_LINGER);
> > tcp_sk(sk)->linger2 = 0;
>
> Are you sure? I thought that SO_LINGER is more about controlling how
> the socket behaves w.r.t. waiting for the TCP_CLOSE state to be
> achieved (i.e. about aborting the FIN state negotiation early). I've
> never observed an effect on the TCP time-wait states.
This is definitely the standard way to avoid time-wait states.
Maybe not very well documented. We probably should...
http://stackoverflow.com/questions/3757289/tcp-option-so-linger-zero-when-its-required
> Yes. SO_REUSEADDR has the problem that it requires you bind to
> something other than 0.0.0.0, so it is less appropriate for outgoing
> connections; the RPC code really should not have to worry about
> routing and routability of a particular source address.
OK, understood.
Are you trying to reuse the same 4-tuple?
On Fri, 12 Jun 2015 11:34:20 -0400
Steven Rostedt <[email protected]> wrote:
> On Fri, 12 Jun 2015 07:40:35 -0700
> Eric Dumazet <[email protected]> wrote:
>
> > Strange, because the usual way to not have time-wait is to use SO_LINGER
> > with linger=0
> >
> > And apparently xs_tcp_finish_connecting() has this :
> >
> > sock_reset_flag(sk, SOCK_LINGER);
> > tcp_sk(sk)->linger2 = 0;
> >
> > Are you sure SO_REUSEADDR was not the thing you wanted ?
> >
> > Steven, have you tried kmemleak ?
>
> Nope, and again, I'm hesitant on adding too much debug. This is my main
> server (build server, ssh server, web server, mail server, proxy
> server, irc server, etc).
>
> Although, I made dprintk() into trace_printk() in xprtsock.c and
> xprt.c, and reran it. Here's the output:
>
I reverted the following commits:
c627d31ba0696cbd829437af2be2f2dee3546b1e
9e2b9f37760e129cee053cc7b6e7288acc2a7134
caf4ccd4e88cf2795c927834bc488c8321437586
And the issue goes away. That is, I watched the port go from
ESTABLISHED to TIME_WAIT and then disappear, and there's no hidden port.
In fact, I watched the port with my portlist.c module, and it
disappeared there too when it entered the TIME_WAIT state.
Here's the trace of that run:
# tracer: nop
#
# entries-in-buffer/entries-written: 397/397 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
rpc.nfsd-3932 [002] .... 44.098689: xs_local_setup_socket: RPC: worker connecting xprt ffff88040b6f5800 via AF_LOCAL to /var/run/rpcbind.sock
rpc.nfsd-3932 [002] .... 44.098699: xs_local_setup_socket: RPC: xprt ffff88040b6f5800 connected to /var/run/rpcbind.sock
rpc.nfsd-3932 [002] .... 44.098700: xs_setup_local: RPC: set up xprt to /var/run/rpcbind.sock via AF_LOCAL
rpc.nfsd-3932 [002] .... 44.098704: xprt_create_transport: RPC: created transport ffff88040b6f5800 with 65536 slots
rpc.nfsd-3932 [002] .... 44.098717: xprt_alloc_slot: RPC: 1 reserved req ffff8800d8cc6800 xid 0850084b
rpc.nfsd-3932 [002] .... 44.098720: xprt_prepare_transmit: RPC: 1 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.098721: xprt_transmit: RPC: 1 xprt_transmit(44)
rpc.nfsd-3932 [002] .... 44.098724: xs_local_send_request: RPC: xs_local_send_request(44) = 0
rpc.nfsd-3932 [002] .... 44.098724: xprt_transmit: RPC: 1 xmit complete
rpcbind-1829 [001] ..s. 44.098812: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.098815: xprt_complete_rqst: RPC: 1 xid 0850084b complete (24 bytes received)
rpc.nfsd-3932 [002] .... 44.098854: xprt_release: RPC: 1 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.098864: xprt_alloc_slot: RPC: 2 reserved req ffff8800d8cc6800 xid 0950084b
rpc.nfsd-3932 [002] .... 44.098865: xprt_prepare_transmit: RPC: 2 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.098865: xprt_transmit: RPC: 2 xprt_transmit(44)
rpc.nfsd-3932 [002] .... 44.098870: xs_local_send_request: RPC: xs_local_send_request(44) = 0
rpc.nfsd-3932 [002] .... 44.098870: xprt_transmit: RPC: 2 xmit complete
rpcbind-1829 [001] ..s. 44.098915: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.098917: xprt_complete_rqst: RPC: 2 xid 0950084b complete (24 bytes received)
rpc.nfsd-3932 [002] .... 44.098968: xprt_release: RPC: 2 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.098971: xprt_alloc_slot: RPC: 3 reserved req ffff8800d8cc6800 xid 0a50084b
rpc.nfsd-3932 [002] .... 44.098972: xprt_prepare_transmit: RPC: 3 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.098973: xprt_transmit: RPC: 3 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.098978: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.098978: xprt_transmit: RPC: 3 xmit complete
rpcbind-1829 [001] ..s. 44.099029: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099031: xprt_complete_rqst: RPC: 3 xid 0a50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099083: xprt_release: RPC: 3 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099086: xprt_alloc_slot: RPC: 4 reserved req ffff8800d8cc6800 xid 0b50084b
rpc.nfsd-3932 [002] .... 44.099086: xprt_prepare_transmit: RPC: 4 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099087: xprt_transmit: RPC: 4 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.099091: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.099092: xprt_transmit: RPC: 4 xmit complete
rpcbind-1829 [001] ..s. 44.099145: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099147: xprt_complete_rqst: RPC: 4 xid 0b50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099155: xprt_release: RPC: 4 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099157: xprt_alloc_slot: RPC: 5 reserved req ffff8800d8cc6800 xid 0c50084b
rpc.nfsd-3932 [002] .... 44.099157: xprt_prepare_transmit: RPC: 5 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099158: xprt_transmit: RPC: 5 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.099161: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.099162: xprt_transmit: RPC: 5 xmit complete
rpcbind-1829 [001] ..s. 44.099172: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099173: xprt_complete_rqst: RPC: 5 xid 0c50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099180: xprt_release: RPC: 5 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099181: xprt_alloc_slot: RPC: 6 reserved req ffff8800d8cc6800 xid 0d50084b
rpc.nfsd-3932 [002] .... 44.099181: xprt_prepare_transmit: RPC: 6 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099182: xprt_transmit: RPC: 6 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.099184: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.099184: xprt_transmit: RPC: 6 xmit complete
rpcbind-1829 [001] ..s. 44.099204: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099205: xprt_complete_rqst: RPC: 6 xid 0d50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099212: xprt_release: RPC: 6 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099213: xprt_alloc_slot: RPC: 7 reserved req ffff8800d8cc6800 xid 0e50084b
rpc.nfsd-3932 [002] .... 44.099214: xprt_prepare_transmit: RPC: 7 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099214: xprt_transmit: RPC: 7 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.099217: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.099217: xprt_transmit: RPC: 7 xmit complete
rpcbind-1829 [001] ..s. 44.099228: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099229: xprt_complete_rqst: RPC: 7 xid 0e50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099236: xprt_release: RPC: 7 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099241: xprt_alloc_slot: RPC: 8 reserved req ffff8800d8cc6800 xid 0f50084b
rpc.nfsd-3932 [002] .... 44.099241: xprt_prepare_transmit: RPC: 8 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099242: xprt_transmit: RPC: 8 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099244: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099244: xprt_transmit: RPC: 8 xmit complete
rpcbind-1829 [001] ..s. 44.099261: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099263: xprt_complete_rqst: RPC: 8 xid 0f50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099270: xprt_release: RPC: 8 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099271: xprt_alloc_slot: RPC: 9 reserved req ffff8800d8cc6800 xid 1050084b
rpc.nfsd-3932 [002] .... 44.099272: xprt_prepare_transmit: RPC: 9 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099272: xprt_transmit: RPC: 9 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099275: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099275: xprt_transmit: RPC: 9 xmit complete
rpcbind-1829 [001] ..s. 44.099290: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099291: xprt_complete_rqst: RPC: 9 xid 1050084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099298: xprt_release: RPC: 9 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099300: xprt_alloc_slot: RPC: 10 reserved req ffff8800d8cc6800 xid 1150084b
rpc.nfsd-3932 [002] .... 44.099301: xprt_prepare_transmit: RPC: 10 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099301: xprt_transmit: RPC: 10 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099303: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099304: xprt_transmit: RPC: 10 xmit complete
rpcbind-1829 [001] ..s. 44.099318: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099320: xprt_complete_rqst: RPC: 10 xid 1150084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099327: xprt_release: RPC: 10 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099329: xprt_alloc_slot: RPC: 11 reserved req ffff8800d8cc6800 xid 1250084b
rpc.nfsd-3932 [002] .... 44.099329: xprt_prepare_transmit: RPC: 11 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099330: xprt_transmit: RPC: 11 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099332: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099332: xprt_transmit: RPC: 11 xmit complete
rpcbind-1829 [001] ..s. 44.099344: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099345: xprt_complete_rqst: RPC: 11 xid 1250084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099352: xprt_release: RPC: 11 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099354: xprt_alloc_slot: RPC: 12 reserved req ffff8800d8cc6800 xid 1350084b
rpc.nfsd-3932 [002] .... 44.099354: xprt_prepare_transmit: RPC: 12 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099355: xprt_transmit: RPC: 12 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099357: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099357: xprt_transmit: RPC: 12 xmit complete
rpcbind-1829 [001] ..s. 44.099368: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099369: xprt_complete_rqst: RPC: 12 xid 1350084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099376: xprt_release: RPC: 12 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099394: xprt_alloc_slot: RPC: 13 reserved req ffff8800d8cc6800 xid 1450084b
rpc.nfsd-3932 [002] .... 44.099395: xprt_prepare_transmit: RPC: 13 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099395: xprt_transmit: RPC: 13 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099399: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099399: xprt_transmit: RPC: 13 xmit complete
rpcbind-1829 [001] ..s. 44.099405: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099406: xprt_complete_rqst: RPC: 13 xid 1450084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099413: xprt_release: RPC: 13 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099414: xprt_alloc_slot: RPC: 14 reserved req ffff8800d8cc6800 xid 1550084b
rpc.nfsd-3932 [002] .... 44.099415: xprt_prepare_transmit: RPC: 14 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099415: xprt_transmit: RPC: 14 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099418: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099418: xprt_transmit: RPC: 14 xmit complete
rpcbind-1829 [001] ..s. 44.099424: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099425: xprt_complete_rqst: RPC: 14 xid 1550084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099431: xprt_release: RPC: 14 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099433: xprt_alloc_slot: RPC: 15 reserved req ffff8800d8cc6800 xid 1650084b
rpc.nfsd-3932 [002] .... 44.099433: xprt_prepare_transmit: RPC: 15 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099434: xprt_transmit: RPC: 15 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099436: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099436: xprt_transmit: RPC: 15 xmit complete
rpcbind-1829 [001] ..s. 44.099443: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099443: xprt_complete_rqst: RPC: 15 xid 1650084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099449: xprt_release: RPC: 15 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099451: xprt_alloc_slot: RPC: 16 reserved req ffff8800d8cc6800 xid 1750084b
rpc.nfsd-3932 [002] .... 44.099451: xprt_prepare_transmit: RPC: 16 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099452: xprt_transmit: RPC: 16 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099454: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099455: xprt_transmit: RPC: 16 xmit complete
rpcbind-1829 [001] ..s. 44.099461: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099461: xprt_complete_rqst: RPC: 16 xid 1750084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099467: xprt_release: RPC: 16 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.099469: xprt_alloc_slot: RPC: 17 reserved req ffff8800d8cc6800 xid 1850084b
rpc.nfsd-3932 [002] .... 44.099469: xprt_prepare_transmit: RPC: 17 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.099470: xprt_transmit: RPC: 17 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.099472: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.099472: xprt_transmit: RPC: 17 xmit complete
rpcbind-1829 [001] ..s. 44.099479: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.099479: xprt_complete_rqst: RPC: 17 xid 1850084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.099485: xprt_release: RPC: 17 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100111: xprt_alloc_slot: RPC: 18 reserved req ffff8800d8cc6800 xid 1950084b
rpc.nfsd-3932 [002] .... 44.100112: xprt_prepare_transmit: RPC: 18 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100113: xprt_transmit: RPC: 18 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100118: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100118: xprt_transmit: RPC: 18 xmit complete
rpcbind-1829 [001] ..s. 44.100124: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100125: xprt_complete_rqst: RPC: 18 xid 1950084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100132: xprt_release: RPC: 18 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100134: xprt_alloc_slot: RPC: 19 reserved req ffff8800d8cc6800 xid 1a50084b
rpc.nfsd-3932 [002] .... 44.100135: xprt_prepare_transmit: RPC: 19 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100135: xprt_transmit: RPC: 19 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100138: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100138: xprt_transmit: RPC: 19 xmit complete
rpcbind-1829 [001] ..s. 44.100144: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100145: xprt_complete_rqst: RPC: 19 xid 1a50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100151: xprt_release: RPC: 19 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100153: xprt_alloc_slot: RPC: 20 reserved req ffff8800d8cc6800 xid 1b50084b
rpc.nfsd-3932 [002] .... 44.100153: xprt_prepare_transmit: RPC: 20 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100154: xprt_transmit: RPC: 20 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100156: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100156: xprt_transmit: RPC: 20 xmit complete
rpcbind-1829 [001] ..s. 44.100162: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100163: xprt_complete_rqst: RPC: 20 xid 1b50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100169: xprt_release: RPC: 20 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100171: xprt_alloc_slot: RPC: 21 reserved req ffff8800d8cc6800 xid 1c50084b
rpc.nfsd-3932 [002] .... 44.100171: xprt_prepare_transmit: RPC: 21 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100172: xprt_transmit: RPC: 21 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100174: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100174: xprt_transmit: RPC: 21 xmit complete
rpcbind-1829 [001] ..s. 44.100180: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100181: xprt_complete_rqst: RPC: 21 xid 1c50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100187: xprt_release: RPC: 21 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100189: xprt_alloc_slot: RPC: 22 reserved req ffff8800d8cc6800 xid 1d50084b
rpc.nfsd-3932 [002] .... 44.100189: xprt_prepare_transmit: RPC: 22 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100190: xprt_transmit: RPC: 22 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100192: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100192: xprt_transmit: RPC: 22 xmit complete
rpcbind-1829 [001] ..s. 44.100198: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100199: xprt_complete_rqst: RPC: 22 xid 1d50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100205: xprt_release: RPC: 22 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100223: xprt_alloc_slot: RPC: 23 reserved req ffff8800d8cc6800 xid 1e50084b
rpc.nfsd-3932 [002] .... 44.100223: xprt_prepare_transmit: RPC: 23 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100224: xprt_transmit: RPC: 23 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100227: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100227: xprt_transmit: RPC: 23 xmit complete
rpcbind-1829 [001] ..s. 44.100233: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100234: xprt_complete_rqst: RPC: 23 xid 1e50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100241: xprt_release: RPC: 23 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100242: xprt_alloc_slot: RPC: 24 reserved req ffff8800d8cc6800 xid 1f50084b
rpc.nfsd-3932 [002] .... 44.100243: xprt_prepare_transmit: RPC: 24 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100243: xprt_transmit: RPC: 24 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100246: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100246: xprt_transmit: RPC: 24 xmit complete
rpcbind-1829 [001] ..s. 44.100252: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100253: xprt_complete_rqst: RPC: 24 xid 1f50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100259: xprt_release: RPC: 24 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100260: xprt_alloc_slot: RPC: 25 reserved req ffff8800d8cc6800 xid 2050084b
rpc.nfsd-3932 [002] .... 44.100261: xprt_prepare_transmit: RPC: 25 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100261: xprt_transmit: RPC: 25 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100263: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100264: xprt_transmit: RPC: 25 xmit complete
rpcbind-1829 [001] ..s. 44.100270: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100270: xprt_complete_rqst: RPC: 25 xid 2050084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100276: xprt_release: RPC: 25 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100278: xprt_alloc_slot: RPC: 26 reserved req ffff8800d8cc6800 xid 2150084b
rpc.nfsd-3932 [002] .... 44.100278: xprt_prepare_transmit: RPC: 26 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100279: xprt_transmit: RPC: 26 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100281: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100281: xprt_transmit: RPC: 26 xmit complete
rpcbind-1829 [001] ..s. 44.100287: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100288: xprt_complete_rqst: RPC: 26 xid 2150084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100294: xprt_release: RPC: 26 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100296: xprt_alloc_slot: RPC: 27 reserved req ffff8800d8cc6800 xid 2250084b
rpc.nfsd-3932 [002] .... 44.100296: xprt_prepare_transmit: RPC: 27 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100297: xprt_transmit: RPC: 27 xprt_transmit(80)
rpc.nfsd-3932 [002] .... 44.100299: xs_local_send_request: RPC: xs_local_send_request(80) = 0
rpc.nfsd-3932 [002] .... 44.100299: xprt_transmit: RPC: 27 xmit complete
rpcbind-1829 [001] ..s. 44.100305: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100306: xprt_complete_rqst: RPC: 27 xid 2250084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100312: xprt_release: RPC: 27 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100403: xprt_alloc_slot: RPC: 28 reserved req ffff8800d8cc6800 xid 2350084b
rpc.nfsd-3932 [002] .... 44.100404: xprt_prepare_transmit: RPC: 28 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100405: xprt_transmit: RPC: 28 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.100409: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.100409: xprt_transmit: RPC: 28 xmit complete
rpcbind-1829 [001] ..s. 44.100415: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100416: xprt_complete_rqst: RPC: 28 xid 2350084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100422: xprt_release: RPC: 28 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100423: xprt_alloc_slot: RPC: 29 reserved req ffff8800d8cc6800 xid 2450084b
rpc.nfsd-3932 [002] .... 44.100424: xprt_prepare_transmit: RPC: 29 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100424: xprt_transmit: RPC: 29 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.100427: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.100427: xprt_transmit: RPC: 29 xmit complete
rpcbind-1829 [001] ..s. 44.100432: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100433: xprt_complete_rqst: RPC: 29 xid 2450084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100439: xprt_release: RPC: 29 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100440: xprt_alloc_slot: RPC: 30 reserved req ffff8800d8cc6800 xid 2550084b
rpc.nfsd-3932 [002] .... 44.100441: xprt_prepare_transmit: RPC: 30 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100441: xprt_transmit: RPC: 30 xprt_transmit(68)
rpc.nfsd-3932 [002] .... 44.100443: xs_local_send_request: RPC: xs_local_send_request(68) = 0
rpc.nfsd-3932 [002] .... 44.100444: xprt_transmit: RPC: 30 xmit complete
rpcbind-1829 [001] ..s. 44.100450: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100450: xprt_complete_rqst: RPC: 30 xid 2550084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100456: xprt_release: RPC: 30 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100463: xprt_alloc_slot: RPC: 31 reserved req ffff8800d8cc6800 xid 2650084b
rpc.nfsd-3932 [002] .... 44.100463: xprt_prepare_transmit: RPC: 31 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100464: xprt_transmit: RPC: 31 xprt_transmit(88)
rpc.nfsd-3932 [002] .... 44.100467: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-3932 [002] .... 44.100467: xprt_transmit: RPC: 31 xmit complete
rpcbind-1829 [001] ..s. 44.100473: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100474: xprt_complete_rqst: RPC: 31 xid 2650084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100480: xprt_release: RPC: 31 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100482: xprt_alloc_slot: RPC: 32 reserved req ffff8800d8cc6800 xid 2750084b
rpc.nfsd-3932 [002] .... 44.100482: xprt_prepare_transmit: RPC: 32 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100483: xprt_transmit: RPC: 32 xprt_transmit(88)
rpc.nfsd-3932 [002] .... 44.100485: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-3932 [002] .... 44.100485: xprt_transmit: RPC: 32 xmit complete
rpcbind-1829 [001] ..s. 44.100492: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100493: xprt_complete_rqst: RPC: 32 xid 2750084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100498: xprt_release: RPC: 32 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100500: xprt_alloc_slot: RPC: 33 reserved req ffff8800d8cc6800 xid 2850084b
rpc.nfsd-3932 [002] .... 44.100501: xprt_prepare_transmit: RPC: 33 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100501: xprt_transmit: RPC: 33 xprt_transmit(88)
rpc.nfsd-3932 [002] .... 44.100504: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-3932 [002] .... 44.100504: xprt_transmit: RPC: 33 xmit complete
rpcbind-1829 [001] ..s. 44.100510: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100511: xprt_complete_rqst: RPC: 33 xid 2850084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100517: xprt_release: RPC: 33 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100526: xprt_alloc_slot: RPC: 34 reserved req ffff8800d8cc6800 xid 2950084b
rpc.nfsd-3932 [002] .... 44.100527: xprt_prepare_transmit: RPC: 34 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100528: xprt_transmit: RPC: 34 xprt_transmit(88)
rpc.nfsd-3932 [002] .... 44.100530: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-3932 [002] .... 44.100531: xprt_transmit: RPC: 34 xmit complete
rpcbind-1829 [001] ..s. 44.100537: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100538: xprt_complete_rqst: RPC: 34 xid 2950084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100544: xprt_release: RPC: 34 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100546: xprt_alloc_slot: RPC: 35 reserved req ffff8800d8cc6800 xid 2a50084b
rpc.nfsd-3932 [002] .... 44.100546: xprt_prepare_transmit: RPC: 35 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100547: xprt_transmit: RPC: 35 xprt_transmit(88)
rpc.nfsd-3932 [002] .... 44.100549: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-3932 [002] .... 44.100549: xprt_transmit: RPC: 35 xmit complete
rpcbind-1829 [001] ..s. 44.100556: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100556: xprt_complete_rqst: RPC: 35 xid 2a50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100562: xprt_release: RPC: 35 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100564: xprt_alloc_slot: RPC: 36 reserved req ffff8800d8cc6800 xid 2b50084b
rpc.nfsd-3932 [002] .... 44.100565: xprt_prepare_transmit: RPC: 36 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100565: xprt_transmit: RPC: 36 xprt_transmit(88)
rpc.nfsd-3932 [002] .... 44.100567: xs_local_send_request: RPC: xs_local_send_request(88) = 0
rpc.nfsd-3932 [002] .... 44.100568: xprt_transmit: RPC: 36 xmit complete
rpcbind-1829 [001] ..s. 44.100574: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100575: xprt_complete_rqst: RPC: 36 xid 2b50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100581: xprt_release: RPC: 36 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100587: xprt_alloc_slot: RPC: 37 reserved req ffff8800d8cc6800 xid 2c50084b
rpc.nfsd-3932 [002] .... 44.100587: xprt_prepare_transmit: RPC: 37 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100588: xprt_transmit: RPC: 37 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.100590: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.100590: xprt_transmit: RPC: 37 xmit complete
rpcbind-1829 [001] ..s. 44.100597: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100597: xprt_complete_rqst: RPC: 37 xid 2c50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100604: xprt_release: RPC: 37 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100605: xprt_alloc_slot: RPC: 38 reserved req ffff8800d8cc6800 xid 2d50084b
rpc.nfsd-3932 [002] .... 44.100606: xprt_prepare_transmit: RPC: 38 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100606: xprt_transmit: RPC: 38 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.100608: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.100609: xprt_transmit: RPC: 38 xmit complete
rpcbind-1829 [001] ..s. 44.100615: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100616: xprt_complete_rqst: RPC: 38 xid 2d50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100622: xprt_release: RPC: 38 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100623: xprt_alloc_slot: RPC: 39 reserved req ffff8800d8cc6800 xid 2e50084b
rpc.nfsd-3932 [002] .... 44.100624: xprt_prepare_transmit: RPC: 39 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100624: xprt_transmit: RPC: 39 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.100626: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.100627: xprt_transmit: RPC: 39 xmit complete
rpcbind-1829 [001] ..s. 44.100633: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100634: xprt_complete_rqst: RPC: 39 xid 2e50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100640: xprt_release: RPC: 39 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100647: xprt_alloc_slot: RPC: 40 reserved req ffff8800d8cc6800 xid 2f50084b
rpc.nfsd-3932 [002] .... 44.100648: xprt_prepare_transmit: RPC: 40 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100648: xprt_transmit: RPC: 40 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.100651: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.100651: xprt_transmit: RPC: 40 xmit complete
rpcbind-1829 [001] ..s. 44.100657: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100658: xprt_complete_rqst: RPC: 40 xid 2f50084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100664: xprt_release: RPC: 40 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100666: xprt_alloc_slot: RPC: 41 reserved req ffff8800d8cc6800 xid 3050084b
rpc.nfsd-3932 [002] .... 44.100666: xprt_prepare_transmit: RPC: 41 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100667: xprt_transmit: RPC: 41 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.100669: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.100669: xprt_transmit: RPC: 41 xmit complete
rpcbind-1829 [001] ..s. 44.100675: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100676: xprt_complete_rqst: RPC: 41 xid 3050084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100682: xprt_release: RPC: 41 release request ffff8800d8cc6800
rpc.nfsd-3932 [002] .... 44.100684: xprt_alloc_slot: RPC: 42 reserved req ffff8800d8cc6800 xid 3150084b
rpc.nfsd-3932 [002] .... 44.100684: xprt_prepare_transmit: RPC: 42 xprt_prepare_transmit
rpc.nfsd-3932 [002] .... 44.100685: xprt_transmit: RPC: 42 xprt_transmit(84)
rpc.nfsd-3932 [002] .... 44.100687: xs_local_send_request: RPC: xs_local_send_request(84) = 0
rpc.nfsd-3932 [002] .... 44.100687: xprt_transmit: RPC: 42 xmit complete
rpcbind-1829 [001] ..s. 44.100693: xs_local_data_ready: RPC: xs_local_data_ready...
rpcbind-1829 [001] ..s. 44.100694: xprt_complete_rqst: RPC: 42 xid 3150084b complete (28 bytes received)
rpc.nfsd-3932 [002] .... 44.100700: xprt_release: RPC: 42 release request ffff8800d8cc6800
<idle>-0 [003] ..s. 52.302416: inet_bind_hash: add 22
<idle>-0 [003] ..s. 52.302456: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> delay_tsc
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> ack_ioapic_level
=> do_IRQ
=> net_rx_action
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
kworker/u32:2-105 [001] .... 77.750302: xs_setup_tcp: RPC: set up xprt to 192.168.23.22 (port 55201) via tcp
kworker/u32:2-105 [001] .... 77.750310: xprt_create_transport: RPC: created transport ffff8804082fb000 with 65536 slots
kworker/1:1H-133 [001] .... 77.750352: xprt_alloc_slot: RPC: 43 reserved req ffff88040ab08200 xid 83da2dc3
kworker/1:1H-133 [001] .... 77.750356: xprt_connect: RPC: 43 xprt_connect xprt ffff8804082fb000 is not connected
kworker/1:1H-133 [001] .... 77.750358: xs_connect: RPC: xs_connect scheduled xprt ffff8804082fb000
kworker/1:1H-133 [001] ..s. 77.750365: inet_csk_get_port: snum 737
kworker/1:1H-133 [001] ..s. 77.750374: <stack trace>
=> inet_addr_type
=> inet_bind
=> xs_bind
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.19
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> kthread
=> ret_from_fork
=> kthread
kworker/1:1H-133 [001] ..s. 77.750374: inet_bind_hash: add 737
kworker/1:1H-133 [001] ..s. 77.750377: <stack trace>
=> inet_csk_get_port
=> inet_addr_type
=> inet_bind
=> xs_bind
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.19
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> kthread
=> ret_from_fork
=> kthread
kworker/1:1H-133 [001] .... 77.750378: xs_bind: RPC: xs_bind 4.136.255.255:737: ok (0)
kworker/1:1H-133 [001] .... 77.750379: xs_tcp_setup_socket: RPC: worker connecting xprt ffff8804082fb000 via tcp to 192.168.23.22 (port 55201)
kworker/1:1H-133 [001] .... 77.750397: xs_tcp_setup_socket: xprt=ffff8804082fb000 sock=ffff880408a47d40 status=-115
kworker/1:1H-133 [001] .... 77.750397: xs_tcp_setup_socket: RPC: ffff8804082fb000 connect status 115 connected 0 sock state 2
fail2ban-server-4683 [002] ..s. 77.750554: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff8804082fb000...
fail2ban-server-4683 [002] ..s. 77.750555: xs_tcp_state_change: RPC: state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
kworker/2:1H-126 [002] .... 77.750571: xprt_connect_status: RPC: 43 xprt_connect_status: retrying
kworker/2:1H-126 [002] .... 77.750572: xprt_prepare_transmit: RPC: 43 xprt_prepare_transmit
kworker/2:1H-126 [002] .... 77.750573: xprt_transmit: RPC: 43 xprt_transmit(72)
kworker/2:1H-126 [002] .... 77.750581: xs_tcp_send_request: RPC: xs_tcp_send_request(72) = 0
kworker/2:1H-126 [002] .... 77.750581: xprt_transmit: RPC: 43 xmit complete
fail2ban-server-4683 [002] ..s. 77.750798: xs_tcp_data_ready: RPC: xs_tcp_data_ready...
fail2ban-server-4683 [002] ..s. 77.750799: xs_tcp_data_recv: RPC: xs_tcp_data_recv started
fail2ban-server-4683 [002] ..s. 77.750800: xs_tcp_data_recv: RPC: reading TCP record fragment of length 24
fail2ban-server-4683 [002] ..s. 77.750800: xs_tcp_data_recv: RPC: reading XID (4 bytes)
fail2ban-server-4683 [002] ..s. 77.750801: xs_tcp_data_recv: RPC: reading request with XID 83da2dc3
fail2ban-server-4683 [002] ..s. 77.750801: xs_tcp_data_recv: RPC: reading CALL/REPLY flag (4 bytes)
fail2ban-server-4683 [002] ..s. 77.750801: xs_tcp_data_recv: RPC: read reply XID 83da2dc3
fail2ban-server-4683 [002] ..s. 77.750802: xs_tcp_data_recv: RPC: XID 83da2dc3 read 16 bytes
fail2ban-server-4683 [002] ..s. 77.750803: xs_tcp_data_recv: RPC: xprt = ffff8804082fb000, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
fail2ban-server-4683 [002] ..s. 77.750803: xprt_complete_rqst: RPC: 43 xid 83da2dc3 complete (24 bytes received)
fail2ban-server-4683 [002] .Ns. 77.750805: xs_tcp_data_recv: RPC: xs_tcp_data_recv done
kworker/2:1H-126 [002] .... 77.750813: xprt_release: RPC: 43 release request ffff88040ab08200
<idle>-0 [003] ..s. 94.613312: inet_bind_hash: add 22
<idle>-0 [003] ..s. 94.613354: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 98.776868: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 98.776910: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 151.179778: inet_bind_hash: add 80
<idle>-0 [003] ..s. 151.179822: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_v6_syn_recv_sock
=> ipt_do_table
=> nf_conntrack_in
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> ktime_get
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 172.217453: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 172.217496: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] .Ns. 181.603150: inet_bind_hash: add 80
<idle>-0 [003] .Ns. 181.603194: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_v6_syn_recv_sock
=> ipt_do_table
=> nf_conntrack_in
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 234.638237: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 234.638281: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 241.694872: inet_bind_hash: add 57000
<idle>-0 [003] ..s. 241.694915: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 242.308627: inet_bind_hash: add 10993
<idle>-0 [003] ..s. 242.308670: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_check_req
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> try_to_wake_up
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 296.125499: inet_bind_hash: add 80
<idle>-0 [003] ..s. 296.125543: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_v6_syn_recv_sock
=> ipt_do_table
=> nf_conntrack_in
=> tcp_check_req
=> fib_validate_source
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
<idle>-0 [003] ..s. 304.196576: inet_bind_hash: add 80
<idle>-0 [003] ..s. 304.196618: <stack trace>
=> __inet_inherit_port
=> tcp_v4_syn_recv_sock
=> tcp_v6_syn_recv_sock
=> ipt_do_table
=> nf_conntrack_in
=> tcp_check_req
=> fib_validate_source
=> tcp_v4_do_rcv
=> tcp_v4_rcv
=> ip_local_deliver_finish
=> __netif_receive_skb_core
=> netif_receive_skb
=> netif_receive_skb_internal
=> br_handle_frame_finish
=> br_handle_frame
=> br_handle_frame
=> __netif_receive_skb_core
=> read_tsc
=> netif_receive_skb_internal
=> napi_gro_receive
=> e1000_clean_rx_irq
=> e1000_clean
=> raise_softirq_irqoff
=> netif_schedule_queue
=> dev_watchdog
=> net_rx_action
=> add_interrupt_randomness
=> __do_softirq
=> ack_ioapic_level
=> irq_exit
=> do_IRQ
=> ret_from_intr
=> cpuidle_enter_state
=> cpuidle_enter_state
=> cpu_startup_entry
=> start_secondary
kworker/2:1H-126 [002] .... 378.264745: xs_tcp_close: close %p
kworker/2:1H-126 [002] .... 378.264748: xs_close: RPC: xs_close xprt ffff8804082fb000
kworker/2:1H-126 [002] .... 378.264786: xprt_disconnect_done: RPC: disconnected transport ffff8804082fb000
-- Steve
On Fri, 12 Jun 2015 11:50:38 -0400
Steven Rostedt <[email protected]> wrote:
> On Fri, 12 Jun 2015 11:34:20 -0400
> Steven Rostedt <[email protected]> wrote:
>
>
> And the issue goes away. That is, I watched the port go from
> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
>
s/theirs/there's/
Time to go back to grammar school. :-p
-- Steve
On Fri, 12 Jun 2015 11:50:38 -0400
Steven Rostedt <[email protected]> wrote:
> I reverted the following commits:
>
> c627d31ba0696cbd829437af2be2f2dee3546b1e
> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
> caf4ccd4e88cf2795c927834bc488c8321437586
>
> And the issue goes away. That is, I watched the port go from
> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
>
> In fact, I watched the port with my portlist.c module, and it
> disappeared there too when it entered the TIME_WAIT state.
>
I've been running v4.0.5 with the above commits reverted for 5 days
now, and there's still no hidden port appearing.
What's the status on this? Should those commits be reverted or is there
another solution to this bug?
-- Steve
On Wed, Jun 17, 2015 at 11:08 PM, Steven Rostedt <[email protected]> wrote:
> On Fri, 12 Jun 2015 11:50:38 -0400
> Steven Rostedt <[email protected]> wrote:
>
>> I reverted the following commits:
>>
>> c627d31ba0696cbd829437af2be2f2dee3546b1e
>> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
>> caf4ccd4e88cf2795c927834bc488c8321437586
>>
>> And the issue goes away. That is, I watched the port go from
>> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
>>
>> In fact, I watched the port with my portlist.c module, and it
>> disappeared there too when it entered the TIME_WAIT state.
>>
I've scanned those commits again and again, and I'm not seeing how we
could be introducing a socket leak there. The only suspect I can see
would be the NFS swap bugs that Jeff fixed a few weeks ago. Are you
using NFS swap?
> I've been running v4.0.5 with the above commits reverted for 5 days
> now, and there's still no hidden port appearing.
>
> What's the status on this? Should those commits be reverted or is there
> another solution to this bug?
>
I'm trying to reproduce, but I've had no luck yet.
Cheers
Trond
On Thu, 18 Jun 2015 15:24:52 -0400
Trond Myklebust <[email protected]> wrote:
> On Wed, Jun 17, 2015 at 11:08 PM, Steven Rostedt <[email protected]> wrote:
> > On Fri, 12 Jun 2015 11:50:38 -0400
> > Steven Rostedt <[email protected]> wrote:
> >
> >> I reverted the following commits:
> >>
> >> c627d31ba0696cbd829437af2be2f2dee3546b1e
> >> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
> >> caf4ccd4e88cf2795c927834bc488c8321437586
> >>
> >> And the issue goes away. That is, I watched the port go from
> >> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
> >>
> >> In fact, I watched the port with my portlist.c module, and it
> >> disappeared there too when it entered the TIME_WAIT state.
> >>
>
> I've scanned those commits again and again, and I'm not seeing how we
> could be introducing a socket leak there. The only suspect I can see
> would be the NFS swap bugs that Jeff fixed a few weeks ago. Are you
> using NFS swap?
Not that I'm aware of.
>
> > I've been running v4.0.5 with the above commits reverted for 5 days
> > now, and there's still no hidden port appearing.
> >
> > What's the status on this? Should those commits be reverted or is there
> > another solution to this bug?
> >
>
> I'm trying to reproduce, but I've had no luck yet.
It seems to happen with the connection to my wife's machine, which
mounts two directories from my server via NFS:
This is what's in my wife's /etc/fstab file:
goliath:/home/upload /upload nfs auto,rw,intr,soft 0 0
goliath:/home/gallery /gallery nfs auto,ro,intr,soft 0 0
And here's what's in my /etc/exports file:
/home/upload wife(no_root_squash,no_all_squash,rw,sync,no_subtree_check)
/home/gallery 192.168.23.0/24(ro,sync,no_subtree_check)
Attached is my config.
-- Steve
On Thu, 18 Jun 2015 15:49:14 -0400
Steven Rostedt <[email protected]> wrote:
> On Thu, 18 Jun 2015 15:24:52 -0400
> Trond Myklebust <[email protected]> wrote:
>
> > On Wed, Jun 17, 2015 at 11:08 PM, Steven Rostedt <[email protected]> wrote:
> > > On Fri, 12 Jun 2015 11:50:38 -0400
> > > Steven Rostedt <[email protected]> wrote:
> > >
> > >> I reverted the following commits:
> > >>
> > >> c627d31ba0696cbd829437af2be2f2dee3546b1e
> > >> 9e2b9f37760e129cee053cc7b6e7288acc2a7134
> > >> caf4ccd4e88cf2795c927834bc488c8321437586
> > >>
> > >> And the issue goes away. That is, I watched the port go from
> > >> ESTABLISHED to TIME_WAIT, and then gone, and theirs no hidden port.
> > >>
> > >> In fact, I watched the port with my portlist.c module, and it
> > >> disappeared there too when it entered the TIME_WAIT state.
> > >>
> >
> > I've scanned those commits again and again, and I'm not seeing how we
> > could be introducing a socket leak there. The only suspect I can see
> > would be the NFS swap bugs that Jeff fixed a few weeks ago. Are you
> > using NFS swap?
>
> Not that I'm aware of.
>
> >
> > > I've been running v4.0.5 with the above commits reverted for 5 days
> > > now, and there's still no hidden port appearing.
> > >
> > > What's the status on this? Should those commits be reverted or is there
> > > another solution to this bug?
> > >
> >
> > I'm trying to reproduce, but I've had no luck yet.
>
> It seems to happen with the connection to my wife's machine, and that
> is where my wife's box connects two directories via nfs:
>
> This is what's in my wife's /etc/fstab file:
>
> goliath:/home/upload /upload nfs auto,rw,intr,soft 0 0
> goliath:/home/gallery /gallery nfs auto,ro,intr,soft 0 0
>
> And here's what's in my /etc/exports file:
>
> /home/upload wife(no_root_squash,no_all_squash,rw,sync,no_subtree_check)
> /home/gallery 192.168.23.0/24(ro,sync,no_subtree_check)
>
> Attached is my config.
>
The interesting bit here is that the sockets all seem to connect to port
55201 on the remote host, if I'm reading these traces correctly. What's
listening on that port on the server?
This might give some helpful info:
$ rpcinfo -p <NFS servername>
Also, what NFS version are you using to mount here? Your fstab entries
suggest that you're using the default version (for whatever distro this
is), but have you (e.g.) set up nfsmount.conf to default to v3 on this
box?
--
Jeff Layton <[email protected]>
On Thu, 18 Jun 2015 18:50:51 -0400
Jeff Layton <[email protected]> wrote:
> The interesting bit here is that the sockets all seem to connect to port
> 55201 on the remote host, if I'm reading these traces correctly. What's
> listening on that port on the server?
>
> This might give some helpful info:
>
> $ rpcinfo -p <NFS servername>
# rpcinfo -p wife
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 34243 status
100024 1 tcp 34498 status
# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 38332 status
100024 1 tcp 52684 status
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049
100227 3 tcp 2049
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049
100227 3 udp 2049
100021 1 udp 53218 nlockmgr
100021 3 udp 53218 nlockmgr
100021 4 udp 53218 nlockmgr
100021 1 tcp 49825 nlockmgr
100021 3 tcp 49825 nlockmgr
100021 4 tcp 49825 nlockmgr
100005 1 udp 49166 mountd
100005 1 tcp 48797 mountd
100005 2 udp 47856 mountd
100005 2 tcp 53839 mountd
100005 3 udp 36090 mountd
100005 3 tcp 46390 mountd
Note, the box has been rebooted since I posted my last trace.
>
> Also, what NFS version are you using to mount here? Your fstab entries
> suggest that you're using the default version (for whatever distro this
> is), but have you (e.g.) set up nfsmount.conf to default to v3 on this
> box?
>
My box is Debian testing (recently updated).
# dpkg -l nfs-*
ii nfs-common 1:1.2.8-9 amd64 NFS support files common to clien
ii nfs-kernel-ser 1:1.2.8-9 amd64 support for NFS kernel server
same for both boxes.
nfsmount.conf doesn't exist on either box.
I'm assuming it is using nfs4.
Anything else I can provide?
-- Steve
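[Editor's note: since nfsmount.conf is absent on both boxes, the version that was actually negotiated can be read back from the vers= mount option in /proc/mounts on the client. A minimal sketch, not part of the thread; the function name is illustrative and a Linux /proc/mounts is assumed:]

```python
#!/usr/bin/env python3
# Report the negotiated NFS version for each NFS mount by parsing
# the vers= option out of /proc/mounts (Linux only).

def nfs_versions(mounts_text):
    """Map NFS mount points to their negotiated vers= option."""
    result = {}
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 4 and fields[2] in ("nfs", "nfs4"):
            opts = dict(o.partition("=")[::2] for o in fields[3].split(","))
            result[fields[1]] = opts.get("vers", "")
    return result

if __name__ == "__main__":
    with open("/proc/mounts") as f:
        for mountpoint, vers in nfs_versions(f.read()).items():
            print(mountpoint, "vers=" + vers)
```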
On Thu, 18 Jun 2015 21:08:43 -0400
Steven Rostedt <[email protected]> wrote:
> On Thu, 18 Jun 2015 18:50:51 -0400
> Jeff Layton <[email protected]> wrote:
>
> > The interesting bit here is that the sockets all seem to connect to port
> > 55201 on the remote host, if I'm reading these traces correctly. What's
> > listening on that port on the server?
> >
> > This might give some helpful info:
> >
> > $ rpcinfo -p <NFS servername>
>
> # rpcinfo -p wife
> program vers proto port service
> 100000 4 tcp 111 portmapper
> 100000 3 tcp 111 portmapper
> 100000 2 tcp 111 portmapper
> 100000 4 udp 111 portmapper
> 100000 3 udp 111 portmapper
> 100000 2 udp 111 portmapper
> 100024 1 udp 34243 status
> 100024 1 tcp 34498 status
>
> # rpcinfo -p localhost
> program vers proto port service
> 100000 4 tcp 111 portmapper
> 100000 3 tcp 111 portmapper
> 100000 2 tcp 111 portmapper
> 100000 4 udp 111 portmapper
> 100000 3 udp 111 portmapper
> 100000 2 udp 111 portmapper
> 100024 1 udp 38332 status
> 100024 1 tcp 52684 status
> 100003 2 tcp 2049 nfs
> 100003 3 tcp 2049 nfs
> 100003 4 tcp 2049 nfs
> 100227 2 tcp 2049
> 100227 3 tcp 2049
> 100003 2 udp 2049 nfs
> 100003 3 udp 2049 nfs
> 100003 4 udp 2049 nfs
> 100227 2 udp 2049
> 100227 3 udp 2049
> 100021 1 udp 53218 nlockmgr
> 100021 3 udp 53218 nlockmgr
> 100021 4 udp 53218 nlockmgr
> 100021 1 tcp 49825 nlockmgr
> 100021 3 tcp 49825 nlockmgr
> 100021 4 tcp 49825 nlockmgr
> 100005 1 udp 49166 mountd
> 100005 1 tcp 48797 mountd
> 100005 2 udp 47856 mountd
> 100005 2 tcp 53839 mountd
> 100005 3 udp 36090 mountd
> 100005 3 tcp 46390 mountd
>
> Note, the box has been rebooted since I posted my last trace.
>
Ah, pity. The port has probably changed... if you trace it again, maybe
try to figure out what it's talking to before rebooting the server?
> >
> > Also, what NFS version are you using to mount here? Your fstab entries
> > suggest that you're using the default version (for whatever distro this
> > is), but have you (e.g.) set up nfsmount.conf to default to v3 on this
> > box?
> >
>
> My box is Debian testing (recently updated).
>
> # dpkg -l nfs-*
>
> ii nfs-common 1:1.2.8-9 amd64 NFS support files common to clien
> ii nfs-kernel-ser 1:1.2.8-9 amd64 support for NFS kernel server
>
>
> same for both boxes.
>
> nfsmount.conf doesn't exist on either box.
>
> I'm assuming it is using nfs4.
>
(cc'ing Bruce)
Oh! I was thinking that you were seeing this extra port on the
_client_, but now rereading your original mail I see that it's
showing up on the NFS server. Is that correct?
So, assuming that this is NFSv4.0, then this port is probably bound
when the server is establishing the callback channel to the client. So
we may need to look at how those xprts are being created and whether
there are differences from a standard client xprt.
--
Jeff Layton <[email protected]>
On Thu, 18 Jun 2015 21:37:02 -0400
Jeff Layton <[email protected]> wrote:
> > Note, the box has been rebooted since I posted my last trace.
> >
>
> Ahh pity. The port has probably changed...if you trace it again maybe
> try to figure out what it's talking to before rebooting the server?
I could probably re-enable the trace again.
Would it be best to put the commits back and run the buggy kernel? I
could then run these commands after the bug happens and/or before the
port goes away.
> Oh! I was thinking that you were seeing this extra port on the
> _client_, but now rereading your original mail I see that it's
> showing up on the NFS server. Is that correct?
Correct, the bug is on the NFS server, not the client. The client is
already up and running, and had the filesystem mounted when the server
rebooted. I take it that this happened when the client tried to
reconnect.
Just let me know what you would like to do. As this is my main
production server of my local network, I would only be able to do this
a few times. Let me know all the commands and tracing you would like to
have. I'll try it tomorrow (going to bed now).
-- Steve
>
> So, assuming that this is NFSv4.0, then this port is probably bound
> when the server is establishing the callback channel to the client. So
> we may need to look at how those xprts are being created and whether
> there are differences from a standard client xprt.
>
On Fri, 19 Jun 2015 12:25:53 -0400
Steven Rostedt <[email protected]> wrote:
> I don't see that 55201 anywhere. But then again, I didn't look for it
> before the port disappeared. I could reboot and look for it again. I
> should have saved the full netstat -tapn as well :-/
Of course I didn't find it anywhere; that's the port on my wife's box
that port 947 was connected to.
Now I even went over to my wife's box and ran
# rpcinfo -p localhost
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 34243 status
100024 1 tcp 34498 status
which doesn't show anything, but something is listening on that
port...
# netstat -ntap |grep 55201
tcp 0 0 0.0.0.0:55201 0.0.0.0:* LISTEN
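[Editor's note: listening ports can also be enumerated straight from /proc/net/tcp, independently of netstat or ss — roughly the comparison unhide-tcp performs against the proc view. A minimal sketch; the function name is illustrative and a Linux /proc is assumed (IPv6 sockets in /proc/net/tcp6 are not covered):]

```python
#!/usr/bin/env python3
# Enumerate TCP ports in LISTEN state directly from /proc/net/tcp,
# without going through netstat/ss.

def listening_ports(path="/proc/net/tcp"):
    """Return the set of local TCP ports whose socket state is LISTEN."""
    ports = set()
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue
            local_addr, state = fields[1], fields[3]
            if state == "0A":  # 0A == TCP_LISTEN
                # local_addr is "HEXADDR:HEXPORT"
                ports.add(int(local_addr.split(":")[1], 16))
    return ports

if __name__ == "__main__":
    for port in sorted(listening_ports()):
        print(port)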
I rebooted again, but this time I ran this on my wife's box:
# trace-cmd record -e nfs -e nfs4 -e net -e skb -e sock -e udp -e workqueue -e sunrpc
I started it when my server started booting the kernel, and kept it
running till the port vanished.
The full trace can be downloaded from
http://rostedt.homelinux.com/private/wife-trace.txt
Here's some interesting output from that trace:
ksoftirq-13 1..s. 12272627.681760: netif_receive_skb: dev=lo skbaddr=0xffff88020944c600 len=88
ksoftirq-13 1..s. 12272627.681776: net_dev_queue: dev=eth0 skbaddr=0xffff880234e5b100 len=42
ksoftirq-13 1..s. 12272627.681777: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff880234e5b100 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0806 ip_summed=0 len=42 data_len=0 network_offset=14 transport_offset_valid=0 transport_offset=65533 tx_flags=0 gso_size=0 gso_segs=0 gso_type=0
ksoftirq-13 1..s. 12272627.681779: net_dev_xmit: dev=eth0 skbaddr=0xffff880234e5b100 len=42 rc=0
ksoftirq-13 1..s. 12272627.681780: kfree_skb: skbaddr=0xffff88023444cf00 protocol=2048 location=0xffffffff81422a72
ksoftirq-13 1..s. 12272627.681783: rpc_socket_error: error=-113 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
ksoftirq-13 1..s. 12272627.681785: rpc_task_wakeup: task:18128@0 flags=5281 state=0006 status=-113 timeout=45000 queue=xprt_pending
ksoftirq-13 1d.s. 12272627.681786: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
ksoftirq-13 1d.s. 12272627.681787: workqueue_activate_work: work struct 0xffff8800b5a94588
ksoftirq-13 1..s. 12272627.681791: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
ksoftirq-13 1..s. 12272627.681792: kfree_skb: skbaddr=0xffff88020944c600 protocol=2048 location=0xffffffff81482c05
kworker/-20111 1.... 12272627.681796: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111 1.... 12272627.681797: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-113 action=call_connect_status
kworker/-20111 1.... 12272627.681798: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-113 action=call_connect_status
kworker/-20111 1.... 12272627.681798: rpc_connect_status: task:18128@0, status -113
kworker/-20111 1..s. 12272627.681799: rpc_task_sleep: task:18128@0 flags=5281 state=0005 status=0 timeout=750 queue=delayq
kworker/-20111 1.... 12272627.681800: workqueue_execute_end: work struct 0xffff8800b5a94588
<idle>-0 1..s. 12272630.688741: rpc_task_wakeup: task:18128@0 flags=5281 state=0006 status=-110 timeout=750 queue=delayq
<idle>-0 1dNs. 12272630.688749: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1dNs. 12272630.688749: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111 1.... 12272630.688758: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111 1.... 12272630.688759: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-110 action=call_timeout
kworker/-20111 1.... 12272630.688760: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111 1.... 12272630.688760: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111 1.... 12272630.688761: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111 1..s. 12272630.688762: rpc_task_sleep: task:18128@0 flags=5281 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111 1.... 12272630.688765: workqueue_execute_end: work struct 0xffff8800b5a94588
<idle>-0 3d.s. 12272630.696742: workqueue_queue_work: work struct=0xffff880234ac9060 function=disk_events_workfn workqueue=0xffff8802370d9000 req_cpu=512 cpu=3
<idle>-0 3d.s. 12272630.696744: workqueue_activate_work: work struct 0xffff880234ac9060
kworker/-7491 3.... 12272630.696760: workqueue_execute_start: work struct 0xffff880234ac9060: function disk_events_workfn
kworker/-7491 3d... 12272630.696827: workqueue_queue_work: work struct=0xffff8802347440b8 function=ata_sff_pio_task workqueue=0xffff880234491c00 req_cpu=512 cpu=3
kworker/-7491 3d... 12272630.696828: workqueue_activate_work: work struct 0xffff8802347440b8
kworker/-16140 3.... 12272630.696837: workqueue_execute_start: work struct 0xffff8802347440b8: function ata_sff_pio_task
kworker/-16140 3.... 12272630.696853: workqueue_execute_end: work struct 0xffff8802347440b8
kworker/-7491 3.... 12272630.697383: workqueue_execute_end: work struct 0xffff880234ac9060
<idle>-0 1d.s. 12272654.753029: workqueue_queue_work: work struct=0xffff8802361f4de0 function=xs_tcp_setup_socket workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1d.s. 12272654.753031: workqueue_activate_work: work struct 0xffff8802361f4de0
kworker/-20111 1.... 12272654.753049: workqueue_execute_start: work struct 0xffff8802361f4de0: function xs_tcp_setup_socket
kworker/-20111 1..s. 12272654.753054: rpc_socket_error: error=-113 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111 1.... 12272654.753055: rpc_socket_reset_connection: error=0 socket:[11886206] dstaddr=192.168.23.9/2049 state=1 () sk_state=7 ()
kworker/-20111 1..s. 12272654.753075: net_dev_queue: dev=eth0 skbaddr=0xffff880082117ae8 len=74
kworker/-20111 1..s. 12272654.753083: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=74 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111 1..s. 12272654.753088: net_dev_xmit: dev=eth0 skbaddr=0xffff880082117ae8 len=74 rc=0
kworker/-20111 1.... 12272654.753090: rpc_socket_connect: error=-115 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
kworker/-20111 1.... 12272654.753093: workqueue_execute_end: work struct 0xffff8802361f4de0
<idle>-0 1..s. 12272654.753320: consume_skb: skbaddr=0xffff880082117ae8
<idle>-0 1..s. 12272654.753503: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8801f647d100 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0806 ip_summed=0 hash=0x00000000 l4_hash=0 len=46 data_len=0 truesize=704 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
<idle>-0 1.Ns. 12272654.753508: netif_receive_skb: dev=eth0 skbaddr=0xffff8801f647d100 len=46
<idle>-0 1.Ns. 12272654.753519: consume_skb: skbaddr=0xffff8800a9aa2d00
<idle>-0 1.Ns. 12272654.753522: net_dev_queue: dev=eth0 skbaddr=0xffff8800a9aa2d00 len=42
<idle>-0 1.Ns. 12272654.753523: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9aa2d00 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0806 ip_summed=0 len=42 data_len=0 network_offset=14 transport_offset_valid=0 transport_offset=65533 tx_flags=0 gso_size=0 gso_segs=0 gso_type=0
<idle>-0 1.Ns. 12272654.753525: net_dev_xmit: dev=eth0 skbaddr=0xffff8800a9aa2d00 len=42 rc=0
<idle>-0 1.Ns. 12272654.753526: consume_skb: skbaddr=0xffff8801f647d100
<idle>-0 1..s. 12272654.753585: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8801f647d100 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=46 data_len=0 truesize=704 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
<idle>-0 1.Ns. 12272654.753589: netif_receive_skb: dev=eth0 skbaddr=0xffff8801f647d100 len=46
<idle>-0 1.Ns. 12272654.753595: rpc_socket_error: error=-111 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
<idle>-0 1.Ns. 12272654.753597: rpc_task_wakeup: task:18128@0 flags=5281 state=0006 status=-111 timeout=45000 queue=xprt_pending
<idle>-0 1dNs. 12272654.753598: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1dNs. 12272654.753599: workqueue_activate_work: work struct 0xffff8800b5a94588
<idle>-0 1.Ns. 12272654.753601: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111 1.... 12272654.753607: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111 1.... 12272654.753608: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-111 action=call_connect_status
kworker/-20111 1.... 12272654.753609: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-111 action=call_connect_status
kworker/-20111 1.... 12272654.753609: rpc_connect_status: task:18128@0, status -111
kworker/-20111 1..s. 12272654.753610: rpc_task_sleep: task:18128@0 flags=5281 state=0005 status=0 timeout=750 queue=delayq
kworker/-20111 1.... 12272654.753612: workqueue_execute_end: work struct 0xffff8800b5a94588
<idle>-0 1..s. 12272654.753692: consume_skb: skbaddr=0xffff8800a9aa2d00
<idle>-0 1.Ns. 12272657.605105: netif_receive_skb: dev=eth0 skbaddr=0xffff8802223d1e00 len=345
<idle>-0 1.Ns. 12272657.605108: kfree_skb: skbaddr=0xffff8802223d1e00 protocol=2048 location=0xffffffff8147e361
<idle>-0 1..s. 12272657.760044: rpc_task_wakeup: task:18128@0 flags=5281 state=0006 status=-110 timeout=750 queue=delayq
<idle>-0 1dNs. 12272657.760051: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1dNs. 12272657.760052: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111 1.... 12272657.760063: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111 1.... 12272657.760064: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-110 action=call_timeout
kworker/-20111 1.... 12272657.760065: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111 1.... 12272657.760066: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111 1.... 12272657.760066: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111 1..s. 12272657.760068: rpc_task_sleep: task:18128@0 flags=5281 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111 1.... 12272657.760070: workqueue_execute_end: work struct 0xffff8800b5a94588
<idle>-0 3d.s. 12272657.824024: workqueue_queue_work: work struct=0xffff8800b5551760 function=disk_events_workfn workqueue=0xffff8802370d9000 req_cpu=512 cpu=3
<idle>-0 3d.s. 12272657.824025: workqueue_activate_work: work struct 0xffff8800b5551760
kworker/-7491 3.... 12272657.824041: workqueue_execute_start: work struct 0xffff8800b5551760: function disk_events_workfn
kworker/-7491 3.... 12272657.824807: workqueue_execute_end: work struct 0xffff8800b5551760
<idle>-0 1d.s. 12272705.808564: workqueue_queue_work: work struct=0xffff8802361f4de0 function=xs_tcp_setup_socket workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1d.s. 12272705.808565: workqueue_activate_work: work struct 0xffff8802361f4de0
kworker/-20111 1.... 12272705.808574: workqueue_execute_start: work struct 0xffff8802361f4de0: function xs_tcp_setup_socket
kworker/-20111 1..s. 12272705.808580: rpc_socket_error: error=-111 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=7 ()
kworker/-20111 1.... 12272705.808581: rpc_socket_reset_connection: error=0 socket:[11886206] dstaddr=192.168.23.9/2049 state=1 () sk_state=7 ()
kworker/-20111 1..s. 12272705.808599: net_dev_queue: dev=eth0 skbaddr=0xffff880082117ae8 len=74
kworker/-20111 1..s. 12272705.808602: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=74 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111 1..s. 12272705.808605: net_dev_xmit: dev=eth0 skbaddr=0xffff880082117ae8 len=74 rc=0
kworker/-20111 1.... 12272705.808614: rpc_socket_connect: error=-115 socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=2 ()
kworker/-20111 1.... 12272705.808615: workqueue_execute_end: work struct 0xffff8802361f4de0
<idle>-0 1..s. 12272705.808841: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8800a60e8900 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=60 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
<idle>-0 1.Ns. 12272705.808849: netif_receive_skb: dev=eth0 skbaddr=0xffff8800a60e8900 len=60
<idle>-0 1.Ns. 12272705.808872: rpc_socket_state_change: socket:[11886206] dstaddr=192.168.23.9/2049 state=2 () sk_state=1 ()
<idle>-0 1.Ns. 12272705.808874: rpc_task_wakeup: task:18128@0 flags=5281 state=0006 status=-11 timeout=45000 queue=xprt_pending
<idle>-0 1dNs. 12272705.808875: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1dNs. 12272705.808876: workqueue_activate_work: work struct 0xffff8800b5a94588
<idle>-0 1.Ns. 12272705.808881: net_dev_queue: dev=eth0 skbaddr=0xffff8800a60e8500 len=66
<idle>-0 1.Ns. 12272705.808883: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff8800a60e8500 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=66 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
<idle>-0 1.Ns. 12272705.808885: net_dev_xmit: dev=eth0 skbaddr=0xffff8800a60e8500 len=66 rc=0
<idle>-0 1.Ns. 12272705.808887: consume_skb: skbaddr=0xffff880082117ae8
kworker/-20111 1.... 12272705.808895: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111 1.... 12272705.808896: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-11 action=call_connect_status
kworker/-20111 1.... 12272705.808897: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=-11 action=call_connect_status
kworker/-20111 1.... 12272705.808897: rpc_connect_status: task:18128@0, status -11
kworker/-20111 1.... 12272705.808897: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_timeout
kworker/-20111 1.... 12272705.808898: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_bind
kworker/-20111 1.... 12272705.808899: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_connect
kworker/-20111 1.... 12272705.808899: rpc_task_run_action: task:18128@0 flags=5281 state=0005 status=0 action=call_transmit
kworker/-20111 1..s. 12272705.808912: net_dev_queue: dev=eth0 skbaddr=0xffff880082117ae8 len=162
kworker/-20111 1..s. 12272705.808913: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff880082117ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=162 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
kworker/-20111 1..s. 12272705.808913: net_dev_xmit: dev=eth0 skbaddr=0xffff880082117ae8 len=162 rc=0
kworker/-20111 1..s. 12272705.808916: rpc_task_sleep: task:18128@0 flags=5a81 state=0005 status=0 timeout=45000 queue=xprt_pending
kworker/-20111 1.... 12272705.808917: workqueue_execute_end: work struct 0xffff8800b5a94588
<idle>-0 1..s. 12272705.809098: consume_skb: skbaddr=0xffff8800a60e8500
<idle>-0 1.Ns. 12272705.809840: rpc_task_wakeup: task:18128@0 flags=5a81 state=0006 status=0 timeout=45000 queue=xprt_pending
<idle>-0 1dNs. 12272705.809842: workqueue_queue_work: work struct=0xffff8800b5a94588 function=rpc_async_schedule workqueue=0xffff880234666800 req_cpu=512 cpu=1
<idle>-0 1dNs. 12272705.809842: workqueue_activate_work: work struct 0xffff8800b5a94588
kworker/-20111 1.... 12272705.809853: workqueue_execute_start: work struct 0xffff8800b5a94588: function rpc_async_schedule
kworker/-20111 1.... 12272705.809853: rpc_task_run_action: task:18128@0 flags=5a81 state=0005 status=0 action=call_status
kworker/-20111 1.... 12272705.809854: rpc_task_run_action: task:18128@0 flags=5a81 state=0005 status=0 action=call_status
kworker/-20111 1.... 12272705.809854: rpc_task_run_action: task:18128@0 flags=5a81 state=0005 status=44 action=call_decode
kworker/-20111 1.... 12272705.809856: rpc_task_run_action: task:18128@0 flags=5a81 state=0005 status=-10022 action=rpc_exit_task
kworker/-20111 1.... 12272705.809858: nfs4_renew_async: error=-10022 () dstaddr=192.168.23.9
<idle>-0 1.Ns. 12272705.810000: consume_skb: skbaddr=0xffff8800a60e8900
kworker/-20111 1.... 12272705.810033: workqueue_execute_end: work struct 0xffff8800b5a94588
192.168.-16171 3.... 12272705.810062: rpc_task_begin: task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171 3.... 12272705.810068: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171 3.... 12272705.810069: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171 3.... 12272705.810071: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171 3.... 12272705.810071: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171 3.... 12272705.810073: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171 3.... 12272705.810073: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171 3.... 12272705.810074: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171 3.... 12272705.810075: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171 3.... 12272705.810075: rpc_task_run_action: task:18129@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171 3..s. 12272705.810095: net_dev_queue: dev=eth0 skbaddr=0xffff8800a9b924e8 len=162
192.168.-16171 3..s. 12272705.810097: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b924e8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=162 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171 3..s. 12272705.810098: net_dev_xmit: dev=eth0 skbaddr=0xffff8800a9b924e8 len=162 rc=0
192.168.-16171 3..s. 12272705.810101: rpc_task_sleep: task:18129@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
<idle>-0 1..s. 12272705.810318: napi_gro_receive_entry: dev=eth0 napi_id=0 queue_mapping=0 skbaddr=0xffff8800a60e8900 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=1 hash=0x00000000 l4_hash=0 len=100 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0
<idle>-0 1.Ns. 12272705.810326: netif_receive_skb: dev=eth0 skbaddr=0xffff8800a60e8900 len=100
<idle>-0 1.Ns. 12272705.810344: rpc_task_wakeup: task:18129@0 flags=5a80 state=0006 status=0 timeout=15000 queue=xprt_pending
<idle>-0 1.Ns. 12272705.810349: consume_skb: skbaddr=0xffff8800a9b924e8
192.168.-16171 3.... 12272705.810379: rpc_task_run_action: task:18129@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171 3.... 12272705.810385: rpc_task_run_action: task:18129@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171 3.... 12272705.810385: rpc_task_run_action: task:18129@0 flags=5a80 state=0005 status=44 action=call_decode
192.168.-16171 3.... 12272705.810387: rpc_task_run_action: task:18129@0 flags=5a80 state=0005 status=-10022 action=rpc_exit_task
192.168.-16171 3.... 12272705.810397: rpc_task_begin: task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171 3.... 12272705.810398: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171 3.... 12272705.810398: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171 3.... 12272705.810399: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171 3.... 12272705.810399: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171 3.... 12272705.810400: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171 3.... 12272705.810400: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171 3.... 12272705.810404: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171 3.... 12272705.810404: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171 3.... 12272705.810404: rpc_task_run_action: task:18130@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171 3..s. 12272705.810417: net_dev_queue: dev=eth0 skbaddr=0xffff8800a9b92ae8 len=254
192.168.-16171 3..s. 12272705.810418: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b92ae8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=254 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171 3..s. 12272705.810426: net_dev_xmit: dev=eth0 skbaddr=0xffff8800a9b92ae8 len=254 rc=0
192.168.-16171 3..s. 12272705.810428: rpc_task_sleep: task:18130@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
192.168.-16171 3.... 12272705.810902: rpc_task_run_action: task:18130@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171 3.... 12272705.810908: rpc_task_run_action: task:18130@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171 3.... 12272705.810908: rpc_task_run_action: task:18130@0 flags=5a80 state=0005 status=60 action=call_decode
192.168.-16171 3.... 12272705.810910: rpc_task_run_action: task:18130@0 flags=5a80 state=0005 status=0 action=rpc_exit_task
192.168.-16171 3.... 12272705.810914: nfs4_setclientid: error=0 (ACCESS) dstaddr=192.168.23.9
192.168.-16171 3.... 12272705.810915: rpc_task_begin: task:0@0 flags=5280 state=0000 status=0 action=irq_stack_union
192.168.-16171 3.... 12272705.810916: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_start
192.168.-16171 3.... 12272705.810916: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_reserve
192.168.-16171 3.... 12272705.810917: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_reserveresult
192.168.-16171 3.... 12272705.810917: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_refresh
192.168.-16171 3.... 12272705.810918: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_refreshresult
192.168.-16171 3.... 12272705.810918: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_allocate
192.168.-16171 3.... 12272705.810919: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_bind
192.168.-16171 3.... 12272705.810919: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_connect
192.168.-16171 3.... 12272705.810919: rpc_task_run_action: task:18131@0 flags=5280 state=0005 status=0 action=call_transmit
192.168.-16171 3..s. 12272705.810931: net_dev_queue: dev=eth0 skbaddr=0xffff8800a9b92ce8 len=170
192.168.-16171 3..s. 12272705.810932: net_dev_start_xmit: dev=eth0 queue_mapping=0 skbaddr=0xffff8800a9b92ce8 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=170 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=1 gso_type=0
192.168.-16171 3..s. 12272705.810933: net_dev_xmit: dev=eth0 skbaddr=0xffff8800a9b92ce8 len=170 rc=0
192.168.-16171 3..s. 12272705.810936: rpc_task_sleep: task:18131@0 flags=5a80 state=0005 status=0 timeout=15000 queue=xprt_pending
192.168.-16171 3.... 12272705.811213: rpc_task_run_action: task:18131@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171 3.... 12272705.811220: rpc_task_run_action: task:18131@0 flags=5a80 state=0005 status=0 action=call_status
192.168.-16171 3.... 12272705.811220: rpc_task_run_action: task:18131@0 flags=5a80 state=0005 status=44 action=call_decode
192.168.-16171 3.... 12272705.811222: rpc_task_run_action: task:18131@0 flags=5a80 state=0005 status=0 action=rpc_exit_task
192.168.-16171 3.... 12272705.811227: nfs4_setclientid_confirm: error=0 (ACCESS) dstaddr=192.168.23.9
And it goes on, but you can look at the full trace. I just searched
for "rpc".
Maybe this will shed more light on the issue. I'll keep this kernel on
my server for a little longer, but it's going to start triggering
rkhunter warnings about hidden ports again.
-- Steve
On Fri, Jun 19, 2015 at 1:17 PM, Steven Rostedt <[email protected]> wrote:
> On Fri, 19 Jun 2015 12:25:53 -0400
> Steven Rostedt <[email protected]> wrote:
>
>
>> I don't see that 55201 anywhere. But then again, I didn't look for it
>> before the port disappeared. I could reboot and look for it again. I
>> should have saved the full netstat -tapn as well :-/
>
> Of course I didn't find it anywhere, that's the port on my wife's box
> that port 947 was connected to.
>
> Now I even went over to my wife's box and ran
>
> # rpcinfo -p localhost
> program vers proto port service
> 100000 4 tcp 111 portmapper
> 100000 3 tcp 111 portmapper
> 100000 2 tcp 111 portmapper
> 100000 4 udp 111 portmapper
> 100000 3 udp 111 portmapper
> 100000 2 udp 111 portmapper
> 100024 1 udp 34243 status
> 100024 1 tcp 34498 status
>
> which doesn't show anything.
>
> but something is listening to that port...
>
> # netstat -ntap |grep 55201
> tcp 0 0 0.0.0.0:55201 0.0.0.0:* LISTEN
Hang on. This is on the client box while there is an active NFSv4
mount? Then that's probably the NFSv4 callback channel listening for
delegation callbacks.
Can you please try:
echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf
and then either reboot the client or unload and then reload the nfs
modules before reattempting the mount. If this is indeed the callback
channel, then that will move your phantom listener to port 4048...
Cheers
Trond
On Fri, 19 Jun 2015 13:39:08 -0400
Trond Myklebust <[email protected]> wrote:
> On Fri, Jun 19, 2015 at 1:17 PM, Steven Rostedt <[email protected]> wrote:
> > On Fri, 19 Jun 2015 12:25:53 -0400
> > Steven Rostedt <[email protected]> wrote:
> >
> >
> >> I don't see that 55201 anywhere. But then again, I didn't look for it
> >> before the port disappeared. I could reboot and look for it again. I
> >> should have saved the full netstat -tapn as well :-/
> >
> > Of course I didn't find it anywhere, that's the port on my wife's box
> > that port 947 was connected to.
> >
> > Now I even went over to my wife's box and ran
> >
> > # rpcinfo -p localhost
> > program vers proto port service
> > 100000 4 tcp 111 portmapper
> > 100000 3 tcp 111 portmapper
> > 100000 2 tcp 111 portmapper
> > 100000 4 udp 111 portmapper
> > 100000 3 udp 111 portmapper
> > 100000 2 udp 111 portmapper
> > 100024 1 udp 34243 status
> > 100024 1 tcp 34498 status
> >
> > which doesn't show anything.
> >
> > but something is listening to that port...
> >
> > # netstat -ntap |grep 55201
> > tcp 0 0 0.0.0.0:55201 0.0.0.0:* LISTEN
>
>
> Hang on. This is on the client box while there is an active NFSv4
> mount? Then that's probably the NFSv4 callback channel listening for
> delegation callbacks.
>
> Can you please try:
>
> echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf
>
> and then either reboot the client or unload and then reload the nfs
> modules before reattempting the mount. If this is indeed the callback
> channel, then that will move your phantom listener to port 4048...
>
Right, it was a little unclear to me before, but it now seems clear
that the callback socket that the server is opening to the client is
the one squatting on the port.
...and that sort of makes sense, doesn't it? The rpc_clnt will stick
around for the life of the client's lease, and it binds to a
particular port so that it can reconnect using the same one.
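(For illustration, what xs_bind does amounts to the classic bind-before-connect pattern. Here is a userspace sketch with a made-up helper name, not the kernel code itself: the client picks its own source port before connecting, so a later reconnect can come from the same port.)

```python
import socket

def connect_from_fixed_port(dst, src_port):
    """Connect to dst using a chosen local source port, so that a later
    reconnect can reuse the same port (what a long-lived rpc_clnt wants)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow rebinding the same source port across reconnects even if the
    # previous connection left residual state behind.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", src_port))
    s.connect(dst)
    return s
```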
Given that Steven has done the legwork and figured out that reverting
those commits fixes the issue, I suspect that the real culprit is
caf4ccd4e88cf2.
The client is likely closing down the other end of the callback
socket when it goes idle. Before that commit, we probably did an
xs_close on it, but now we're doing an xs_tcp_shutdown, and that
leaves the port bound.
I'm travelling this weekend and am not set up to reproduce it to
confirm, but that does seem to be a plausible scenario.
--
Jeff Layton <[email protected]>
On Fri, 2015-06-19 at 15:52 -0400, Jeff Layton wrote:
> On Fri, 19 Jun 2015 13:39:08 -0400
> Trond Myklebust <[email protected]> wrote:
>
> > On Fri, Jun 19, 2015 at 1:17 PM, Steven Rostedt <
> > [email protected]> wrote:
> > > On Fri, 19 Jun 2015 12:25:53 -0400
> > > Steven Rostedt <[email protected]> wrote:
> > >
> > >
> > > > I don't see that 55201 anywhere. But then again, I didn't look for it
> > > > before the port disappeared. I could reboot and look for it again. I
> > > > should have saved the full netstat -tapn as well :-/
> > >
> > > Of course I didn't find it anywhere, that's the port on my wife's box
> > > that port 947 was connected to.
> > >
> > > Now I even went over to my wife's box and ran
> > >
> > > # rpcinfo -p localhost
> > > program vers proto port service
> > > 100000 4 tcp 111 portmapper
> > > 100000 3 tcp 111 portmapper
> > > 100000 2 tcp 111 portmapper
> > > 100000 4 udp 111 portmapper
> > > 100000 3 udp 111 portmapper
> > > 100000 2 udp 111 portmapper
> > > 100024 1 udp 34243 status
> > > 100024 1 tcp 34498 status
> > >
> > > which doesn't show anything.
> > >
> > > but something is listening to that port...
> > >
> > > # netstat -ntap |grep 55201
> > > tcp        0      0 0.0.0.0:55201    0.0.0.0:*    LISTEN
> >
> >
> > Hang on. This is on the client box while there is an active NFSv4
> > mount? Then that's probably the NFSv4 callback channel listening for
> > delegation callbacks.
> >
> > Can you please try:
> >
> > echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf
> >
> > and then either reboot the client or unload and then reload the nfs
> > modules before reattempting the mount. If this is indeed the callback
> > channel, then that will move your phantom listener to port 4048...
> >
>
> Right, it was a little unclear to me before, but it now seems clear
> that the callback socket that the server is opening to the client is
> the one squatting on the port.
>
> ...and that sort of makes sense, doesn't it? The rpc_clnt will stick
> around for the life of the client's lease, and it binds to a
> particular port so that it can reconnect using the same one.
>
> Given that Steven has done the legwork and figured out that reverting
> those commits fixes the issue, I suspect that the real culprit is
> caf4ccd4e88cf2.
>
> The client is likely closing down the other end of the callback
> socket when it goes idle. Before that commit, we probably did an
> xs_close on it, but now we're doing an xs_tcp_shutdown, and that leaves
> the port bound.
>
Agreed. I've been looking into whether or not there is a simple fix.
Reverting those patches is not an option, because the whole point was
to ensure that the socket is in the TCP_CLOSED state before we release
the socket.
Steven, how about something like the following patch?
8<-----------------------------------------------------------------
On Fri, 19 Jun 2015 13:39:08 -0400
Trond Myklebust <[email protected]> wrote:
> Hang on. This is on the client box while there is an active NFSv4
> mount? Then that's probably the NFSv4 callback channel listening for
> delegation callbacks.
>
> Can you please try:
>
> echo "options nfs callback_tcpport=4048" > /etc/modprobe.d/nfs-local.conf
>
> and then either reboot the client or unload and then reload the nfs
> modules before reattempting the mount. If this is indeed the callback
> channel, then that will move your phantom listener to port 4048...
I unmounted the directories, removed the nfs modules, and then add this
file, and loaded the modules back and remounted the directories.
# netstat -ntap |grep 4048
tcp 0 0 0.0.0.0:4048 0.0.0.0:* LISTEN -
tcp 0 0 192.168.23.22:4048 192.168.23.9:1010 ESTABLISHED -
tcp6 0 0 :::4048 :::* LISTEN -
-- Steve
On Fri, 19 Jun 2015 16:30:18 -0400
Trond Myklebust <[email protected]> wrote:
> Steven, how about something like the following patch?
Building it now. Will let you know in a bit.
>
> 8<-----------------------------------------------------------------
> >From 9a0bcfdbdbc793eae1ed6d901a6396b6c66f9513 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <[email protected]>
> Date: Fri, 19 Jun 2015 16:17:57 -0400
> Subject: [PATCH] SUNRPC: Ensure we release the TCP socket once it has been
> closed
>
> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
> Prior to that commit, the autoclose feature would ensure that an
> idle connection would result in the socket being both disconnected and
> released, whereas now it only gets disconnected.
>
> While the current behaviour is harmless, it does leave the port bound
> until either RPC traffic resumes or the RPC client is shut down.
Hmm, is this true? The port is bound, but the socket has been freed;
that is, sk->sk_socket points to garbage, as my portlist.c module
verified.
As far as I know, nothing can attach to that port again. Is there a
way to verify whether something can?
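One userspace way to answer that question is a quick probe (probe_port is a made-up helper, just a sketch): try to bind() the suspect port and try to connect() to it, and see which of the two succeeds.

```python
import errno
import socket

def probe_port(port, host="127.0.0.1"):
    """Report whether a local port can still be bound, and whether
    anything actually accepts connections on it."""
    results = {}
    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        b.bind((host, port))
        results["bindable"] = True
    except OSError as e:
        results["bindable"] = False
        results["bind_errno"] = errno.errorcode.get(e.errno, str(e.errno))
    finally:
        b.close()

    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(0.5)
    try:
        c.connect((host, port))
        results["accepts_connections"] = True
    except OSError:
        results["accepts_connections"] = False
    finally:
        c.close()
    return results
```

A port held by a dead-but-unreleased socket would show up as unbindable (EADDRINUSE) while refusing connections, whereas a genuinely free port is bindable again.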
-- Steve
>
> Reported-by: Steven Rostedt <[email protected]>
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprt.c | 2 +-
> net/sunrpc/xprtsock.c | 8 ++++++--
> 2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 3ca31f20b97c..ab5dd621ae0c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
> struct rpc_xprt *xprt =
> container_of(work, struct rpc_xprt, task_cleanup);
>
> - xprt->ops->close(xprt);
> clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> + xprt->ops->close(xprt);
> xprt_release_write(xprt, NULL);
> }
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index fda8ec8c74c0..75dcdadf0269 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -634,10 +634,13 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> struct socket *sock = transport->sock;
>
> - if (sock != NULL) {
> + if (sock == NULL)
> + return;
> + if (xprt_connected(xprt)) {
> kernel_sock_shutdown(sock, SHUT_RDWR);
> trace_rpc_socket_shutdown(xprt, sock);
> - }
> + } else
> + xs_reset_transport(transport);
> }
>
> /**
> @@ -786,6 +789,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
> xs_sock_reset_connection_flags(xprt);
> /* Mark transport as closed and wake up all pending tasks */
> xprt_disconnect_done(xprt);
> + xprt_force_disconnect(xprt);
> }
>
> /**
On Fri, 19 Jun 2015 16:30:18 -0400
Trond Myklebust <[email protected]> wrote:
> Steven, how about something like the following patch?
>
OK, the box I'm running this on is using v4.0.5. Can you make a patch
based on that? Whatever you make needs to go to stable as well.
distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c on fedora/8 failed
distcc[31554] (dcc_build_somewhere) Warning: remote compilation of '/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c' failed, retrying locally
distcc[31554] Warning: failed to distribute /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c to fedora/8, running locally instead
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c: In function 'xs_tcp_shutdown':
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:643:3: error: implicit declaration of function 'xs_reset_transport' [-Werror=implicit-function-declaration]
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c: At top level:
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:825:13: warning: conflicting types for 'xs_reset_transport' [enabled by default]
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:825:13: error: static declaration of 'xs_reset_transport' follows non-static declaration
/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:643:3: note: previous implicit declaration of 'xs_reset_transport' was here
cc1: some warnings being treated as errors
distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c on localhost failed
/home/rostedt/work/git/nobackup/linux-build.git/scripts/Makefile.build:258: recipe for target 'net/sunrpc/xprtsock.o' failed
make[3]: *** [net/sunrpc/xprtsock.o] Error 1
-- Steve
> 8<-----------------------------------------------------------------
> >From 9a0bcfdbdbc793eae1ed6d901a6396b6c66f9513 Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <[email protected]>
> Date: Fri, 19 Jun 2015 16:17:57 -0400
> Subject: [PATCH] SUNRPC: Ensure we release the TCP socket once it has been
> closed
>
> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
> Prior to that commit, the autoclose feature would ensure that an
> idle connection would result in the socket being both disconnected and
> released, whereas now it only gets disconnected.
>
> While the current behaviour is harmless, it does leave the port bound
> until either RPC traffic resumes or the RPC client is shut down.
>
> Reported-by: Steven Rostedt <[email protected]>
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprt.c | 2 +-
> net/sunrpc/xprtsock.c | 8 ++++++--
> 2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 3ca31f20b97c..ab5dd621ae0c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
> struct rpc_xprt *xprt =
> container_of(work, struct rpc_xprt, task_cleanup);
>
> - xprt->ops->close(xprt);
> clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> + xprt->ops->close(xprt);
> xprt_release_write(xprt, NULL);
> }
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index fda8ec8c74c0..75dcdadf0269 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -634,10 +634,13 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> struct socket *sock = transport->sock;
>
> - if (sock != NULL) {
> + if (sock == NULL)
> + return;
> + if (xprt_connected(xprt)) {
> kernel_sock_shutdown(sock, SHUT_RDWR);
> trace_rpc_socket_shutdown(xprt, sock);
> - }
> + } else
> + xs_reset_transport(transport);
> }
>
> /**
> @@ -786,6 +789,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
> xs_sock_reset_connection_flags(xprt);
> /* Mark transport as closed and wake up all pending tasks */
> xprt_disconnect_done(xprt);
> + xprt_force_disconnect(xprt);
> }
>
> /**
On Fri, 2015-06-19 at 18:14 -0400, Steven Rostedt wrote:
> On Fri, 19 Jun 2015 16:30:18 -0400
> Trond Myklebust <[email protected]> wrote:
>
> > Steven, how about something like the following patch?
> >
>
> OK, the box I'm running this on is using v4.0.5. Can you make a patch
> based on that? Whatever you make needs to go to stable as well.
Is it causing any other damage than the rkhunter warning you reported?
> distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c on fedora/8 failed
> distcc[31554] (dcc_build_somewhere) Warning: remote compilation of '/home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c' failed, retrying locally
> distcc[31554] Warning: failed to distribute /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c to fedora/8, running locally instead
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c: In function 'xs_tcp_shutdown':
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:643:3: error: implicit declaration of function 'xs_reset_transport' [-Werror=implicit-function-declaration]
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c: At top level:
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:825:13: warning: conflicting types for 'xs_reset_transport' [enabled by default]
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:825:13: error: static declaration of 'xs_reset_transport' follows non-static declaration
> /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c:643:3: note: previous implicit declaration of 'xs_reset_transport' was here
> cc1: some warnings being treated as errors
> distcc[31554] ERROR: compile /home/rostedt/work/git/nobackup/linux-build.git/net/sunrpc/xprtsock.c on localhost failed
> /home/rostedt/work/git/nobackup/linux-build.git/scripts/Makefile.build:258: recipe for target 'net/sunrpc/xprtsock.o' failed
> make[3]: *** [net/sunrpc/xprtsock.o] Error 1
Sorry. I sent that one off too quickly. Try the following.
8<--------------------------------------------------------------
On Fri, 19 Jun 2015 19:25:59 -0400
Trond Myklebust <[email protected]> wrote:
> On Fri, 2015-06-19 at 18:14 -0400, Steven Rostedt wrote:
> > On Fri, 19 Jun 2015 16:30:18 -0400
> > Trond Myklebust <[email protected]> wrote:
> >
> > > Steven, how about something like the following patch?
> > >
> >
> > OK, the box I'm running this on is using v4.0.5. Can you make a patch
> > based on that? Whatever you make needs to go to stable as well.
>
> Is it causing any other damage than the rkhunter warning you reported?
Well, not that I know of. Are you sure that this port will be
reconnected, and that it isn't just a leak? I'm not sure whether you
could waste more ports this way with connections to other machines.
Only my wife's box connects to this server; this server is actually a
client to my other boxes.
Although the rkhunter warning is the only thing that triggers, I still
think this warrants a stable fix, especially if the port is leaked and
never taken again.
>
> Sorry. I sent that one off too quickly. Try the following.
This built, will be testing it shortly.
-- Steve
On Fri, 19 Jun 2015 20:37:45 -0400
Steven Rostedt <[email protected]> wrote:
> > Is it causing any other damage than the rkhunter warning you reported?
>
> Well, not that I know of. Are you sure that this port will be
> reconnected, and that it isn't just a leak? I'm not sure whether you
> could waste more ports this way with connections to other machines.
> Only my wife's box connects to this server; this server is actually a
> client to my other boxes.
>
> Although the rkhunter warning is the only thing that triggers, I still
> think this warrants a stable fix, especially if the port is leaked and
> never taken again.
I did some experiments. If I unmount the directories from my wife's
machine and remount them, the port that was hidden is fully closed.
Maybe it's not that big of a deal after all.
-- Steve
On Fri, 19 Jun 2015 19:25:59 -0400
Trond Myklebust <[email protected]> wrote:
> 8<--------------------------------------------------------------
> >From 4876cc779ff525b9c2376d8076edf47815e71f2c Mon Sep 17 00:00:00 2001
> From: Trond Myklebust <[email protected]>
> Date: Fri, 19 Jun 2015 16:17:57 -0400
> Subject: [PATCH v2] SUNRPC: Ensure we release the TCP socket once it has been
> closed
>
> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
> Prior to that commit, the autoclose feature would ensure that an
> idle connection would result in the socket being both disconnected and
> released, whereas now it only gets disconnected.
>
> While the current behaviour is harmless, it does leave the port bound
> until either RPC traffic resumes or the RPC client is shut down.
Is there a way to test RPC traffic resuming? I'd like to try that before
declaring this bug harmless.
>
> Reported-by: Steven Rostedt <[email protected]>
The problem appears to go away with this patch.
Tested-by: Steven Rostedt <[email protected]>
Thanks a lot!
-- Steve
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprt.c | 2 +-
> net/sunrpc/xprtsock.c | 40 ++++++++++++++++++++++------------------
> 2 files changed, 23 insertions(+), 19 deletions(-)
>
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index 3ca31f20b97c..ab5dd621ae0c 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -611,8 +611,8 @@ static void xprt_autoclose(struct work_struct *work)
> struct rpc_xprt *xprt =
> container_of(work, struct rpc_xprt, task_cleanup);
>
> - xprt->ops->close(xprt);
> clear_bit(XPRT_CLOSE_WAIT, &xprt->state);
> + xprt->ops->close(xprt);
> xprt_release_write(xprt, NULL);
> }
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index fda8ec8c74c0..ee0715dfc3c7 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -623,24 +623,6 @@ process_status:
> }
>
> /**
> - * xs_tcp_shutdown - gracefully shut down a TCP socket
> - * @xprt: transport
> - *
> - * Initiates a graceful shutdown of the TCP socket by calling the
> - * equivalent of shutdown(SHUT_RDWR);
> - */
> -static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> -{
> - struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> - struct socket *sock = transport->sock;
> -
> - if (sock != NULL) {
> - kernel_sock_shutdown(sock, SHUT_RDWR);
> - trace_rpc_socket_shutdown(xprt, sock);
> - }
> -}
> -
> -/**
> * xs_tcp_send_request - write an RPC request to a TCP socket
> * @task: address of RPC task that manages the state of an RPC request
> *
> @@ -786,6 +768,7 @@ static void xs_sock_mark_closed(struct rpc_xprt *xprt)
> xs_sock_reset_connection_flags(xprt);
> /* Mark transport as closed and wake up all pending tasks */
> xprt_disconnect_done(xprt);
> + xprt_force_disconnect(xprt);
> }
>
> /**
> @@ -2103,6 +2086,27 @@ out:
> xprt_wake_pending_tasks(xprt, status);
> }
>
> +/**
> + * xs_tcp_shutdown - gracefully shut down a TCP socket
> + * @xprt: transport
> + *
> + * Initiates a graceful shutdown of the TCP socket by calling the
> + * equivalent of shutdown(SHUT_RDWR);
> + */
> +static void xs_tcp_shutdown(struct rpc_xprt *xprt)
> +{
> + struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
> + struct socket *sock = transport->sock;
> +
> + if (sock == NULL)
> + return;
> + if (xprt_connected(xprt)) {
> + kernel_sock_shutdown(sock, SHUT_RDWR);
> + trace_rpc_socket_shutdown(xprt, sock);
> + } else
> + xs_reset_transport(transport);
> +}
> +
> static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
> {
> struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
On Fri, Jun 19, 2015 at 9:27 PM, Steven Rostedt <[email protected]> wrote:
> On Fri, 19 Jun 2015 19:25:59 -0400
> Trond Myklebust <[email protected]> wrote:
>
>
>> 8<--------------------------------------------------------------
>> >From 4876cc779ff525b9c2376d8076edf47815e71f2c Mon Sep 17 00:00:00 2001
>> From: Trond Myklebust <[email protected]>
>> Date: Fri, 19 Jun 2015 16:17:57 -0400
>> Subject: [PATCH v2] SUNRPC: Ensure we release the TCP socket once it has been
>> closed
>>
>> This fixes a regression introduced by commit caf4ccd4e88cf2 ("SUNRPC:
>> Make xs_tcp_close() do a socket shutdown rather than a sock_release").
>> Prior to that commit, the autoclose feature would ensure that an
>> idle connection would result in the socket being both disconnected and
>> released, whereas now it only gets disconnected.
>>
>> While the current behaviour is harmless, it does leave the port bound
>> until either RPC traffic resumes or the RPC client is shut down.
>
> Is there a way to test RPC traffic resuming? I'd like to try that before
> declaring this bug harmless.
You should be seeing the same issue if you mount an NFSv3 partition.
After about 5 minutes of inactivity, the client will close down the
connection to the server, and rkhunter should again see the phantom
socket. If you then try to access the partition, the RPC layer should
immediately release the socket and establish a new connection on the
same port.
Cheers
Trond
I've hit this again. Not sure when it started, but I applied my old
debug trace_printk() patch (attached) and rebooted (4.5.7). I just
tested the latest kernel from Linus's tree (from last night's pull), and
it still gives me the problem.
Here's the trace I have:
kworker/3:1H-134 [003] ..s. 61.036129: inet_csk_get_port: snum 805
kworker/3:1H-134 [003] ..s. 61.036135: <stack trace>
=> sched_clock
=> inet_addr_type_table
=> security_capable
=> inet_bind
=> xs_bind
=> release_sock
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.19
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> ret_from_fork
=> kthread
kworker/3:1H-134 [003] ..s. 61.036136: inet_bind_hash: add 805
kworker/3:1H-134 [003] ..s. 61.036138: <stack trace>
=> inet_csk_get_port
=> sched_clock
=> inet_addr_type_table
=> security_capable
=> inet_bind
=> xs_bind
=> release_sock
=> sock_setsockopt
=> __sock_create
=> xs_create_sock.isra.19
=> xs_tcp_setup_socket
=> process_one_work
=> worker_thread
=> worker_thread
=> kthread
=> ret_from_fork
=> kthread
kworker/3:1H-134 [003] .... 61.036139: xs_bind: RPC: xs_bind 4.136.255.255:805: ok (0)
kworker/3:1H-134 [003] .... 61.036140: xs_tcp_setup_socket: RPC: worker connecting xprt ffff880407eca800 via tcp to 192.168.23.22 (port 43651)
kworker/3:1H-134 [003] .... 61.036162: xs_tcp_setup_socket: RPC: ffff880407eca800 connect status 115 connected 0 sock state 2
<idle>-0 [001] ..s. 61.036450: xs_tcp_state_change: RPC: xs_tcp_state_change client ffff880407eca800...
<idle>-0 [001] ..s. 61.036452: xs_tcp_state_change: RPC: state 1 conn 0 dead 0 zapped 1 sk_shutdown 0
kworker/1:1H-136 [001] .... 61.036476: xprt_connect_status: RPC: 43 xprt_connect_status: retrying
kworker/1:1H-136 [001] .... 61.036478: xprt_prepare_transmit: RPC: 43 xprt_prepare_transmit
kworker/1:1H-136 [001] .... 61.036479: xprt_transmit: RPC: 43 xprt_transmit(72)
kworker/1:1H-136 [001] .... 61.036486: xs_tcp_send_request: RPC: xs_tcp_send_request(72) = 0
kworker/1:1H-136 [001] .... 61.036487: xprt_transmit: RPC: 43 xmit complete
<idle>-0 [001] ..s. 61.036789: xs_tcp_data_ready: RPC: xs_tcp_data_ready...
kworker/1:1H-136 [001] .... 61.036798: xs_tcp_data_recv: RPC: xs_tcp_data_recv started
kworker/1:1H-136 [001] .... 61.036799: xs_tcp_data_recv: RPC: reading TCP record fragment of length 24
kworker/1:1H-136 [001] .... 61.036799: xs_tcp_data_recv: RPC: reading XID (4 bytes)
kworker/1:1H-136 [001] .... 61.036800: xs_tcp_data_recv: RPC: reading request with XID 2f4c3f88
kworker/1:1H-136 [001] .... 61.036800: xs_tcp_data_recv: RPC: reading CALL/REPLY flag (4 bytes)
kworker/1:1H-136 [001] .... 61.036801: xs_tcp_data_recv: RPC: read reply XID 2f4c3f88
kworker/1:1H-136 [001] ..s. 61.036801: xs_tcp_data_recv: RPC: XID 2f4c3f88 read 16 bytes
kworker/1:1H-136 [001] ..s. 61.036802: xs_tcp_data_recv: RPC: xprt = ffff880407eca800, tcp_copied = 24, tcp_offset = 24, tcp_reclen = 24
kworker/1:1H-136 [001] ..s. 61.036802: xprt_complete_rqst: RPC: 43 xid 2f4c3f88 complete (24 bytes received)
kworker/1:1H-136 [001] .... 61.036803: xs_tcp_data_recv: RPC: xs_tcp_data_recv done
kworker/1:1H-136 [001] .... 61.036812: xprt_release: RPC: 43 release request ffff88040b270800
# unhide-tcp
Unhide-tcp 20130526
Copyright © 2013 Yago Jesus & Patrick Gouin
License GPLv3+ : GNU GPL version 3 or later
http://www.unhide-forensics.info
Used options:
[*]Starting TCP checking
Found Hidden port that not appears in ss: 805
-- Steve