Hi,
I hate to bore you all with the same old stuff, but I'm still fighting
problems caused by nfsd's dropping active connections.
The most recent episode in this saga is a problem with the Linux
client.
Consider a network with a single Linux 2.4 based home server, a few
hundred clients, all using TCP. In Linux 2.4, nfsd starts dropping
connections when it reaches a limit of (nrthreads + 3) * 10 open
connections. With 4 threads, this means 70 connections, and with 8 threads
this means 110 connections max. Both of which are totally inadequate for
this network. To get out of the congestion zone, we would need to bump
the number of threads to about 20, which is just silly.
The very same network has been served well with just 4 threads all
the time while using UDP.
With the 2.6 kernel, things get even worse as the formula was changed to
(nrthreads + 3) * 5, so you'll max out at 35 (4 threads) and 55 (with
8 threads), respectively. To serve 200 mounts via TCP simultaneously,
you'd need close to 40 nfsd threads.
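
To make the arithmetic easy to reproduce, here is a tiny stand-alone C
model of the two formulas (this is not the svcsock.c code itself, just
the limits as described above):

/* Model of the nfsd connection caps described above -- not kernel code. */
#include <stdio.h>

static int cap_24(int nrthreads) { return (nrthreads + 3) * 10; }  /* 2.4 */
static int cap_26(int nrthreads) { return (nrthreads + 3) * 5; }   /* 2.6 */

int main(void)
{
        int threads[] = { 4, 8, 20, 40 };
        size_t i;

        for (i = 0; i < sizeof(threads) / sizeof(threads[0]); i++)
                printf("%2d threads: 2.4 cap %3d, 2.6 cap %3d\n",
                       threads[i], cap_24(threads[i]), cap_26(threads[i]));
        return 0;
}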
In theory, all clients should be able to cope gracefully with such drops,
but even the Linux client runs into a couple of SNAFUs with these.
One: with a 50% probability, nfsd decides to drop the _newest_ connection,
which is the one it just accepted. When the Linux client sees a fresh
connection go down before it was able to send anything across, it
backs off for 15 to 60 seconds, hanging the NFS mount (with 2.6.5-pre,
it's always 60 seconds). This is rather annoying for the KDE users here,
because KDE applications like to scribble to the home directory all
the time, and their entire session freezes when NFS hangs.
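
For illustration, here's a rough stand-alone model of that back-off;
whether the delay really doubles between attempts is an assumption on
my part, and the 15 and 60 second bounds are simply what I'm observing:

/* Rough model of the client reconnect back-off -- not the real xprt code.
 * The doubling is an assumption; the 15/60 second bounds are observed. */
#include <stdio.h>

#define MIN_DELAY 15    /* seconds */
#define MAX_DELAY 60    /* seconds */

static int next_delay(int cur)
{
        int next = cur ? cur * 2 : MIN_DELAY;
        return next > MAX_DELAY ? MAX_DELAY : next;
}

int main(void)
{
        int delay = 0, attempt;

        for (attempt = 1; attempt <= 4; attempt++) {
                delay = next_delay(delay);
                printf("reconnect attempt %d: wait %d seconds\n",
                       attempt, delay);
        }
        return 0;
}

Every one of those waits is up to a minute during which the session
just sits there.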
Two: people have reported that files vanished and/or rename/remove
operations failed.
I think this is also due to the TCP disconnects. Here is what I believe
is happening:
- user X: unlink("blafoo")
- kernel: sends NFS call to server REMOVE "blafoo"
- nfsd thread A receives the request, removes file blafoo, and waits for
  some file system i/o to sync the change to disk
- a new TCP connection comes in. Another nfsd thread B decides
  it needs to nuke some connections and selects user X's connection
- nfsd thread A decides it should send the response now,
but finds the socket is gone. Drops the reply.
- client kernel: reconnect to NFS server
- server drops connection
- client waits for a while, reconnects again,
resends REMOVE "blafoo"
- NFS server: sorry, ENOENT: there's no such file "blafoo"
Normally, the NFS server's replay cache should protect from this sort
of behavior, but the long timeouts before the client can reconnect
effectively mean the cached reply has been forgotten by the time the
retransmitted call arrives.
This is not a theoretical case; users here have reported that
files vanish mysteriously several times a day.
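
To illustrate why the cache doesn't save us, here's a toy stand-alone
model (not the real nfsd reply cache -- the slot count and recycling
policy are grossly simplified) showing how the cached REMOVE reply gets
recycled while the client sits in its back-off:

/* Toy model of a fixed-size reply cache -- not the real nfsd cache. */
#include <stdio.h>

#define CACHE_SLOTS 4           /* tiny on purpose, just for illustration */

struct cache_entry {
        unsigned int xid;       /* RPC transaction id */
        int status;             /* cached reply status */
        int in_use;
};

static struct cache_entry cache[CACHE_SLOTS];
static int next_victim;

/* Remember a reply; recycle the oldest slot when the cache is full. */
static void cache_reply(unsigned int xid, int status)
{
        struct cache_entry *e = &cache[next_victim];

        next_victim = (next_victim + 1) % CACHE_SLOTS;
        e->xid = xid;
        e->status = status;
        e->in_use = 1;
}

/* Return 1 and the cached status if the retransmitted xid is still there. */
static int lookup_reply(unsigned int xid, int *status)
{
        int i;

        for (i = 0; i < CACHE_SLOTS; i++) {
                if (cache[i].in_use && cache[i].xid == xid) {
                        *status = cache[i].status;
                        return 1;
                }
        }
        return 0;
}

int main(void)
{
        unsigned int xid;
        int status;

        cache_reply(1000, 0);   /* REMOVE "blafoo" done, reply cached */

        /* While the client backs off for up to 60 seconds, other
         * traffic pushes the entry out of the cache. */
        for (xid = 1001; xid <= 1004; xid++)
                cache_reply(xid, 0);

        if (lookup_reply(1000, &status))
                printf("retransmit: replaying cached status %d\n", status);
        else
                printf("retransmit: cache miss, REMOVE re-executed -> ENOENT\n");
        return 0;
}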
Three: people reported lots of messages in their syslog saying
"nfs_rename: target foo/bar busy, d_count=2". This is a variation
of the above. nfs_rename finds that someone still has foo/bar
open and decides it needs to do a sillyrename. The rename
fails with the spurious ENOENT error described above, causing
the entire rename operation to fail.
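
Here's a small stand-alone model of that interaction; nfs_rename_model
and server_sillyrename are made-up names, but the flow is the one I see
in the client: a busy target gets sillyrenamed first, and if that
sub-call hits the spurious ENOENT, the whole rename fails:

/* Toy model of the sillyrename failure -- not the real nfs_rename code. */
#include <stdio.h>
#include <errno.h>

/* The server's view of the sillyrename: with the reply lost and the
 * cache entry expired, the retransmit is re-executed and gets ENOENT. */
static int server_sillyrename(const char *target, int cache_hit)
{
        return cache_hit ? 0 : -ENOENT;
}

static int nfs_rename_model(const char *target, int d_count, int cache_hit)
{
        if (d_count > 1) {
                int err;

                printf("nfs_rename: target %s busy, d_count=%d\n",
                       target, d_count);
                err = server_sillyrename(target, cache_hit);
                if (err)
                        return err;     /* the whole rename fails */
        }
        return 0;
}

int main(void)
{
        int err = nfs_rename_model("foo/bar", 2, 0 /* cache entry gone */);

        printf("rename result: %d (%s)\n", err, err ? "failed" : "ok");
        return 0;
}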
Four: Some buggy clients can't deal with it, but I think I mentioned
that already. The prime offender is z/OS; when a fresh connection is killed,
it simply propagates the error to the application, hard mount or not. I
know it's broken, but that doesn't mean we can't be gentler and make
these clients work more smoothly with Linux.
I propose to add the following two patches to the server and client. They
increase the connection limit, stop dropping the newest socket, and
add some printk's to alert the admin of the contention.
As an alternative to hardcoding a formula based on the number of threads,
I could also make the max number of connections a sysctl.
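
For the accept path that would roughly mean the check below (a sketch
only; nfsd_max_connections is a made-up name, and the actual /proc/sys
hook-up isn't shown):

/* Sketch of a tunable connection limit -- name and default are made up. */
#include <stdio.h>

static int nfsd_max_connections = 0;    /* 0 = keep the old formula */

static int connection_limit(int nrthreads)
{
        return nfsd_max_connections ? nfsd_max_connections
                                    : (nrthreads + 3) * 5;
}

int main(void)
{
        printf("default (8 threads): %d\n", connection_limit(8));
        nfsd_max_connections = 256;     /* admin raises the limit */
        printf("tuned   (8 threads): %d\n", connection_limit(8));
        return 0;
}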
Comments,
Olaf
--
Olaf Kirch | The Hardware Gods hate me.
[email protected] |
---------------+
Quick comment --
I think a shorter wait in the client before attempting to
reconnect would improve the likelihood that your completed
but unreplied operations would still be in the server's
replay cache.
That might be a good additional change (if you haven't
suggested that already).
On Thu, 1 Apr 2004 at 12:23:34 +0200, Olaf Kirch wrote:
> I hate to bore you all with the same old stuff, but I'm still fighting
> problems caused by nfsd's dropping active connections.
Thank you for trying to resolve this!
> Consider a network with a single Linux 2.4 based home server, a few
> hundred clients, all using TCP. In Linux 2.4, nfsd starts dropping
> connections when it reaches a limit of (nrthreads + 3) * 10 open
> connections. With 4 threads, this means 70 connections, and with 8 threads
> this means 110 connections max. Both of which are totally inadequate for
> this network. To get out of the congestion zone, we would need to bump
> the number of threads to about 20, which is just silly.
>
> The very same network has been served well with just 4 threads all
> the time while using UDP.
YES! We have exactly such a setup, and we need TCP.
> Second: People have reported that files vanished and/or rename/remove
> operations failed.
YES! We do have such problems. Not often (about once or twice per week),
but it's really annoying, and we had to insert a lot of checks and
timeouts in our software as a workaround. We never found out what the
cause was or how to reliably repeat the fault...
> I propose to add the following two patches to the server and client. They
> increase the connection limit, stop dropping the newest socket, and
> add some printk's to alert the admin of the contention.
I hope this will help. I'll try these patches the next time we reboot
the file server...
> As an alternative to hardcoding a formula based on the number of threads,
> I could also make the max number of connections a sysctl.
This would be better, I think...
On Thursday April 1, [email protected] wrote:
> Hi,
>
> I hate to bore you all with the same old stuff, but I'm still fighting
> problems caused by nfsd's dropping active connections.
>
..snip..
>
> I propose to add the following two patches to the server and client. They
> increase the connection limit, stop dropping the newest socket, and
> add some printk's to alert the admin of the contention.
I'm happy for the server patch to go in.
Andrew Morton has already picked it up and asked if it was OK. I have
confirmed that it is, so you might see it in 2.6.6.
>
> As an alternative to hardcoding a formula based on the number of threads,
> I could also make the max number of connections a sysctl.
Might be good, but I would only bother if a real need presents itself.
Thanks,
NeilBrown
On Sun, 2004-04-04 at 20:13, Neil Brown wrote:
> > As an alternative to hardcoding a formula based on the number of threads,
> > I could also make the max number of connections a sysctl.
>
> Might be good, but I would only bother if a real need presents itself.
Some of the stuff we're discussing for NFSv4.1 might make this of
interest. Many people want to make use of the fact that IPSEC tunnels
offer secure point-to-point communications and that hardware
acceleration for this is pretty cheap. The so-called "CCM"
authentication model for RPCSEC_GSS makes use of this.
...we are talking about 2.7 development here (at the earliest), so
please don't take the above to be anything other than an FYI. I just
wanted to ensure that you are aware that the cost of reconnection might
well rise as we offload more and more to the socket layer.
Cheers,
Trond