2008-04-11 23:08:10

by [email protected]

[permalink] [raw]
Subject: Re: [NFS] How to set-up a Linux NFS server to handle massive number of requests

On Thu, Apr 10, 2008 at 02:12:58PM +0200, Carsten Aulbert wrote:
> Hi all,
>
> we have a pretty extreme problem here and I am trying to figure out how
> to get it right.
>
> We have a large cluster consisting of 1340 compute nodes that have an
> automount directory which subsequently triggers an NFS mount (read-only):
>
> $ ypcat auto.data
> -fstype=nfs,nfsvers=3,hard,intr,rsize=8192,wsize=8192,tcp &:/data
>
> $ grep auto.data /etc/auto.master
> /atlas/data yp:auto.data --timeout=5
>
> So far so good.
>
> When submitting 1000 jobs that just do an md5sum of the very same file
> from one single data server, I see very weird effects.
>
> In the standard set-up many connections come into the box (TCP connection
> status SYN_RECV), but those fall over after some time and stay in
> CLOSE_WAIT state until I restart the nfs-kernel-server. Typically it
> looks like this (netstat -an):

That's interesting! But I'm not sure how to figure this out.

Is it possible to get a network trace that shows what's going on?

What happens on the clients?

What kernel version are you using?

--b.

>
> tcp 0 0 10.20.10.14:687 10.10.2.87:799 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.4.1:823 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.65:656 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.30:650 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.0.71:789 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.4:602 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.1:967 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.3.66:915 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.0.55:620 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.41:835 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.2.29:958 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.12:998 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.30:651 SYN_RECV
> tcp 0 0 10.20.10.14:687 10.10.1.4:601 SYN_RECV
> tcp 0 0 10.20.10.14:2049 10.10.1.19:846 ESTABLISHED
> tcp 45 0 10.20.10.14:687 10.10.0.68:979 CLOSE_WAIT
> tcp 45 0 10.20.10.14:687 10.10.3.83:680 CLOSE_WAIT
> tcp 89 0 10.20.10.14:687 10.10.0.79:604 CLOSE_WAIT
> tcp 0 0 10.20.10.14:2049 10.10.2.6:676 ESTABLISHED
> tcp 45 0 10.20.10.14:687 10.10.2.56:913 CLOSE_WAIT
> tcp 45 0 10.20.10.14:687 10.10.0.60:827 CLOSE_WAIT
> tcp 0 0 10.20.10.14:2049 10.10.3.55:778 ESTABLISHED
> tcp 45 0 10.20.10.14:687 10.10.2.86:981 CLOSE_WAIT
> tcp 45 0 10.20.10.14:687 10.10.9.13:792 CLOSE_WAIT
> tcp 89 0 10.20.10.14:687 10.10.2.93:728 CLOSE_WAIT
> tcp 45 0 10.20.10.14:687 10.10.0.20:742 CLOSE_WAIT
> tcp 45 0 10.20.10.14:687 10.10.3.44:982 CLOSE_WAIT
>
>
> I played with different numbers of nfsd threads (ranging from 8 to 1024)
> and increased the number of threads for rpc.mountd from 1 to 64, in quite
> a few combinations, but so far I have not found a consistent set of
> parameters where 1000 nodes are able to read this file at the same time.
>
> Any ideas from anyone or do you need more input from me?
>
> TIA
>
> Carsten
>
> PS: Please Cc me, I'm not yet subscribed.
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by the 2008 JavaOne(SM) Conference
> Don't miss this year's exciting event. There's still time to save $100.
> Use priority code J8TL2D2.
> http://ad.doubleclick.net/clk;198757673;13503038;p?http://java.sun.com/javaone
> _______________________________________________
> NFS maillist - [email protected]
> https://lists.sourceforge.net/lists/listinfo/nfs
> _______________________________________________
> Please note that [email protected] is being discontinued.
> Please subscribe to [email protected] instead.
> http://vger.kernel.org/vger-lists.html#linux-nfs
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html




2008-04-15 04:48:13

by Tom Tucker

[permalink] [raw]
Subject: Re: [NFS] How to set-up a Linux NFS server to handle massive number of requests


Maybe this is a TCP listen-backlog issue?

BTW, with that many mounts won't you run out of "secure" ports (< 1024)?
If so, you'll need to use 'insecure' as an export option.
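To make that last point concrete: 'insecure' belongs on the server side in /etc/exports, not in the automount map. A hypothetical export line for this setup (the 10.10.0.0/16 client range is read off the netstat output above; the other options are just illustrative):

```shell
# /etc/exports on the data server
# 'insecure' lets the server accept client source ports >= 1024
/data  10.10.0.0/16(ro,insecure,no_subtree_check)
```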


On Fri, 2008-04-11 at 19:07 -0400, J. Bruce Fields wrote:
> On Thu, Apr 10, 2008 at 02:12:58PM +0200, Carsten Aulbert wrote:
> > [original problem report and netstat output snipped; see above]
>
> That's interesting! But I'm not sure how to figure this out.
>
> Is it possible to get a network trace that shows what's going on?
>
> What happens on the clients?
>
> What kernel version are you using?
>
> --b.




2008-04-12 06:47:30

by Carsten Aulbert

[permalink] [raw]
Subject: Re: [NFS] How to set-up a Linux NFS server to handle massive number of requests

Hi,

J. Bruce Fields wrote:
>> In the standard set-up many connections get into the box (tcp connection
>> status SYN_RECV) but those fall over after some time and stay in
>> CLOSE_WAIT state until I restart the nfs-kernel-server. Typically that
>> looks like (netstat -an):
>
> That's interesting! But I'm not sure how to figure this out.
>
> Is it possible to get a network trace that shows what's going on?
>

In principle yes, but:
(1) it's huge; I only see this when 500-1000 clients start at about the
same time.
(2) it seems that I don't get a full trace, i.e. the sessions seem to be
incomplete; sometimes I only see a single packet with FIN set. I tried
capturing both with wireshark running locally and with ntap's capturing
device.
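One way to keep the trace manageable might be a capture filter restricted to the two ports involved, with file rotation so the capture stays bounded (the interface name is a guess; 687 is the mountd port and 2049 the nfsd port from the netstat output above):

```shell
# Capture only mountd (687 here) and nfsd (2049) traffic on the server,
# rotating through 10 files of 100 MB each (-C size in MB, -W file count)
tcpdump -i eth0 -s 0 -w /tmp/nfs-trace.pcap -C 100 -W 10 \
    'tcp port 687 or tcp port 2049'
```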

> What happens on the clients?
>
In the logs (/var/log/daemon.log) I only see the mount request failing in
different ways:

Apr 9 12:07:55 n0078 automount[26838]: >> mount: RPC: Timed out
Apr 9 12:07:55 n0078 automount[26838]: mount(nfs): nfs: mount failure d14:/data on /atlas/data/d14
Apr 9 12:07:55 n0078 automount[26838]: failed to mount /atlas/data/d14
Apr 9 12:18:56 n0078 automount[27977]: >> mount: RPC: Remote system error - Connection timed out
Apr 9 12:18:56 n0078 automount[27977]: mount(nfs): nfs: mount failure d14:/data on /atlas/data/d14

I have not yet run tshark in the background on many nodes to see if I
can capture the client's view. Would that be beneficial?

> What kernel version are you using?
>
> --b.

2.6.24.4 on Debian Etch

Right now it seems that running 196 nfsd threads plus 64 threads for
rpc.mountd solves the problem for the time being, although it would be
nice to understand these "magic" numbers ;)
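For reference, on Debian Etch those two settings would normally live in /etc/default/nfs-kernel-server (variable names as used by the nfs-kernel-server package; this assumes an rpc.mountd new enough to accept --num-threads):

```shell
# /etc/default/nfs-kernel-server
# Number of nfsd kernel threads to start
RPCNFSDCOUNT=196
# Extra options for rpc.mountd: run 64 worker threads
RPCMOUNTDOPTS="--num-threads 64"
```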

Thanks!

Carsten
