2008-07-01 08:43:01

by Carsten Aulbert

Subject: Massive NFS problems on large cluster with large number of mounts

Hi all (now to the right email list),

We are running a large cluster and do a lot of cross-mounting between
the nodes. To get this working we run a lot of nfsd threads (196) and
mountd with 64 threads, just in case a single node gets hit by a massive
number of requests. All this is on Debian Etch with a recent 2.6.24
kernel, using autofs4 at the moment to do the automounts.
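
On Debian this boils down to something like the following (values are
illustrative; the variable names assume the stock nfs-kernel-server
defaults file):

# /etc/default/nfs-kernel-server (sketch, not our literal config)
RPCNFSDCOUNT=196                    # number of kernel nfsd threads
RPCMOUNTDOPTS="--num-threads 64"    # worker threads for rpc.mountd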

When running these two not nice scripts:

$ cat test_mount
#!/bin/sh

n_node=1000

for i in `seq 1 $n_node`;do
n=`echo $RANDOM%1342+10001 | bc | sed -e "s/1/n/"`
$HOME/bin/mount.sh $n&
echo -n .
done

$ cat mount.sh
#!/bin/sh

dir="/distributed/spray/data/EatH/S5R1"

ping -c1 -w1 $1 > /dev/null && file="/atlas/node/$1$dir/"`ls -f
/atlas/node/$1$dir/ | head -n 50 | tail -n 1`
md5sum ${file}

With that we encounter different problems:

Running this gives this in syslog:
Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
open(/var/lib/nfs/rpc_pipefs/nfs/clntaa58/idmap): Too many open files
Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
open(/var/lib/nfs/rpc_pipefs/nfs/clntaa58/idmap): Too many open files
Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
open(/var/lib/nfs/rpc_pipefs/nfs/clntaa5e/idmap): Too many open files
Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
open(/var/lib/nfs/rpc_pipefs/nfs/clntaa5e/idmap): Too many open files
Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
open(/var/lib/nfs/rpc_pipefs/nfs/clntaa9c/idmap): Too many open files

Which is not surprising to me. However, there are a few things I'm
wondering about.

(1) All our mounts use nfsvers=3, so why is rpc.idmapd involved at all?
(2) Why is this daemon growing so extremely large?
# ps aux|grep rpc.idmapd
root 2309 0.1 16.2 2037152 1326944 ? Ss Jun30 1:24
/usr/sbin/rpc.idmapd
NOTE: We are disabling this one now, but it would still be nice to
understand why there seems to be a memory leak.

(3) The script maxes out at about 340 concurrent mounts. Any idea how to
increase this number? We are already running all servers with the
insecure export option, so low ports should not be a restriction.
(4) After running this script, /etc/mtab and /proc/mounts are out of
sync (a quick check is sketched below). Ian Kent of autofs fame
suggested a broken local mount implementation which does not lock mtab
well enough. Any idea about that?
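
A quick way to see the discrepancy (bash; just a sketch of the check):

# compare the mount table kept by mount(8) with the kernel's view
wc -l /etc/mtab /proc/mounts
diff <(awk '{print $1, $2}' /etc/mtab | sort) \
     <(awk '{print $1, $2}' /proc/mounts | sort)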

We are currently testing autofs5, which does not give these messages,
but we are still not using high/unprivileged ports.

TIA for any help you might give us.

Cheers

Carsten

--
Dr. Carsten Aulbert - Max Planck Institut für Gravitationsphysik
Callinstraße 38, 30167 Hannover, Germany
Fon: +49 511 762 17185, Fax: +49 511 762 17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31


2008-07-16 09:49:55

by Carsten Aulbert

Subject: Re: Massive NFS problems on large cluster with large number of mounts

Hi Trond et al.

I'm following up on this discussion because we hit another problem:

Trond Myklebust wrote:

>
> Alternatively, just change the values of /proc/sys/sunrpc/min_resvport
> and /proc/sys/sunrpc/max_resvport to whatever range of ports you
> actually want to use.

This works like a charm. However, if you set these values before
restarting the nfs-kernel-server, you are in deep trouble, since
when nfsd starts it needs to register with the portmapper, right?

But what happens if this request comes from a high^Wunprivileged port?
Right:
Jul 16 11:46:43 d23 portmap[8216]: connect from 127.0.0.1 to set(nfs):
request from unprivileged port
Jul 16 11:46:43 d23 nfsd[8214]: nfssvc: writting fds to kernel failed:
errno 13 (Permission denied)
Jul 16 11:46:44 d23 kernel: [ 8437.726223] NFSD: Using
/var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jul 16 11:46:44 d23 kernel: [ 8437.800607] NFSD: starting 90-second
grace period
Jul 16 11:46:44 d23 kernel: [ 8437.842891] nfsd: last server has exited
Jul 16 11:46:44 d23 kernel: [ 8437.879940] nfsd: unexporting all filesystems
Jul 16 11:46:44 d23 nfsd[8214]: nfssvc: Address already in use


Changing /proc/sys/sunrpc/max_resvport back to 1023 resolves this
issue, but defeats the purpose of the original change. I still need to
look into the portmapper code, but would it be easy to make the
portmapper accept nfsd registrations from "insecure" ports as well?
Since we are (mostly) in a controlled environment, that should not pose
a problem.
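
In other words, something like the following ordering seems to be
needed at boot (port numbers are only examples, not a recommendation):

# keep the range reserved while nfsd registers with the portmapper ...
echo 665  > /proc/sys/sunrpc/min_resvport
echo 1023 > /proc/sys/sunrpc/max_resvport
/etc/init.d/nfs-kernel-server restart

# ... and only widen it again afterwards for the many client mounts
echo 2000 > /proc/sys/sunrpc/max_resvport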

Anyone with an idea?

Thanks a lot

Carsten

2008-07-02 21:04:43

by Trond Myklebust

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Wed, 2008-07-02 at 16:31 -0400, J. Bruce Fields wrote:
> On Wed, Jul 02, 2008 at 04:00:21PM +0200, Carsten Aulbert wrote:
> > Hi all,
> >
> >
> > J. Bruce Fields wrote:
> > >
> > > I'm slightly confused--the above is all about server configuration, but
> > > the below seems to describe only client problems?
> >
> > Well, yes and no. All our servers are clients as well. I.e. we have
> > ~1340 nodes which all export a local directory to be cross-mounted.
> >
> > >> (1) All our mounts use nfsvers=3, so why is rpc.idmapd involved at all?
> > >
> > > Are there actually files named "idmap" in those directories? (Looks to
> > > me like they're only created in the v4 case, so I assume those open
> > > calls would return ENOENT if they didn't return ENFILE....)
> >
> > No, there is not, and since we are not running v4 yet, we have now
> > disabled starting rpc.idmapd on all nodes.
> >
> >
> > >
> > >> (2) Why is this daemon growing so extremely large?
> > >> # ps aux|grep rpc.idmapd
> > >> root 2309 0.1 16.2 2037152 1326944 ? Ss Jun30 1:24
> > >> /usr/sbin/rpc.idmapd
> > >
> > > I think rpc.idmapd has some state for each directory whether they're for
> > > a v4 client or not, since it's using dnotify to watch for an "idmap"
> > > file to appear in each one. The above shows about 2k per mount?
> >
> > As you wrote in your other email, yes, that's 2 GByte, and I have seen
> > boxes with > 500 hung mounts where the process was using all of the
> > 8 GByte. So I do think there is a bug.
> >
> > OTOH, we still have the problem that we can only mount up to ~350
> > remote directories. We think we have tracked this down to the fact that
> > the NFS clients refuse to use ports > 1023 even though the servers are
> > exporting with the "insecure" option. Is there a way to force this?
> > Right now the NFS clients use ports 665-1023 (except a few odd ports
> > which were in use earlier).
> >
> > Any hint on how we should proceed and maybe force the clients to also
> > use ports > 1023? I think that would solve our problems.
>
> I think the below (untested) would tell the client to stop demanding a
> privileged port.

Alternatively, just change the values of /proc/sys/sunrpc/min_resvport
and /proc/sys/sunrpc/max_resvport to whatever range of ports you
actually want to use.

Trond


2008-07-02 21:09:02

by J. Bruce Fields

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Wed, Jul 02, 2008 at 05:04:36PM -0400, Trond Myklebust wrote:
> On Wed, 2008-07-02 at 16:31 -0400, J. Bruce Fields wrote:
> > On Wed, Jul 02, 2008 at 04:00:21PM +0200, Carsten Aulbert wrote:
> > > Any hint on how we should proceed and maybe force the clients to also
> > > use ports > 1023? I think that would solve our problems.
> >
> > I think the below (untested) would tell the client to stop demanding a
> > privileged port.
>
> Alternatively, just change the values of /proc/sys/sunrpc/min_resvport
> and /proc/sys/sunrpc/max_resvport to whatever range of ports you
> actually want to use.

Whoops, yes, I missed those, thanks.

--b.

2008-07-03 05:31:29

by Carsten Aulbert

Subject: Re: Massive NFS problems on large cluster with large number of mounts



Trond Myklebust wrote:

>
> Alternatively, just change the values of /proc/sys/sunrpc/min_resvport
> and /proc/sys/sunrpc/max_resvport to whatever range of ports you
> actually want to use.

That indeed looks great. We will hopefully test this today and see where
the next ceiling is that we will bang our heads into ;)

Thanks

Carsten

2008-07-01 18:22:51

by J. Bruce Fields

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Tue, Jul 01, 2008 at 10:19:55AM +0200, Carsten Aulbert wrote:
> Hi all (now to the right email list),
>
> We are running a large cluster and do a lot of cross-mounting between
> the nodes. To get this working we run a lot of nfsd threads (196) and
> mountd with 64 threads, just in case a single node gets hit by a massive
> number of requests. All this is on Debian Etch with a recent 2.6.24
> kernel, using autofs4 at the moment to do the automounts.

I'm slightly confused--the above is all about server configuration, but
the below seems to describe only client problems?

>
> When running these two not nice scripts:
>
> $ cat test_mount
> #!/bin/sh
>
> n_node=1000
>
> for i in `seq 1 $n_node`;do
> n=`echo $RANDOM%1342+10001 | bc | sed -e "s/1/n/"`
> $HOME/bin/mount.sh $n&
> echo -n .
> done
>
> $ cat mount.sh
> #!/bin/sh
>
> dir="/distributed/spray/data/EatH/S5R1"
>
> ping -c1 -w1 $1 > /dev/null && file="/atlas/node/$1$dir/"`ls -f
> /atlas/node/$1$dir/ | head -n 50 | tail -n 1`
> md5sum ${file}
>
> With that we encounter different problems:
>
> Running this gives this in syslog:
> Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> open(/var/lib/nfs/rpc_pipefs/nfs/clntaa58/idmap): Too many open files
> Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> open(/var/lib/nfs/rpc_pipefs/nfs/clntaa58/idmap): Too many open files
> Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> open(/var/lib/nfs/rpc_pipefs/nfs/clntaa5e/idmap): Too many open files
> Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> open(/var/lib/nfs/rpc_pipefs/nfs/clntaa5e/idmap): Too many open files
> Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> open(/var/lib/nfs/rpc_pipefs/nfs/clntaa9c/idmap): Too many open files
>
> Which is not surprising to me. However, there are a few things I'm
> wondering about.
>
> (1) All our mounts use nfsvers=3, so why is rpc.idmapd involved at all?

Are there actually files named "idmap" in those directories? (Looks to
me like they're only created in the v4 case, so I assume those open
calls would return ENOENT if they didn't return ENFILE....)

> (2) Why is this daemon growing so extremely large?
> # ps aux|grep rpc.idmapd
> root 2309 0.1 16.2 2037152 1326944 ? Ss Jun30 1:24
> /usr/sbin/rpc.idmapd

I think rpc.idmapd has some state for each directory whether they're for
a v4 client or not, since it's using dnotify to watch for an "idmap"
file to appear in each one. The above shows about 2k per mount?
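
A quick way to check how much of that is open directory handles (just
an illustration, untested here):

# how many fds does rpc.idmapd hold, and how many point into rpc_pipefs?
ls /proc/`pidof rpc.idmapd`/fd | wc -l
ls -l /proc/`pidof rpc.idmapd`/fd | grep -c rpc_pipefs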

--b.

> NOTE: We are disabling this one now, but it would still be nice to
> understand why there seems to be a memory leak.
>
> (3) The script maxes out at about 340 concurrent mounts. Any idea how to
> increase this number? We are already running all servers with the
> insecure export option, so low ports should not be a restriction.
> (4) After running this script, /etc/mtab and /proc/mounts are out of
> sync. Ian Kent of autofs fame suggested a broken local mount
> implementation which does not lock mtab well enough. Any idea about that?
>
> We are currently testing autofs5, which does not give these messages,
> but we are still not using high/unprivileged ports.
>
> TIA for any help you might give us.
>
> Cheers
>
> Carsten
>
> --
> Dr. Carsten Aulbert - Max Planck Institut für Gravitationsphysik
> Callinstraße 38, 30167 Hannover, Germany
> Fon: +49 511 762 17185, Fax: +49 511 762 17193
> http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31

2008-07-01 18:26:30

by J. Bruce Fields

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Tue, Jul 01, 2008 at 02:22:50PM -0400, bfields wrote:
> On Tue, Jul 01, 2008 at 10:19:55AM +0200, Carsten Aulbert wrote:
> > Hi all (now to the right email list),
> >=20
> > We are running a large cluster and do a lot of cross-mounting betwe=
en
> > the nodes. To get this running we are running a lot of nfsd (196) a=
nd
> > use mountd with 64 threads, just in case we get a massive number of=
hits
> > onto a single node. All this is on Debian Etch with a recent 2.6.24
> > kernel using autofs4 at the moment to do the automounts.
>=20
> I'm slightly confused--the above is all about server configuration, b=
ut
> the below seems to describe only client problems?
>=20
> >=20
> > When running these two not nice scripts:
> >=20
> > $ cat test_mount
> > #!/bin/sh
> >=20
> > n_node=3D1000
> >=20
> > for i in `seq 1 $n_node`;do
> > n=3D`echo $RANDOM%1342+10001 | bc| sed -e "s/1/n/"`
> > $HOME/bin/mount.sh $n&
> > echo -n .
> > done
> >=20
> > $ cat mount.sh
> > #!/bin/sh
> >=20
> > dir=3D"/distributed/spray/data/EatH/S5R1"
> >=20
> > ping -c1 -w1 $1 > /dev/null&& file=3D"/atlas/node/$1$dir/"`ls -f
> > /atlas/node/$1$dir/|head -n 50 | tail -n 1`
> > md5sum ${file}
> >=20
> > With that we encounter different problems:
> >
> > Running this gives this in syslog:
> > Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> > open(/var/lib/nfs/rpc_pipefs/nfs/clntaa58/idmap): Too many open files
> > Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> > open(/var/lib/nfs/rpc_pipefs/nfs/clntaa58/idmap): Too many open files
> > Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> > open(/var/lib/nfs/rpc_pipefs/nfs/clntaa5e/idmap): Too many open files
> > Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> > open(/var/lib/nfs/rpc_pipefs/nfs/clntaa5e/idmap): Too many open files
> > Jul 1 07:37:19 n1312 rpc.idmapd[2309]: nfsopen:
> > open(/var/lib/nfs/rpc_pipefs/nfs/clntaa9c/idmap): Too many open files
> >
> > Which is not surprising to me. However, there are a few things I'm
> > wondering about.
> >
> > (1) All our mounts use nfsvers=3, so why is rpc.idmapd involved at all?
>
> Are there actually files named "idmap" in those directories? (Looks to
> me like they're only created in the v4 case, so I assume those open
> calls would return ENOENT if they didn't return ENFILE....)
>
> > (2) Why is this daemon growing so extremely large?
> > # ps aux|grep rpc.idmapd
> > root 2309 0.1 16.2 2037152 1326944 ? Ss Jun30 1:24
> > /usr/sbin/rpc.idmapd
>
> I think rpc.idmapd has some state for each directory whether they're for
> a v4 client or not, since it's using dnotify to watch for an "idmap"
> file to appear in each one. The above shows about 2k per mount?

Sorry, no, if ps reports those fields in kilobytes, then that's
megabytes per mount, so yes there's clearly a bug here that needs
fixing.

--b.

>
> --b.
>
> > NOTE: We are disabling this one now, but it would still be nice to
> > understand why there seems to be a memory leak.
> >
> > (3) The script maxes out at about 340 concurrent mounts. Any idea how to
> > increase this number? We are already running all servers with the
> > insecure export option, so low ports should not be a restriction.
> > (4) After running this script, /etc/mtab and /proc/mounts are out of
> > sync. Ian Kent of autofs fame suggested a broken local mount
> > implementation which does not lock mtab well enough. Any idea about that?
> >
> > We are currently testing autofs5, which does not give these messages,
> > but we are still not using high/unprivileged ports.
> >
> > TIA for any help you might give us.
> >
> > Cheers
> >
> > Carsten
> >
> > --
> > Dr. Carsten Aulbert - Max Planck Institut für Gravitationsphysik
> > Callinstraße 38, 30167 Hannover, Germany
> > Fon: +49 511 762 17185, Fax: +49 511 762 17193
> > http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31

2008-07-02 14:00:26

by Carsten Aulbert

Subject: Re: Massive NFS problems on large cluster with large number of mounts

Hi all,


J. Bruce Fields wrote:
>
> I'm slightly confused--the above is all about server configuration, but
> the below seems to describe only client problems?

Well, yes and no. All our servers are clients as well. I.e. we have
~1340 nodes which all export a local directory to be cross-mounted.

>> (1) All our mounts use nfsvers=3, so why is rpc.idmapd involved at all?
>
> Are there actually files named "idmap" in those directories? (Looks to
> me like they're only created in the v4 case, so I assume those open
> calls would return ENOENT if they didn't return ENFILE....)

No, there is not, and since we are not running v4 yet, we have now
disabled starting rpc.idmapd on all nodes.


>
>> (2) Why is this daemon growing so extremely large?
>> # ps aux|grep rpc.idmapd
>> root 2309 0.1 16.2 2037152 1326944 ? Ss Jun30 1:24
>> /usr/sbin/rpc.idmapd
>
> I think rpc.idmapd has some state for each directory whether they're for
> a v4 client or not, since it's using dnotify to watch for an "idmap"
> file to appear in each one. The above shows about 2k per mount?

As you wrote in your other email, yes, that's 2 GByte, and I have seen
boxes with > 500 hung mounts where the process was using all of the
8 GByte. So I do think there is a bug.

OTOH, we still have the problem that we can only mount up to ~350
remote directories. We think we have tracked this down to the fact that
the NFS clients refuse to use ports > 1023 even though the servers are
exporting with the "insecure" option. Is there a way to force this?
Right now the NFS clients use ports 665-1023 (except a few odd ports
which were in use earlier).

Any hint on how we should proceed and maybe force the clients to also
use ports > 1023? I think that would solve our problems.

Cheers

Carsten


2008-07-02 20:31:31

by J. Bruce Fields

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Wed, Jul 02, 2008 at 04:00:21PM +0200, Carsten Aulbert wrote:
> Hi all,
>
>
> J. Bruce Fields wrote:
> >
> > I'm slightly confused--the above is all about server configuration, but
> > the below seems to describe only client problems?
>
> Well, yes and no. All our servers are clients as well. I.e. we have
> ~1340 nodes which all export a local directory to be cross-mounted.
>
> >> (1) All our mounts use nfsvers=3, so why is rpc.idmapd involved at all?
> >
> > Are there actually files named "idmap" in those directories? (Looks to
> > me like they're only created in the v4 case, so I assume those open
> > calls would return ENOENT if they didn't return ENFILE....)
>
> No, there is not, and since we are not running v4 yet, we have now
> disabled starting rpc.idmapd on all nodes.
>
>
> >
> >> (2) Why is this daemon growing so extremely large?
> >> # ps aux|grep rpc.idmapd
> >> root 2309 0.1 16.2 2037152 1326944 ? Ss Jun30 1:24
> >> /usr/sbin/rpc.idmapd
> >
> > I think rpc.idmapd has some state for each directory whether they're for
> > a v4 client or not, since it's using dnotify to watch for an "idmap"
> > file to appear in each one. The above shows about 2k per mount?
>
> As you wrote in your other email, yes, that's 2 GByte, and I have seen
> boxes with > 500 hung mounts where the process was using all of the
> 8 GByte. So I do think there is a bug.
>
> OTOH, we still have the problem that we can only mount up to ~350
> remote directories. We think we have tracked this down to the fact that
> the NFS clients refuse to use ports > 1023 even though the servers are
> exporting with the "insecure" option. Is there a way to force this?
> Right now the NFS clients use ports 665-1023 (except a few odd ports
> which were in use earlier).
>
> Any hint on how we should proceed and maybe force the clients to also
> use ports > 1023? I think that would solve our problems.

I think the below (untested) would tell the client to stop demanding a
privileged port.

Then you may find you run into other problems, I don't know. Sounds
like nobody's using this many mounts, so you get to find out what the
next limit is.... But if it works, then maybe someday we should add a
mount option to control this.

--b.


diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 8945307..51f68cc 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -300,9 +300,7 @@ struct rpc_clnt *rpc_create(struct rpc_create_args *args)
* but it is always enabled for rpciod, which handles the connect
* operation.
*/
- xprt->resvport = 1;
- if (args->flags & RPC_CLNT_CREATE_NONPRIVPORT)
- xprt->resvport = 0;
+ xprt->resvport = 0;

clnt = rpc_new_client(args, xprt);
if (IS_ERR(clnt))

2008-08-15 20:34:05

by Chuck Lever

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Wed, Jul 30, 2008 at 6:01 PM, Chuck Lever <[email protected]> wrote:
> On Wed, Jul 30, 2008 at 3:33 PM, Chuck Lever <[email protected]> wrote:
>> On Wed, Jul 30, 2008 at 1:53 PM, J. Bruce Fields <[email protected]> wrote:
>>> On Mon, Jul 28, 2008 at 04:55:50PM -0400, Chuck Lever wrote:
>>>> On Thu, Jul 17, 2008 at 11:11 AM, Chuck Lever <[email protected]> wrote:
>>>> > On Thu, Jul 17, 2008 at 10:48 AM, J. Bruce Fields <[email protected]> wrote:
>>>> >> On Thu, Jul 17, 2008 at 10:47:25AM -0400, Chuck Lever wrote:
>>>> >>> On Wed, Jul 16, 2008 at 3:06 PM, J. Bruce Fields <[email protected]> wrote:
>>>> >>> > The immediate problem seems like a kernel bug to me--it seems to me that
>>>> >>> > the calls to local daemons should be ignoring {min_,max}_resvport. (Or
>>>> >>> > is there some way the daemons can still know that those calls come from
>>>> >>> > the local kernel?)
>>>> >>>
>>>> >>> I tend to agree. The rpcbind client (at least) does specifically
>>>> >>> require a privileged port, so a large min/max port range would be out
>>>> >>> of the question for those rpc_clients.
>>>> >>
>>>> >> Any chance I could talk you into doing a patch for that?
>>>> >
>>>> > I can look at it when I get back next week.
>>>>
>>>> I've been pondering this.
>>>>
>>>> It seems like the NFS client is a rather unique case for using
>>>> unprivileged ports; most or all of the other RPC clients in the kernel
>>>> want to use privileged ports pretty much all the time, and have
>>>> learned to switch this off as needed and appropriate. We even have an
>>>> internal API feature for doing this: the RPC_CLNT_CREATE_NONPRIVPORT
>>>> flag to rpc_create().
>>>>
>>>> And instead of allowing a wide source port range, it would be better
>>>> for the NFS client to use either privileged ports, or unprivileged
>>>> ports, but not both, for the same mount point. Otherwise we could be
>>>> opening ourselves up for non-deterministic behavior: "How come
>>>> sometimes I get EPERM when I try to mount my NFS servers, but other
>>>> times the same mount command works fine?" or "Sometimes after a long
>>>> idle period my NFS mount points stop working, and all the programs
>>>> running on the mount point get EACCES."
>>>>
>>>> It seems like a good solution would be to:
>>>>
>>>> 1. Make the xprt_minresvport and xprt_maxresvport sysctls mean what
>>>> they say: they are _reserved_ port limits. Thus xprt_maxresvport
>>>> should never be allowed to be larger than 1023, and xprt_minresvport
>>>> should always be made to be strictly less than xprt_maxresvport; and
>>>
>>> That would break existing setups: so, someone googles for "nfs linux
>>> large numbers of mounts" and comes across:
>>>
>>> http://marc.info/?l=linux-nfs&m=121509091004851&w=2
>>>
>>> They add
>>>
>>> echo 2000 >/proc/sys/sunrpc/max_resvport
>>>
>>> to their initscripts, and their problem goes away. A year later, with
>>> this incident long forgotten, they upgrade their kernel, start getting
>>> failed mounts, and in the worst case end up debugging the whole problem
>>> from scratch again.
>>
>>>> 2. Introduce a mechanism to specifically enable the NFS client to use
>>>> non-privileged ports. It could be a new mount option like "insecure"
>>>> (which is what some other O/Ses use) or "unpriv-source-port" for
>>>> example. I tend to dislike the former because such a feature is
>>>> likely to be quite useful with Kerberos-authenticated NFS, and
>>>> "sec=krb5,insecure" is probably a little funny looking, but
>>>> "sec=krb5,unpriv-source-port" makes it pretty clear what is going on.
>>>
>>> But I can see the argument for the mount option.
>>>
>>> Maybe we could leave the meaning of the sysctls alone, and add
>>> noresvport as an alternate way to allow use of nonreserved ports?
>>>
>>> In any case, this all seems a bit orthogonal to the problem of what
>>> ports the rpcbind client uses, right?
>>
>> No, this is exactly the original problem. The reason xprt_maxresvport
>> is allowed to go larger than 1023 is to permit more NFS mounts. There
>> really is no other reason for it I can think of.
>>
>> But it's broken (or at least inconsistent) behavior that max_resvport
>> can go past 1023 in the first place. The name is "max_resvport" --
>> Maximum Reserved Port. A port value greater than 1023 is not a
>> reserved port. These sysctls are designed to restrict the range of
>> ports used when a _reserved_ port is requested, not when _any_ source
>> port is requested. Trond's suggestion is an "off label" use of this
>> facility.
>>
>> And rpcbind isn't the only kernel-level RPC service that requires a
>> reserved port. The kernel-level NSM code that calls user space, for
>> example, is one such service. In other words, rpcbind isn't the only
>> service that could potentially hit this issue, so an rpcbind-only fix
>> would be incomplete.
>>
>> We already have an appropriate interface for kernel RPC services to
>> request a non-privileged port. The NFS client should use that
>> interface.
>>
>> Now, we don't have to change both at the same time. We can introduce
>> the mount option now; the default reserved port range is still good.
>> And eventually folks using the sysctl will hit the rpcbind bug (or a
>> lock recovery problem), trace it back to this issue, and change their
>> mount options and reset their resvport sysctls.
>
> Unfortunately we are out of NFS_MOUNT_ flags: there are already 16
> defined and this is a legacy kernel ABI, so I'm not sure if we are
> allowed to use the upper 16 bits in the flags word.
>
> Will think about this more.

We had some discussion about this at the pub last night.

Trond, NFS_MOUNT_FLAGMASK is used in nfs_init_server() and
nfs4_init_server() for both legacy binary and text-based mounts. This
needs to be moved to a legacy-only path if we want to use the
high-order 16 bits in the 'flags' field for text-based mounts.

I reviewed the Solaris mount_nfs(1M) man page (I hope this is the
correct place to look). There doesn't appear to be a mount option to
make Solaris NFS clients use a reserved port. Not sure if there's some
other UI (like a config file in /etc).

FreeBSD and Mac OS both use "[no]resvport" as Mike pointed out
earlier. That's my vote for the new Linux mount option.
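
If it is adopted under that name, client usage would presumably look
something like this (illustrative only):

# hypothetical example -- assumes a future "noresvport" text mount option
mount -t nfs -o nfsvers=3,noresvport server:/export /mnt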

[ Sidebar: I found this in the Mac OS mount_nfs(8) man page:

noconn Do not connect UDP sockets. For UDP mount points, do not do a
connect(2). This must be used for servers that do not reply to
requests from the standard NFS port number 2049. It may also be
required for servers with more than one IP address if replies come
from an address other than the one specified in the requests.

An interesting consideration if we support connected UDP sockets for
NFS at some point. ]

--
"Officer. Ma'am. Squeaker."
-- Mr. Incredible

2008-08-15 20:48:08

by Myklebust, Trond

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Fri, 2008-08-15 at 16:34 -0400, Chuck Lever wrote:
> Trond, NFS_MOUNT_FLAGMASK is used in nfs_init_server() and
> nfs4_init_server() for both legacy binary and text-based mounts. This
> needs to be moved to a legacy-only path if we want to use the
> high-order 16 bits in the 'flags' field for text-based mounts.

We definitely want to do this. The point of introducing text-based
mounts was to allow us to add functionality without having to worry
about legacy binary mount formats. The mask should be there in order to
ensure that binary formats don't start enabling features that they
cannot support. There is no justification for applying it to the text
mount path.

> I reviewed the Solaris mount_nfs(1M) man page (I hope this is the
> correct place to look). There doesn't appear to be a mount option to
> make Solaris NFS clients use a reserved port. Not sure if there's some
> other UI (like a config file in /etc).
>
> FreeBSD and Mac OS both use "[no]resvport" as Mike pointed out
> earlier. That's my vote for the new Linux mount option

Agreed: we should try to follow the standard set by existing
implementations wherever we can...

> [ Sidebar: I found this in the Mac OS mount_nfs(8) man page:
>
> noconn Do not connect UDP sockets. For UDP mount points, do not do a
> connect(2). This must be used for servers that do not reply to
> requests from the standard NFS port number 2049. It may also be
> required for servers with more than one IP address if replies come
> from an address other than the one specified in the requests.
>
> An interesting consideration if we support connected UDP sockets for
> NFS at some point. ]

Hmm... Well, we already don't support servers that reply to a UDP
request from a different IP address, and I can't see that we should
really care. Aside from the fact that most clients will use TCP by
default these days, it is quite trivial for a server to track on which
interface a UDP request was received, and ensure that the reply is sent
on the same interface. In fact, we already do this in the Linux server
AFAICR...

Cheers
Trond
--
Trond Myklebust
Linux NFS client maintainer

NetApp
[email protected]
http://www.netapp.com

2008-08-15 21:04:37

by Myklebust, Trond

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Fri, 2008-08-15 at 16:47 -0400, Trond Myklebust wrote:
> On Fri, 2008-08-15 at 16:34 -0400, Chuck Lever wrote:
> > Trond, NFS_MOUNT_FLAGMASK is used in nfs_init_server() and
> > nfs4_init_server() for both legacy binary and text-based mounts. This
> > needs to be moved to a legacy-only path if we want to use the
> > high-order 16 bits in the 'flags' field for text-based mounts.
>
> We definitely want to do this. The point of introducing text-based
> mounts was to allow us to add functionality without having to worry
> about legacy binary mount formats. The mask should be there in order to
> ensure that binary formats don't start enabling features that they
> cannot support. There is no justification for applying it to the text
> mount path.

I've attached the patch...

Cheers
Trond
--
Trond Myklebust
Linux NFS client maintainer

NetApp
[email protected]
http://www.netapp.com


Attachments:
linux-2.6.27-005-dont_apply_nfs_mount_flagmask_to_text_mounts.dif (1.89 kB)

2008-08-15 21:39:37

by Chuck Lever

Subject: Re: Massive NFS problems on large cluster with large number of mounts

On Fri, Aug 15, 2008 at 5:04 PM, Trond Myklebust
<[email protected]> wrote:
> On Fri, 2008-08-15 at 16:47 -0400, Trond Myklebust wrote:
>> On Fri, 2008-08-15 at 16:34 -0400, Chuck Lever wrote:
>> > Trond, NFS_MOUNT_FLAGMASK is used in nfs_init_server() and
>> > nfs4_init_server() for both legacy binary and text-based mounts. This
>> > needs to be moved to a legacy-only path if we want to use the
>> > high-order 16 bits in the 'flags' field for text-based mounts.
>>
>> We definitely want to do this. The point of introducing text-based
>> mounts was to allow us to add functionality without having to worry
>> about legacy binary mount formats. The mask should be there in order to
>> ensure that binary formats don't start enabling features that they
>> cannot support. There is no justification for applying it to the text
>> mount path.
>
> I've attached the patch...

Thanks!

--
"Officer. Ma'am. Squeaker."
-- Mr. Incredible