Hi,
We have a large linux cluster with 256 backend nodes, 10 development
nodes, plus a few other mounts/machines of various sorts.
When using autofs in an attempt to mount all backend disks at once, we
hit the 256-mount limitation and only get about 246 backend disks
mounted - this is a pain :-/
So yes, your message (which I eventually found via Neil Brown) was most
helpful - please add it to the FAQ and also somewhere under
linux/Documentation/*.
http://sourceforge.net/mailarchive/message.php?msg_id=4215392
Also, can you guess whether 2.5/2.6 will lift the lame 256-mount
limit... ??
cheers,
robin
ps. please CC me on replies as I'm not subscribed to the list
-------------------------------------------------------
This SF.net email is sponsored by: ValueWeb:
Dedicated Hosting for just $79/mo with 500 GB of bandwidth!
No other company gives more support or power for your dedicated server
http://click.atdmt.com/AFF/go/sdnxxaff00300020aff/direct/01/
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs
On Monday April 7, [email protected] wrote:
>
> Hi,
>
> We have a large linux cluster with 256 backend nodes, 10 development
> nodes, plus a few other mounts/machines of various sorts.
>
> When using autofs in an attempt to mount all backend disks at once we
> hit the 256 mounts limitation and only get to about 246 backend disks
> mounted - this is a pain :-/
>
> so yes, your message (that I eventually found via Neil Brown) was most
> helpful and please add it to the FAQ and also into linux/Documentation/*
> somewhere.
> http://sourceforge.net/mailarchive/message.php?msg_id=4215392
I think it isn't that hard to raise the limit. The attached patch,
which is entirely untested, should increase it to 2560.
The issue is that a unique device number must be allocated for each
NFS mount, and the code currently only allocates from the 256 device
numbers with major number 0. This patch allows it to also use major
numbers 238, 237, 236, ..., which aren't used.
I don't think this patch would (or should) get into mainline, but it
ought to work.
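To make the allocation scheme concrete, here is a user-space sketch (not the kernel code itself) of the bitmap approach fs/super.c uses for anonymous device numbers: scan for the first clear bit, set it, and use the bit index as the device number. The function names are illustrative, not the kernel's.

```c
/* User-space sketch of the anonymous device-number bitmap in
 * fs/super.c: find the first clear bit, mark it in use, and return
 * its index.  MAX_ANON is 256 in stock 2.4; the patch below raises
 * it to 2560. */
#include <limits.h>

#define MAX_ANON 2560
#define BITS_PER_WORD (CHAR_BIT * sizeof(unsigned long))

static unsigned long in_use[MAX_ANON / BITS_PER_WORD];

/* Returns an unused device index, or -1 if all MAX_ANON are taken. */
static int alloc_anon_dev(void)
{
    for (int dev = 0; dev < MAX_ANON; dev++) {
        unsigned long *w = &in_use[dev / BITS_PER_WORD];
        unsigned long bit = 1UL << (dev % BITS_PER_WORD);
        if (!(*w & bit)) {
            *w |= bit;   /* mark in use */
            return dev;
        }
    }
    return -1;
}

static void free_anon_dev(int dev)
{
    in_use[dev / BITS_PER_WORD] &= ~(1UL << (dev % BITS_PER_WORD));
}
```

Freed numbers are reused lowest-first, which is why a cluster that mounts and unmounts repeatedly still stays within the fixed pool.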
>
> also, can you guess whether 2.5/2.6 will lift the lame 256 mounts
> limit... ??
If the 32-bit device number stuff really gets in (which is very
likely), then there should be no trouble raising the limit, though we
might need a better data structure to record "in-use" device numbers.
NeilBrown
diff ./fs/super.c~current~ ./fs/super.c
--- ./fs/super.c~current~ 2003-04-08 20:36:25.000000000 +1000
+++ ./fs/super.c 2003-04-08 20:38:22.000000000 +1000
@@ -572,7 +572,7 @@ int do_remount_sb(struct super_block *sb
* filesystems which don't use real block-devices. -- jrs
*/
-enum {Max_anon = 256};
+enum {Max_anon = 2560};
static unsigned long unnamed_dev_in_use[Max_anon/(8*sizeof(unsigned long))];
static spinlock_t unnamed_dev_lock = SPIN_LOCK_UNLOCKED;/* protects the above */
@@ -643,6 +643,7 @@ retry:
set_bit(dev, unnamed_dev_in_use);
spin_unlock(&unnamed_dev_lock);
+ if (dev>=256) dev = (239*256+255)-dev;
s->s_dev = dev;
insert_super(s, type);
return s;
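The one non-obvious line in the patch is the remapping `dev = (239*256+255)-dev`. In 2.4, a dev_t packs an 8-bit major and an 8-bit minor, so dev = major*256 + minor; indices 0-255 stay on major 0, while indices 256 and up are reflected downward from (239*256 + 255), landing in the unused majors 238, 237, and so on. A small sketch (helper names here are illustrative, not from the patch):

```c
/* Demo of the patch's device-number remapping for 2.4's 8/8-bit
 * dev_t layout: indices below 256 are untouched; higher indices are
 * reflected down from (239*256 + 255) into the unused majors
 * 238, 237, ... */
static int fold(int dev)
{
    if (dev >= 256)
        dev = (239 * 256 + 255) - dev;
    return dev;
}

static int major_of(int dev) { return dev >> 8; }
static int minor_of(int dev) { return dev & 0xff; }
```

For example, index 256 becomes major 238 / minor 255, and the highest index, 2559, becomes major 230 / minor 0, so the whole 2560-entry pool fits in majors 230-238 plus the original major 0.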
hi robin-
> We have a large linux cluster with 256 backend nodes, 10 development
> nodes, plus a few other mounts/machines of various sorts.
>
> When using autofs in an attempt to mount all backend disks at once we
> hit the 256 mounts limitation and only get to about 246 backend disks
> mounted - this is a pain :-/
>
> so yes, your message (that I eventually found via Neil Brown) was most
> helpful and please add it to the FAQ and also into
> linux/Documentation/*
> somewhere.
> http://sourceforge.net/mailarchive/message.php?msg_id=4215392
a new FAQ is planned to address this issue. thanks for the
suggestion!
We are seeing very poor NFS _read_ performance with a standard 2.6.2
kernel. We get around 7MB/s over gigabit ethernet, whereas with a
2.4.21-pre2 kernel (don't ask...) we get more like 30MB/s.
ext3 is the filesystem. Uniprocessor, dual, or quad (hyperthreading
enabled) kernels make little difference, and using software raid0
doesn't change much either. We are running Fedora Core 1, which has
nfs-utils-1.0.6-1.
We are doing nothing special with NFS mount options. Output from
'mount' looks like:
... type nfs (rw,nosuid,hard,intr,addr=...)
and the /etc/exports options look like
...(rw,no_root_squash,async)
performance testing:
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
kernel 2.4.21-pre2:
to local disk 2G 19968 97 34922 23 13903 5 22815 89 32775 6 176.4 0
over NFS 2G 20816 96 20009 6 15590 16 22655 94 30771 7 190.7 0
kernel 2.6.2:
to local disk 2G 29896 99 39256 20 12758 5 25267 69 35817 5 188.1 0
over NFS 2G 31321 99 37164 19 2111 71 7995 22 7497 1 197.8 0
So everything is better or OK with 2.6.2, except NFS reads and rewrites,
which are amazingly slow. Timing tests with dd instead of bonnie++ back
these up - i.e. about 5x slower than you might expect.
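For what it's worth, the dd-style check reduces to a few lines of C: read the file sequentially in 1MB chunks and divide bytes by elapsed time. A minimal sketch (the function name is made up for this example; it approximates `dd if=FILE of=/dev/null bs=1M`):

```c
/* Minimal sequential-read throughput check, roughly what
 * "dd if=FILE of=/dev/null bs=1M" measures: read the file in 1 MB
 * chunks and return MB/s, or a negative value on error. */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

static double read_mbps(const char *path)
{
    char *buf = malloc(1 << 20);          /* 1 MB read buffer */
    int fd = open(path, O_RDONLY);
    if (fd < 0 || !buf) {
        free(buf);
        return -1.0;
    }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    long long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, 1 << 20)) > 0)
        total += n;
    gettimeofday(&t1, NULL);

    close(fd);
    free(buf);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    if (secs <= 0)
        secs = 1e-6;  /* guard against timer resolution on tiny files */
    return total / 1048576.0 / secs;
}
```

Note that a second run of either dd or this sketch will largely measure the client's page cache, not the wire, so flush caches (or use a file larger than RAM) between runs.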
A second (unrelated) issue:
A while ago I ping'd you guys about the 256 NFS mounts limit. This limit
still seems to be there in 2.6.2 kernels.
On Tue, Apr 08, 2003 at 08:50:30PM +1000, Neil Brown wrote:
>On Monday April 7, [email protected] wrote:
>> When using autofs in an attempt to mount all backend disks at once we
>> hit the 256 mounts limitation and only get to about 246 backend disks
>> mounted - this is a pain :-/
Currently we have 268 nodes, and are hoping for a 10x larger machine in
the future. Having many disks mounted seems like it will be a fairly
normal event, not some weird exception...
Now that Linus increased the devices structure, can a proper NFS patch
be made please?
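For reference, the enlarged device-number layout in 2.6 can be sketched like this: dev_t grew to 32 bits, split inside the kernel into a 12-bit major and a 20-bit minor, so major 0 alone has room for up to 2^20 anonymous mounts. The macros below mirror the kernel's own in include/linux/kdev_t.h:

```c
/* Sketch of the 2.6 in-kernel device-number layout: 12-bit major,
 * 20-bit minor, packed into a 32-bit value.  These mirror the
 * MKDEV/MAJOR/MINOR macros in include/linux/kdev_t.h. */
#define MINORBITS  20
#define MINORMASK  ((1U << MINORBITS) - 1)

#define MKDEV(ma, mi)  (((ma) << MINORBITS) | (mi))
#define MAJOR(dev)     ((unsigned int)(dev) >> MINORBITS)
#define MINOR(dev)     ((unsigned int)(dev) & MINORMASK)
```

With minors no longer capped at 255, the 256-anonymous-mount ceiling is purely a matter of the allocator in fs/super.c, not the device-number encoding.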
I updated NeilB's 2560-mount patch/hack so that it works with 2.6.2,
and it works OK, but it would be nice to have mainline support for many
mounted disks.
Please CC me on any replies as I don't subscribe to the list.
cheers,
robin
--
Dr Robin Humble http://www.cita.utoronto.ca/~rjh/