2011-05-17 16:40:03

by James Pearson

[permalink] [raw]
Subject: How to control the order of different export options for different client formats?

I'm using CentOS 5.x (nfs-utils based on v1.0.9) - and have been using
the following in /etc/exports:

/export *(rw,async) @backup(rw,no_root_squash,async)

which works fine - hosts in the backup NIS netgroup mount the file
system with no_root_squash and other clients with root_squash

However, I now want to restrict the export to all clients in a single
subnet - so I now have /etc/exports as:

/export 172.16.0.0/20(rw,async) @backup(rw,no_root_squash,async)

Unfortunately, hosts in the backup NIS netgroup (which are also in the
172.16.0.0/20 subnet) no longer mount with no_root_squash

It appears that the subnet export takes precedence over the netgroup
export (it doesn't matter in what order the subnets/netgroups exports
are listed in /etc/exports) - so the netgroup client options are ignored
as a match has already been found in the subnet export.

Is there any way to control the order in which clients are checked for
export options?

i.e. I would like netgroups to take precedence over subnets

Thanks

James Pearson


2011-05-19 13:12:37

by J. Bruce Fields

[permalink] [raw]
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 06:09:21PM +0530, [email protected] wrote:
> I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .

Oh. As noted there, spnfs is unmaintained.

And, in any case, we'd need many more details about your setup.

--b.

>
> -Taousif
>
> -----Original Message-----
> From: J. Bruce Fields [mailto:[email protected]]
> Sent: Thursday, May 19, 2011 5:20 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> On Thu, May 19, 2011 at 10:56:44AM +0530, [email protected] wrote:
> > Hi,
> >
> > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
>
> So you're using GFS2 on the server? With what sort of storage?
>
> --b.
>
> >
> > Extremely sorry for causing confusing .
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:[email protected]]
> > Sent: Wednesday, May 18, 2011 9:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > You sent this message as a reply to an unrelated message, which is
> > confusing to those of us with threaded mail readers.
> >
> > On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> >
> > What are you using as the server, and what as the client?
> >
> > --b.
> >
> > >
> > > Here are some numbers, which were captured by the IOzone tool.
> > >
> > >
> > > 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> > > With Single Dataserver:
> > > Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > > Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> > >
> > > With Two Dataservers :
> > > Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > > Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> > >
> > >
> > > Can somebody tell me What could be the issue....
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > the body of a message to [email protected]
> > > More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-19 13:15:37

by Taousif_Ansari

[permalink] [raw]
Subject: RE: Performance Issue with multiple dataserver

Then what should I follow, and what details are needed....

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of J. Bruce Fields
Sent: Thursday, May 19, 2011 6:43 PM
To: Ansari, Taousif - Dell Team
Cc: [email protected]
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 06:09:21PM +0530, [email protected] wrote:
> I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .

Oh. As noted there, spnfs is unmaintained.

And, in any case, we'd need many more details about your setup.

--b.

>
> -Taousif
>
> -----Original Message-----
> From: J. Bruce Fields [mailto:[email protected]]
> Sent: Thursday, May 19, 2011 5:20 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> On Thu, May 19, 2011 at 10:56:44AM +0530, [email protected] wrote:
> > Hi,
> >
> > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
>
> So you're using GFS2 on the server? With what sort of storage?
>
> --b.
>
> >
> > Extremely sorry for causing confusing .
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:[email protected]]
> > Sent: Wednesday, May 18, 2011 9:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > You sent this message as a reply to an unrelated message, which is
> > confusing to those of us with threaded mail readers.
> >
> > On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> >
> > What are you using as the server, and what as the client?
> >
> > --b.
> >
> > >
> > > Here are some numbers, which were captured by the IOzone tool.
> > >
> > >
> > > 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> > > With Single Dataserver:
> > > Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > > Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> > >
> > > With Two Dataservers :
> > > Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > > Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> > >
> > >
> > > Can somebody tell me What could be the issue....
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > the body of a message to [email protected]
> > > More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-18 10:19:39

by James Pearson

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

NeilBrown wrote:
>
> Unfortunately you cannot do that.
>
> The place in the code where this is determined is towards the end of
> 'lookup_export' in utils/mountd/cache.c
>
> Were I to try to 'fix' this I would probably define a new field in 'struct
> exportent' which holds a 'priority'.
>
> Then allow a setting like "priority=4" in /etc/exports
>
> Then change the code in lookup_export to choose the one with the higher
> priority, rather than the 'first' one.
>
> NeilBrown

I've hacked the source to make netgroups take precedence over subnets by
moving MCL_NETGROUP before MCL_SUBNETWORK in the enum in
support/include/exportfs.h - which works for me, as I only use
netgroups, subnets and anonymous (in that priority order).

IMHO the priority of exports should really be as they appear on the line
in /etc/exports, but I guess if that were to change, it would break
existing /etc/exports that use the current priority ordering (either by
design or accident!).

Having a priority option would be a very good idea - and maybe in the
meantime the exports man page should be updated with info about the
current priority ordering?

Thanks

James Pearson

2011-05-20 13:38:17

by James Pearson

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

J. Bruce Fields wrote:
>>Having a priority option would be a very good idea - and maybe in
>>the meantime the exports man page should be updated with info about
>>the current priority ordering?
>
>
> Sounds good. Could you send in a patch?

Here's an attempt - based on the info from Max Matveev <[email protected]>
earlier in this thread.

James Pearson

--- exports.man.dist 2010-09-28 13:24:16.000000000 +0100
+++ exports.man 2011-05-20 14:29:45.555314605 +0100
@@ -92,6 +92,11 @@
'''.B \-\-public\-root
'''option. Multiple specifications of a public root will be ignored.
.PP
+.SS Matched Client Priories
+The order in which the different \fIMachine Name Formats\fR are matched
+against clients is in the priority order: \fIhostname, IP address or networks,
+wildcards, netgroup and anonymous\fR. Entries at the same level are matched
+in the same order in which they appear in \fI/etc/exports\fR.
.SS RPCSEC_GSS security
You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
to restrict access to clients using rpcsec_gss security. However, this




2011-05-18 16:12:40

by J. Bruce Fields

[permalink] [raw]
Subject: Re: Performance Issue with multiple dataserver

You sent this message as a reply to an unrelated message, which is
confusing to those of us with threaded mail readers.

On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.

What are you using as the server, and what as the client?

--b.

>
> Here are some numbers, which were captured by the IOzone tool.
>
>
> 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> With Single Dataserver:
> Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
>
> With Two Dataservers :
> Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
>
>
> Can somebody tell me What could be the issue....
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-19 13:43:57

by J. Bruce Fields

[permalink] [raw]
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 06:44:59PM +0530, [email protected] wrote:
> Then what should I follow, and what details are needed....

There isn't really any supported server-side pNFS.

The closest is the GFS2-based code, for which you need to install
Benny's latest tree, configure a shared block device, create a GFS2
filesystem on it, mount it across all DS's and the MDS, and export it
from all of them--but I don't believe anyone has written step-by-step
instructions for that.
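
A rough sketch of what those steps might look like on each node follows;
the cluster name, device path, journal count and export options are all
placeholders, not a tested recipe:

```shell
# Rough sketch of the steps above -- placeholders throughout,
# not a tested recipe.

# 1. Once, on any cluster node: create a GFS2 filesystem on the shared
#    block device, with one journal per node that will mount it
#    (the MDS plus each DS).
mkfs.gfs2 -p lock_dlm -t mycluster:pnfs -j 3 /dev/shared/disk

# 2. On the MDS and every DS (all running Benny's pnfs tree):
#    mount the shared filesystem...
mkdir -p /export/pnfs
mount -t gfs2 /dev/shared/disk /export/pnfs

# 3. ...and export it over NFS from all of them.
echo '/export/pnfs *(rw,no_subtree_check)' >> /etc/exports
exportfs -ra
```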

--b.

>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of J. Bruce Fields
> Sent: Thursday, May 19, 2011 6:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> On Thu, May 19, 2011 at 06:09:21PM +0530, [email protected] wrote:
> > I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
>
> Oh. As noted there, spnfs is unmaintained.
>
> And, in any case, we'd need many more details about your setup.
>
> --b.
>
> >
> > -Taousif
> >
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:[email protected]]
> > Sent: Thursday, May 19, 2011 5:20 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 10:56:44AM +0530, [email protected] wrote:
> > > Hi,
> > >
> > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
> >
> > So you're using GFS2 on the server? With what sort of storage?
> >
> > --b.
> >
> > >
> > > Extremely sorry for causing confusing .
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:[email protected]]
> > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: [email protected]
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > You sent this message as a reply to an unrelated message, which is
> > > confusing to those of us with threaded mail readers.
> > >
> > > On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> > > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> > >
> > > What are you using as the server, and what as the client?
> > >
> > > --b.
> > >
> > > >
> > > > Here are some numbers, which were captured by the IOzone tool.
> > > >
> > > >
> > > > 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> > > > With Single Dataserver:
> > > > Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > > > Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> > > >
> > > > With Two Dataservers :
> > > > Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > > > Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> > > >
> > > >
> > > > Can somebody tell me What could be the issue....
> > > > --
> > > > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > > > the body of a message to [email protected]
> > > > More majordomo info at http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-18 00:46:53

by Max Matveev

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

On Tue, 17 May 2011 17:21:24 +0100, James Pearson wrote:

james-p> Is there any way to control the order in which clients are
james-p> checked for export options?

james-p> i.e. I would like netgroups to take precedence over subnets

You're out of luck here - the entries are checked in the following
order: FQDN, subnet, wildcard, netgroup, anonymous and finally gss.
Here 'wildcard' means anything except the bare '*', which is considered
anonymous. Any entries on the same "level", i.e. two netgroups or two
FQDNs, are checked in the same order in which they appear in
/etc/exports.
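
For example (hypothetical hostname, subnet and netgroup), a line like
the following would behave per that ordering no matter how the client
specifications are arranged on the line:

```
/export backup01.example.com(rw,no_root_squash) 172.16.0.0/20(rw) @backup(rw,no_root_squash)

# backup01.example.com             -> FQDN entry wins (no_root_squash)
# any other 172.16.0.0/20 host    -> subnet entry wins, even for hosts
#                                    that are also in @backup
# @backup hosts outside the subnet -> netgroup entry
```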

max

2011-05-19 05:26:49

by Taousif_Ansari

[permalink] [raw]
Subject: RE: Performance Issue with multiple dataserver

Hi,

I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.

Extremely sorry for causing confusion.
-----Original Message-----
From: J. Bruce Fields [mailto:[email protected]]
Sent: Wednesday, May 18, 2011 9:43 PM
To: Ansari, Taousif - Dell Team
Cc: [email protected]
Subject: Re: Performance Issue with multiple dataserver

You sent this message as a reply to an unrelated message, which is
confusing to those of us with threaded mail readers.

On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.

What are you using as the server, and what as the client?

--b.

>
> Here are some numbers, which were captured by the IOzone tool.
>
>
> 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> With Single Dataserver:
> Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
>
> With Two Dataservers :
> Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
>
>
> Can somebody tell me What could be the issue....
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-18 16:20:51

by J. Bruce Fields

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

On Wed, May 18, 2011 at 11:19:37AM +0100, James Pearson wrote:
> NeilBrown wrote:
> >
> >Unfortunately you cannot do that.
> >
> >The place in the code where this is determined is towards the end of
> >'lookup_export' in utils/mountd/cache.c
> >
> >Were I to try to 'fix' this I would probably define a new field in 'struct
> >exportent' which holds a 'priority'.
> >
> >Then allow a setting like "priority=4" in /etc/exports
> >
> >Then change the code in lookup_export to choose the one with the higher
> >priority, rather than the 'first' one.
> >
> >NeilBrown
>
> I've hacked the source to make netgroups take precedence over
> subnets by moving MCL_NETGROUP before MCL_SUBNETWORK in the enum in
> support/include/exportfs.h - which works for me, as I only use
> netgroups, subnets and anonymous (in that priority order).
>
> IMHO the priority of exports should really be as they appear on the
> line in /etc/exports,

Sounds reasonable to me.

> but I guess if that were to change, it would
> break existing /etc/exports that use the current priority ordering
> (either by design or accident!).

Maybe some new /etc/exports syntax could allow the administrator to opt
into a new priority ordering.

> Having a priority option would be a very good idea - and maybe in
> the meantime the exports man page should be updated with info about
> the current priority ordering?

Sounds good. Could you send in a patch?

--b.

2011-05-24 11:40:02

by Taousif_Ansari

[permalink] [raw]
Subject: RE: Performance Issue with multiple dataserver

Hi Bruce, Shyam

As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects gfs2 is also having issues (crashes, performance), so instead of going for gfs2, can we debug spNFS itself to get high performance?


-Taousif

-----Original Message-----
From: Iyer, Shyam
Sent: Thursday, May 19, 2011 8:08 PM
To: Ansari, Taousif - Dell Team; [email protected]
Cc: [email protected]
Subject: RE: Performance Issue with multiple dataserver



> -----Original Message-----
> From: [email protected] [mailto:linux-nfs-
> [email protected]] On Behalf Of Ansari, Taousif - Dell Team
>
> Can you please elaborate GFS2-setup a bit more...


I guess Bruce is saying the step-by-step procedure is not written up...

Create a Red Hat cluster using the shared block storage (iSCSI in your case, I guess). You can find documentation on creating an RH cluster in many places.

All the MDSs and the DSs need to be part of the cluster.

Format GFS2 on the shared iSCSI storage.

Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.

The GFS2 cluster backend is your glue to scale the MDSes and DSes.

>
> -----Original Message-----
> From: J. Bruce Fields [mailto:[email protected]]
> Sent: Thursday, May 19, 2011 7:14 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> On Thu, May 19, 2011 at 06:44:59PM +0530, [email protected]
> wrote:
> > Then what should I follow, and what details are needed....
>
> There isn't really any supported server-side pNFS.
>
> The closest is the GFS2-based code, for which you need to install
> Benny's latest tree, configure a shared block device, create a GFS2
> filesystem on it, mount it across all DS's and the MDS, and export it
> from all of them--but I don't believe anyone has written step-by-step
> instructions for that.
>
> --b.
>
> >
> > -----Original Message-----
> > From: [email protected] [mailto:linux-nfs-
> [email protected]] On Behalf Of J. Bruce Fields
> > Sent: Thursday, May 19, 2011 6:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 06:09:21PM +0530, [email protected]
> wrote:
> > > I have followed the way given on http://wiki.linux-
> nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> >
> > Oh. As noted there, spnfs is unmaintained.
> >
> > And, in any case, we'd need many more details about your setup.
> >
> > --b.
> >
> > >
> > > -Taousif
> > >
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:[email protected]]
> > > Sent: Thursday, May 19, 2011 5:20 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: [email protected]
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> [email protected] wrote:
> > > > Hi,
> > > >
> > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> Fedora 14.
> > >
> > > So you're using GFS2 on the server? With what sort of storage?
> > >
> > > --b.
> > >
> > > >
> > > > Extremely sorry for causing confusing .
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:[email protected]]
> > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: [email protected]
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > You sent this message as a reply to an unrelated message, which
> is
> > > > confusing to those of us with threaded mail readers.
> > > >
> > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> [email protected] wrote:
> > > > > I have done pNFS setup with single Dataserver and Two
> Dataserver and ran the IOzone tool on both, I found that the
> performance with multiple dataservers is less than the performance with
> single dataservers.
> > > >
> > > > What are you using as the server, and what as the client?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > >
> > > > >
> > > > > 4 8 16 32
> 64 128 256 512 1024 <== Record Length in KB
> > > > > With Single Dataserver:
> > > > > Read operation for file size 1 MB- 66415 66359 63630
> 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > > > > Write operation for file size 1 MB- 18827 16920 18846
> 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> > > > >
> > > > > With Two Dataservers :
> > > > > Read operation for file size 1 MB- 36882 381198
> 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > > > > Write operation for file size 1 MB- 5461 4661 5586
> 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> > > > >
> > > > >
> > > > > Can somebody tell me What could be the issue....
> > > > > --
> > > > > To unsubscribe from this list: send the line "unsubscribe
> linux-nfs" in
> > > > > the body of a message to [email protected]
> > > > > More majordomo info at http://vger.kernel.org/majordomo-
> info.html
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs"
> in
> > the body of a message to [email protected]
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-20 16:41:48

by J. Bruce Fields

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

On Fri, May 20, 2011 at 02:38:14PM +0100, James Pearson wrote:
> J. Bruce Fields wrote:
> >>Having a priority option would be a very good idea - and maybe in
> >>the meantime the exports man page should be updated with info about
> >>the current priority ordering?
> >
> >
> >Sounds good. Could you send in a patch?
>
> Here's an attempt - based on the info from Max Matveev
> <[email protected]> earlier in this thread

>
> James Pearson
>
> --- exports.man.dist 2010-09-28 13:24:16.000000000 +0100
> +++ exports.man 2011-05-20 14:29:45.555314605 +0100
> @@ -92,6 +92,11 @@
> '''.B \-\-public\-root
> '''option. Multiple specifications of a public root will be ignored.
> .PP
> +.SS Matched Client Priories

Priorities?

But could we just combine this with the previous section--and make sure
the different possibilities are listed there in the correct priority
order to start off with.

That'd also mean adding a new subsection for the "anonymous" case.

--b.

> +The order in which the different \fIMachine Name Formats\fR are matched
> +against clients is in the priority order: \fIhostname, IP address or networks,
> +wildcards, netgroup and anonymous\fR. Entries at the same level are matched
> +in the same order in which they appear in \fI/etc/exports\fR.
> .SS RPCSEC_GSS security
> You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
> to restrict access to clients using rpcsec_gss security. However, this
>
>
>

2011-05-19 11:50:19

by J. Bruce Fields

[permalink] [raw]
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 10:56:44AM +0530, [email protected] wrote:
> Hi,
>
> I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.

So you're using GFS2 on the server? With what sort of storage?

--b.

>
> Extremely sorry for causing confusing .
> -----Original Message-----
> From: J. Bruce Fields [mailto:[email protected]]
> Sent: Wednesday, May 18, 2011 9:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> You sent this message as a reply to an unrelated message, which is
> confusing to those of us with threaded mail readers.
>
> On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
>
> What are you using as the server, and what as the client?
>
> --b.
>
> >
> > Here are some numbers, which were captured by the IOzone tool.
> >
> >
> > 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> > With Single Dataserver:
> > Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> >
> > With Two Dataservers :
> > Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> >
> >
> > Can somebody tell me What could be the issue....
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > the body of a message to [email protected]
> > More majordomo info at http://vger.kernel.org/majordomo-info.html

2011-05-24 11:42:53

by Steven Whitehouse

[permalink] [raw]
Subject: RE: Performance Issue with multiple dataserver

Hi,

On Tue, 2011-05-24 at 17:09 +0530, [email protected] wrote:
> Hi Bruce, Shyam
>
> As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects gfs2 is also having issues (crashes, performance), so instead of going for gfs2, can we debug spNFS itself to get high performance?
>
>
> -Taousif
>
As far as I'm aware, that is historical information. If there are still
problems with GFS2, then please report them so we can work on them.

Steve.

> -----Original Message-----
> From: Iyer, Shyam
> Sent: Thursday, May 19, 2011 8:08 PM
> To: Ansari, Taousif - Dell Team; [email protected]
> Cc: [email protected]
> Subject: RE: Performance Issue with multiple dataserver
>
>
>
> > -----Original Message-----
> > From: [email protected] [mailto:linux-nfs-
> > [email protected]] On Behalf Of Ansari, Taousif - Dell Team
> >
> > Can you please elaborate GFS2-setup a bit more...
>
>
> I guess Bruce is saying the step-by-step procedure is not written up...
>
> Create a Red Hat cluster using the shared block storage (iSCSI in your case, I guess). You can find documentation on creating an RH cluster in many places.
>
> All the MDSs and the DSs need to be part of the cluster.
>
> Format GFS2 on the shared iSCSI storage.
>
> Mount the GFS2 formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.
>
> The GFS2 cluster backend is your glue to scale the MDSes and DSes.
>
> >
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:[email protected]]
> > Sent: Thursday, May 19, 2011 7:14 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 06:44:59PM +0530, [email protected]
> > wrote:
> > > Then what should I follow, and what details are needed....
> >
> > There isn't really any supported server-side pNFS.
> >
> > The closest is the GFS2-based code, for which you need to install
> > Benny's latest tree, configure a shared block device, create a GFS2
> > filesystem on it, mount it across all DS's and the MDS, and export it
> > from all of them--but I don't believe anyone has written step-by-step
> > instructions for that.
> >
> > --b.
> >
> > >
> > > -----Original Message-----
> > > From: [email protected] [mailto:linux-nfs-
> > [email protected]] On Behalf Of J. Bruce Fields
> > > Sent: Thursday, May 19, 2011 6:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: [email protected]
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 06:09:21PM +0530, [email protected]
> > wrote:
> > > > I have followed the way given on http://wiki.linux-
> > nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> > >
> > > Oh. As noted there, spnfs is unmaintained.
> > >
> > > And, in any case, we'd need many more details about your setup.
> > >
> > > --b.
> > >
> > > >
> > > > -Taousif
> > > >
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:[email protected]]
> > > > Sent: Thursday, May 19, 2011 5:20 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: [email protected]
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> > [email protected] wrote:
> > > > > Hi,
> > > > >
> > > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> > and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> > from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> > Fedora 14.
> > > >
> > > > So you're using GFS2 on the server? With what sort of storage?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Extremely sorry for causing confusion.
> > > > > -----Original Message-----
> > > > > From: J. Bruce Fields [mailto:[email protected]]
> > > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > > To: Ansari, Taousif - Dell Team
> > > > > Cc: [email protected]
> > > > > Subject: Re: Performance Issue with multiple dataserver
> > > > >
> > > > > You sent this message as a reply to an unrelated message, which
> > is
> > > > > confusing to those of us with threaded mail readers.
> > > > >
> > > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> > [email protected] wrote:
> > > > > > I have done pNFS setup with single Dataserver and Two
> > Dataserver and ran the IOzone tool on both, I found that the
> > performance with multiple dataservers is less than the performance with
> > single dataservers.
> > > > >
> > > > > What are you using as the server, and what as the client?
> > > > >
> > > > > --b.
> > > > >
> > > > > >
> > > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > > >
> > > > > >
> > > > > >                                      4      8     16     32     64    128    256    512   1024  <== Record Length in KB
> > > > > > With Single Dataserver:
> > > > > > Read operation for file size 1 MB-  66415  66359  63630  70358  86223  70256  66047  66068  68489  <== IO kB/sec
> > > > > > Write operation for file size 1 MB- 18827  16920  18846  17039  18896  17009  17173  19206  17947  <== IO kB/sec
> > > > > >
> > > > > > With Two Dataservers:
> > > > > > Read operation for file size 1 MB-  36882 381198  38150  38084  38749  33663  34398  37313  37847  <== IO kB/sec
> > > > > > Write operation for file size 1 MB-  5461   4661   5586   4870   5227   4922   4214   5572   4658  <== IO kB/sec
> > > > > >
> > > > > >
> > > > > > Can somebody tell me What could be the issue....
> > > > > > --
> > > > > > To unsubscribe from this list: send the line "unsubscribe
> > linux-nfs" in
> > > > > > the body of a message to [email protected]
> > > > > > More majordomo info at http://vger.kernel.org/majordomo-
> > info.html



2011-05-17 22:01:13

by NeilBrown

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

On Tue, 17 May 2011 17:21:24 +0100 James Pearson <[email protected]>
wrote:

> I'm using CentOS 5.x (nfs-utils based on v1.0.9) - and have been using
> the following in /etc/exports:
>
> /export *(rw,async) @backup(rw,no_root_squash,async)
>
> which works fine - hosts in the backup NIS netgroup mount the file
> system with no_root_squash and other clients with root_squash
>
> However, I now want to restrict the export to all clients in a single
> subnet - so I now have /etc/exports as:
>
> /export 172.16.0.0/20(rw,async) @backup(rw,no_root_squash,async)
>
> Unfortunately, hosts in the backup NIS netgroup (which are also in the
> 172.16.0.0/20 subnet) no longer mount with no_root_squash
>
> It appears that the subnet export takes precedence over the netgroup
> export (it doesn't matter in what order the subnets/netgroups exports
> are listed in /etc/exports) - so the netgroup client options are ignored
> as a match has already been found in the subnet export.
>
> Is there any way to control the order in which clients are checked for
> export options?
>
> i.e. I would like netgroups to take precedence over subnets

Unfortunately you cannot do that.

The place in the code where this is determined is towards the end of
'lookup_export' in utils/mountd/cache.c

Were I to try to 'fix' this I would probably define a new field in 'struct
exportent' which holds a 'priority'.

Then allow a setting like "priority=4" in /etc/exports

Then change the code in lookup_export to choose the one with the higher
priority, rather than the 'first' one.

NeilBrown

2011-05-24 13:17:38

by J. Bruce Fields

[permalink] [raw]
Subject: Re: Performance Issue with multiple dataserver

On Tue, May 24, 2011 at 12:44:19PM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Tue, 2011-05-24 at 17:09 +0530, [email protected] wrote:
> > Hi Bruce, Shyam
> >
> > As mentioned here http://wiki.linux-nfs.org/wiki/index.php/PNFS_server_projects gfs2 also has issues (crashes, performance), so instead of going for gfs2 can we debug spNFS itself to get higher performance?
> >
> >
> > -Taousif
> >
> As far as I'm aware that is historical information. If there are still
> problems with GFS2, then please report them so we can work on them,

Well, they may be nfs problems rather than gfs2 problems.

In either case, neither pnfs/gfs2 nor spnfs is a particularly mature
project; you will find bugs and performance problems in both.

I think a cluster-filesystem-based approach probably has the better
chance of getting merged earlier, as it solves a number of thorny
problems (such as how to do IO through the MDS) for you. But it all
depends on what your goals are. Either will require significant
development work to get into acceptable shape.

--b.

2011-05-18 11:54:54

by Taousif_Ansari

[permalink] [raw]
Subject: Performance Issue with multiple dataserver

Hi,

I have set up pNFS with a single dataserver and with two dataservers, and ran the IOzone tool on both. I found that the performance with multiple dataservers is lower than with a single dataserver.

Here are some numbers captured by the IOzone tool:


                                       4      8     16     32     64    128    256    512   1024  <== Record Length in KB
With Single Dataserver:
Read operation for file size 1 MB-  66415  66359  63630  70358  86223  70256  66047  66068  68489  <== IO kB/sec
Write operation for file size 1 MB- 18827  16920  18846  17039  18896  17009  17173  19206  17947  <== IO kB/sec

With Two Dataservers:
Read operation for file size 1 MB-  36882 381198  38150  38084  38749  33663  34398  37313  37847  <== IO kB/sec
Write operation for file size 1 MB-  5461   4661   5586   4870   5227   4922   4214   5572   4658  <== IO kB/sec


Can somebody tell me what could be the issue?

2011-05-19 14:37:45

by shyam_iyer

[permalink] [raw]
Subject: RE: Performance Issue with multiple dataserver



> -----Original Message-----
> From: [email protected] [mailto:linux-nfs-
> [email protected]] On Behalf Of Ansari, Taousif - Dell Team
>
> Can you please elaborate GFS2-setup a bit more...


I guess Bruce is saying the step-by-step procedure is not written up...

Create a Red Hat cluster using the shared block storage (iSCSI, in your case, I guess). You can find documentation on creating a RH cluster in many places.

All the MDSs and the DSs need to be part of the cluster.

Format GFS2 on the shared iSCSI storage.

Mount the GFS2-formatted iSCSI storage on all the MDSs and DSs and export them via NFS. Use Benny's tree for NFS.

The GFS2 cluster backend is your glue to scale the MDSs and DSs.

>
> -----Original Message-----
> From: J. Bruce Fields [mailto:[email protected]]
> Sent: Thursday, May 19, 2011 7:14 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> On Thu, May 19, 2011 at 06:44:59PM +0530, [email protected]
> wrote:
> > Then what should I follow, and what details are needed....
>
> There isn't really any supported server-side pNFS.
>
> The closest is the GFS2-based code, for which you need to install
> Benny's latest tree, configure a shared block device, create a GFS2
> filesystem on it, mount it across all DS's and the MDS, and export it
> from all of them--but I don't believe anyone has written step-by-step
> instructions for that.
>
> --b.
>
> >
> > -----Original Message-----
> > From: [email protected] [mailto:linux-nfs-
> [email protected]] On Behalf Of J. Bruce Fields
> > Sent: Thursday, May 19, 2011 6:43 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 06:09:21PM +0530, [email protected]
> wrote:
> > > I have followed the way given on http://wiki.linux-
> nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
> >
> > Oh. As noted there, spnfs is unmaintained.
> >
> > And, in any case, we'd need many more details about your setup.
> >
> > --b.
> >
> > >
> > > -Taousif
> > >
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:[email protected]]
> > > Sent: Thursday, May 19, 2011 5:20 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: [email protected]
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > On Thu, May 19, 2011 at 10:56:44AM +0530,
> [email protected] wrote:
> > > > Hi,
> > > >
> > > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar)
> and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded
> from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on
> Fedora 14.
> > >
> > > So you're using GFS2 on the server? With what sort of storage?
> > >
> > > --b.
> > >
> > > >
> > > > Extremely sorry for causing confusion.
> > > > -----Original Message-----
> > > > From: J. Bruce Fields [mailto:[email protected]]
> > > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > > To: Ansari, Taousif - Dell Team
> > > > Cc: [email protected]
> > > > Subject: Re: Performance Issue with multiple dataserver
> > > >
> > > > You sent this message as a reply to an unrelated message, which
> is
> > > > confusing to those of us with threaded mail readers.
> > > >
> > > > On Wed, May 18, 2011 at 05:24:45PM +0530,
> [email protected] wrote:
> > > > > I have done pNFS setup with single Dataserver and Two
> Dataserver and ran the IOzone tool on both, I found that the
> performance with multiple dataservers is less than the performance with
> single dataservers.
> > > >
> > > > What are you using as the server, and what as the client?
> > > >
> > > > --b.
> > > >
> > > > >
> > > > > Here are some numbers, which were captured by the IOzone tool.
> > > > >
> > > > >
> > > > >                                      4      8     16     32     64    128    256    512   1024  <== Record Length in KB
> > > > > With Single Dataserver:
> > > > > Read operation for file size 1 MB-  66415  66359  63630  70358  86223  70256  66047  66068  68489  <== IO kB/sec
> > > > > Write operation for file size 1 MB- 18827  16920  18846  17039  18896  17009  17173  19206  17947  <== IO kB/sec
> > > > >
> > > > > With Two Dataservers:
> > > > > Read operation for file size 1 MB-  36882 381198  38150  38084  38749  33663  34398  37313  37847  <== IO kB/sec
> > > > > Write operation for file size 1 MB-  5461   4661   5586   4870   5227   4922   4214   5572   4658  <== IO kB/sec
> > > > >
> > > > >
> > > > > Can somebody tell me What could be the issue....

2011-05-19 12:39:30

by Taousif_Ansari

[permalink] [raw]
Subject: RE: Performance Issue with multiple dataserver

I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .

-Taousif

-----Original Message-----
From: J. Bruce Fields [mailto:[email protected]]
Sent: Thursday, May 19, 2011 5:20 PM
To: Ansari, Taousif - Dell Team
Cc: [email protected]
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 10:56:44AM +0530, [email protected] wrote:
> Hi,
>
> I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.

So you're using GFS2 on the server? With what sort of storage?

--b.

>
> Extremely sorry for causing confusion.
> -----Original Message-----
> From: J. Bruce Fields [mailto:[email protected]]
> Sent: Wednesday, May 18, 2011 9:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> You sent this message as a reply to an unrelated message, which is
> confusing to those of us with threaded mail readers.
>
> On Wed, May 18, 2011 at 05:24:45PM +0530, [email protected] wrote:
> > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
>
> What are you using as the server, and what as the client?
>
> --b.
>
> >
> > Here are some numbers, which were captured by the IOzone tool.
> >
> >
> > 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> > With Single Dataserver:
> > Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> >
> > With Two Dataservers :
> > Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> >
> >
> > Can somebody tell me What could be the issue....

Subject: RE: Performance Issue with multiple dataserver

Can you please elaborate on the GFS2 setup a bit more...

-----Original Message-----
From: J. Bruce Fields [mailto:[email protected]]
Sent: Thursday, May 19, 2011 7:14 PM
To: Ansari, Taousif - Dell Team
Cc: [email protected]
Subject: Re: Performance Issue with multiple dataserver

On Thu, May 19, 2011 at 06:44:59PM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/[email protected] wrote:
> Then what should I follow, and what details are needed....

There isn't really any supported server-side pNFS.

The closest is the GFS2-based code, for which you need to install
Benny's latest tree, configure a shared block device, create a GFS2
filesystem on it, mount it across all DS's and the MDS, and export it
from all of them--but I don't believe anyone has written step-by-step
instructions for that.

--b.

>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of J. Bruce Fields
> Sent: Thursday, May 19, 2011 6:43 PM
> To: Ansari, Taousif - Dell Team
> Cc: [email protected]
> Subject: Re: Performance Issue with multiple dataserver
>
> On Thu, May 19, 2011 at 06:09:21PM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/[email protected] wrote:
> > I have followed the way given on http://wiki.linux-nfs.org/wiki/index.php/Configuring_pNFS/spnfsd .
>
> Oh. As noted there, spnfs is unmaintained.
>
> And, in any case, we'd need many more details about your setup.
>
> --b.
>
> >
> > -Taousif
> >
> > -----Original Message-----
> > From: J. Bruce Fields [mailto:[email protected]]
> > Sent: Thursday, May 19, 2011 5:20 PM
> > To: Ansari, Taousif - Dell Team
> > Cc: [email protected]
> > Subject: Re: Performance Issue with multiple dataserver
> >
> > On Thu, May 19, 2011 at 10:56:44AM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/[email protected] wrote:
> > > Hi,
> > >
> > > I am using on Server linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) and on client also linux-pnfs-2.6.38(linux-pnfs-ae7441f.tar) downloaded from http://git.linux-nfs.org/?p=bhalevy/linux-pnfs.git;a=summary on Fedora 14.
> >
> > So you're using GFS2 on the server? With what sort of storage?
> >
> > --b.
> >
> > >
> > > Extremely sorry for causing confusion.
> > > -----Original Message-----
> > > From: J. Bruce Fields [mailto:[email protected]]
> > > Sent: Wednesday, May 18, 2011 9:43 PM
> > > To: Ansari, Taousif - Dell Team
> > > Cc: [email protected]
> > > Subject: Re: Performance Issue with multiple dataserver
> > >
> > > You sent this message as a reply to an unrelated message, which is
> > > confusing to those of us with threaded mail readers.
> > >
> > > On Wed, May 18, 2011 at 05:24:45PM +0530, Taousif_Ansari-G5Y5guI6XLZWk0Htik3J/[email protected] wrote:
> > > > I have done pNFS setup with single Dataserver and Two Dataserver and ran the IOzone tool on both, I found that the performance with multiple dataservers is less than the performance with single dataservers.
> > >
> > > What are you using as the server, and what as the client?
> > >
> > > --b.
> > >
> > > >
> > > > Here are some numbers, which were captured by the IOzone tool.
> > > >
> > > >
> > > > 4 8 16 32 64 128 256 512 1024 <== Record Length in KB
> > > > With Single Dataserver:
> > > > Read operation for file size 1 MB- 66415 66359 63630 70358 86223 70256 66047 66068 68489 <== IO kB/sec
> > > > Write operation for file size 1 MB- 18827 16920 18846 17039 18896 17009 17173 19206 17947 <== IO kB/sec
> > > >
> > > > With Two Dataservers :
> > > > Read operation for file size 1 MB- 36882 381198 38150 38084 38749 33663 34398 37313 37847 <== IO kB/sec
> > > > Write operation for file size 1 MB- 5461 4661 5586 4870 5227 4922 4214 5572 4658 <== IO kB/sec
> > > >
> > > >
> > > > Can somebody tell me What could be the issue....

2011-06-02 13:38:00

by James Pearson

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

--- exports.man.dist 2010-09-28 13:24:16.000000000 +0100
+++ exports.man 2011-06-02 14:19:26.434486000 +0100
@@ -48,19 +48,6 @@
This is the most common format. You may specify a host either by an
abbreviated name recognized be the resolver, the fully qualified domain
name, or an IP address.
-.IP "netgroups
-NIS netgroups may be given as
-.IR @group .
-Only the host part of each
-netgroup members is consider in checking for membership. Empty host
-parts or those containing a single dash (\-) are ignored.
-.IP "wildcards
-Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
-This can be used to make the \fIexports\fR file more compact; for instance,
-\fI*.cs.foo.edu\fR matches all hosts in the domain
-\fIcs.foo.edu\fR. As these characters also match the dots in a domain
-name, the given pattern will also match all hosts within any subdomain
-of \fIcs.foo.edu\fR.
.IP "IP networks
You can also export directories to all hosts on an IP (sub-) network
simultaneously. This is done by specifying an IP address and netmask pair
@@ -72,6 +59,25 @@
to the network base IPv4 address results in identical subnetworks with 10 bits of
host. Wildcard characters generally do not work on IP addresses, though they
may work by accident when reverse DNS lookups fail.
+.IP "wildcards
+Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
+This can be used to make the \fIexports\fR file more compact; for instance,
+\fI*.cs.foo.edu\fR matches all hosts in the domain
+\fIcs.foo.edu\fR. As these characters also match the dots in a domain
+name, the given pattern will also match all hosts within any subdomain
+of \fIcs.foo.edu\fR.
+.IP "netgroups
+NIS netgroups may be given as
+.IR @group .
+Only the host part of each
+netgroup members is consider in checking for membership. Empty host
+parts or those containing a single dash (\-) are ignored.
+.IP "anonymous
+This is specified by a single
+.I *
+character (not to be confused with the
+.I wildcard
+entry above) and will match all clients.
'''.TP
'''.B =public
'''This is a special ``hostname'' that identifies the given directory name
@@ -92,6 +98,12 @@
'''.B \-\-public\-root
'''option. Multiple specifications of a public root will be ignored.
.PP
+If a client matches more than one of the specifications above, then
+the first match from the above list order takes precedence - regardless of
+the order they appear on the export line. However, if a client matches
+more than one of the same type of specification (e.g. two netgroups),
+then the first match from the order they appear on the export line takes
+precedence.
.SS RPCSEC_GSS security
You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
to restrict access to clients using rpcsec_gss security. However, this


Attachments:
exports.man.patch (2.74 kB)
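Tying the documented ordering back to the question that started this thread: since "IP networks" entries are checked before "netgroups" entries, an export line like the following (taken from the original report) still resolves in favour of the subnet:

```
/export  172.16.0.0/20(rw,async)  @backup(rw,no_root_squash,async)
```

A host that is both in 172.16.0.0/20 and in the @backup netgroup matches the subnet entry first, so it is exported with root_squash; the netgroup's no_root_squash options are never consulted, regardless of the order of the two entries on the line.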

2011-06-04 18:20:13

by J. Bruce Fields

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

On Thu, Jun 02, 2011 at 02:37:58PM +0100, James Pearson wrote:
> J. Bruce Fields wrote:
> >
> >But could we just combine this with the previous section--and make sure
> >the different possibilities are listed there in the correct priority
> >order to start off with.
> >
> >That'd also mean adding a new subsection for the "anonymous" case.
>
> OK - how about the attached patch?

Looks good to me, thanks.

My one quibble is with the statement that "single host" "is the most
common format". (I don't think we know that.)

Fix that, and just resend with a brief changelog comment and a

Signed-off-by: James Pearson <etc...>

and steved should get around to applying it eventually....

--b.

>
> James Pearson

> --- exports.man.dist 2010-09-28 13:24:16.000000000 +0100
> +++ exports.man 2011-06-02 14:19:26.434486000 +0100
> @@ -48,19 +48,6 @@
> This is the most common format. You may specify a host either by an
> abbreviated name recognized be the resolver, the fully qualified domain
> name, or an IP address.
> -.IP "netgroups
> -NIS netgroups may be given as
> -.IR @group .
> -Only the host part of each
> -netgroup members is consider in checking for membership. Empty host
> -parts or those containing a single dash (\-) are ignored.
> -.IP "wildcards
> -Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
> -This can be used to make the \fIexports\fR file more compact; for instance,
> -\fI*.cs.foo.edu\fR matches all hosts in the domain
> -\fIcs.foo.edu\fR. As these characters also match the dots in a domain
> -name, the given pattern will also match all hosts within any subdomain
> -of \fIcs.foo.edu\fR.
> .IP "IP networks
> You can also export directories to all hosts on an IP (sub-) network
> simultaneously. This is done by specifying an IP address and netmask pair
> @@ -72,6 +59,25 @@
> to the network base IPv4 address results in identical subnetworks with 10 bits of
> host. Wildcard characters generally do not work on IP addresses, though they
> may work by accident when reverse DNS lookups fail.
> +.IP "wildcards
> +Machine names may contain the wildcard characters \fI*\fR and \fI?\fR.
> +This can be used to make the \fIexports\fR file more compact; for instance,
> +\fI*.cs.foo.edu\fR matches all hosts in the domain
> +\fIcs.foo.edu\fR. As these characters also match the dots in a domain
> +name, the given pattern will also match all hosts within any subdomain
> +of \fIcs.foo.edu\fR.
> +.IP "netgroups
> +NIS netgroups may be given as
> +.IR @group .
> +Only the host part of each
> +netgroup members is consider in checking for membership. Empty host
> +parts or those containing a single dash (\-) are ignored.
> +.IP "anonymous
> +This is specified by a single
> +.I *
> +character (not to be confused with the
> +.I wildcard
> +entry above) and will match all clients.
> '''.TP
> '''.B =public
> '''This is a special ``hostname'' that identifies the given directory name
> @@ -92,6 +98,12 @@
> '''.B \-\-public\-root
> '''option. Multiple specifications of a public root will be ignored.
> .PP
> +If a client matches more than one of the specifications above, then
> +the first match from the above list order takes precedence - regardless of
> +the order they appear on the export line. However, if a client matches
> +more than one of the same type of specification (e.g. two netgroups),
> +then the first match from the order they appear on the export line takes
> +precedence.
> .SS RPCSEC_GSS security
> You may use the special strings "gss/krb5", "gss/krb5i", or "gss/krb5p"
> to restrict access to clients using rpcsec_gss security. However, this


2011-06-06 12:14:13

by James Pearson

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?

J. Bruce Fields wrote:
> Looks good to me, thanks.
>
> My one quibble is with the statement that "single host" "is the most
> common format". (I don't think we know that.)
>
> Fix that, and just resend with a brief changelog comment and a
>
> Signed-off-by: James Pearson <etc...>
>
> and steved should get around to applying it eventually....

The "This is the most common format" statement is in the existing
exports man page - i.e. nothing to do with my patch ...

However, I'll remove that statement as well and submit the patch

James Pearson

2011-06-07 20:33:21

by Steve Dickson

[permalink] [raw]
Subject: Re: How to control the order of different export options for different client formats?



On 06/02/2011 09:37 AM, James Pearson wrote:
> J. Bruce Fields wrote:
>>
>> But could we just combine this with the previous section--and make sure
>> the different possibilities are listed there in the correct priority
>> order to start off with.
>>
>> That'd also mean adding a new subsection for the "anonymous" case.
>
> OK - how about the attached patch?
>
> James Pearson
Committed....

steved.