I have an NFS server running on a Dell 2850:
Name: Amazon
CPU: 3000 MHz Xeon (X2)
Memory: 2G
Swap: 8G
OS: Fedora Core 3
NFS: Version 1-4
I export a 150G volume thusly:
/NFS/tigris_backup
10.1.1.0/24(rw,wdelay,nohide,insecure,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
The client is a Dell 2650 running Oracle 9i
Name: Tigris
CPU: 3200 MHz Xeon (X2)
Memory: 8G
Swap: 4G
OS: Redhat ES 3.2
NFS: Version 2/tcp
Both have hyperthreading enabled.
The machines are connected via a dedicated gigabit switch.
The goal (among others) is to run the nightly RMAN backups of the Oracle
database to the NFS drive on Amazon where it is backed up to tape later.
To this end I mount the NFS volume, run the backups, and unmount the volume.
That part works well. However, the throughput on the connection is much
less than I expected. Simple file transfers, for example, give only about:
Using NFS
Amazon to Tigris: 168 Mbps
Tigris to Amazon: 104 Mbps
Using ftp
ssh A to T: 320 Mbps
ssh T to A: 360 Mbps
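(For reproducing figures like these, a rough sequential-transfer test along the following lines can be used; the mount point and file names are illustrative, not my actual paths:)

```shell
# Time a 1 GB sequential write over the NFS mount.
dd if=/dev/zero of=/mnt/tigris_backup/ddtest bs=1M count=1024

# Compare with an equivalent copy over ssh to the same host.
time scp /var/tmp/testfile amazon:/tmp/
```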
I can accept that I may never get a full gigabit per second over copper,
but it still looks like there is room for improvement in the NFS transfer
speeds.
Suggestions?
--
Stephen Carville <[email protected]>
Unix and Network Admin
Nationwide Totalflood
6033 W. Century Blvd
Los Angeles, CA 90045
310-342-3602
-------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems? Stop! Download the new AJAX search engine that makes
searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs
On Fri, 2006-01-06 at 08:45 -0800, Stephen Carville wrote:
> [...]
>  NFS: Version 2/tcp
Why are you using NFSv2? That forces the server to use synchronous
writes (i.e. each write must be committed to disk before the server
replies to the client) and is very inefficient.
If your application is write-intensive, you should always choose NFSv3
or greater.
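As a sketch, forcing NFSv3 over TCP on the client might look like this (the mount point is illustrative; the export path is the one from the original post, and the transfer sizes are just one reasonable choice):

```shell
# Request NFSv3 over TCP with explicit 32k transfer sizes.
mount -t nfs -o nfsvers=3,tcp,rsize=32768,wsize=32768 \
    amazon:/NFS/tigris_backup /mnt/tigris_backup
```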
Cheers,
Trond
Trond Myklebust wrote:
>On Fri, 2006-01-06 at 08:45 -0800, Stephen Carville wrote:
>
>> [...]
>>  NFS: Version 2/tcp
>
>Why are you using NFSv2? That forces the server to use synchronous
>writes (i.e. each write must be committed to disk before the server
>replies to the client) and is very inefficient.
>
>If your application is write-intensive, you should always choose NFSv3
>or greater.
>
There might also be some benefit from using the larger transfer sizes
available in NFSv3 or greater. NFSv2 is limited to 8kB transfers, which is
kind of small for higher-bandwidth networks, especially in conjunction with
the use of TCP to minimize any possible retransmitted packets.
Thanx...
ps
Trond Myklebust wrote:
> On Fri, 2006-01-06 at 08:45 -0800, Stephen Carville wrote:
>
>> [...]
>>  NFS: Version 2/tcp
>
> Why are you using NFSv2? That forces the server to use synchronous
> writes (i.e. each write must be committed to disk before the server
> replies to the client) and is very inefficient.
>
> If your application is write-intensive, you should always choose NFSv3
> or greater.
No problem. I added nfsvers=3 to the mount options and I'll see how it
works tonight.
--
Stephen Carville <[email protected]>
Unix and Network Admin
Nationwide Totalflood
6033 W. Century Blvd
Los Angeles, CA 90045
310-342-3602
Did you specify a 32k read and write size somewhere that I didn't see?
If you have a very clean network, you could also get an advantage from
using UDP. I'm not sure which transport you are using, though.

Oh, I see a mention of NFS v2 over TCP. Well, I'd recommend moving to NFS
v3, using a 64k read/write size, and using UDP. That might give you some
gains. Of course, I'm usually a TCP bigot for NFS transport, but for a
very clean, almost back-to-back, dedicated-switch type of environment,
I'll allow it.
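A sketch of that suggestion (illustrative mount point; note the kernel may silently clamp the requested sizes, so it's worth checking what was actually negotiated afterwards):

```shell
# Request NFSv3 over UDP with 64k read/write sizes; the client and
# server may negotiate these down to what they actually support.
mount -t nfs -o nfsvers=3,udp,rsize=65536,wsize=65536 \
    amazon:/NFS/tigris_backup /mnt/tigris_backup

# See the effective mount options:
grep tigris_backup /proc/mounts
```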
-Blake
On Jan 6, 2006, at 8:45 AM, Stephen Carville wrote:
> [...]
>  NFS: Version 2/tcp
> [...]
On Fri, 6 Jan 2006, Blake Golliher wrote:
> Did you specify using a 32k read and write size, that I didn't see somewhere?
> If you have a very clean network, you could get an advantage of using UDP as
> well. I'm not sure which transport you are using though.
>
> Oh I see a mention of nfs v2 tcp. Well, I'd recommend moving to nfs v3, using
> a 64k read/write size, and using UDP. That might give you some gains. Of
> course, I'm usually a TCP bigot for nfs transport, but for a very clean,
> almost back to back, dedicated switch type of environment, I'll allow it.
Mmm. I thought UDP was limited to a maximum 8k read and write size, and
32k was the maximum for TCP.
>
> -Blake
On Jan 7, 2006, at 10:51 PM, Ian Kent wrote:
> On Fri, 6 Jan 2006, Blake Golliher wrote:
>
>> [...]
>
> Mmm. I thought UDP was max. 8k read and write size and 32k was the max.
> for TCP.
The Linux NFS client supports at least a 64k block transfer size. I
don't believe it's constrained by the transport protocol.
-Blake
On Sat, 2006-01-07 at 23:44 -0800, Blake Golliher wrote:
> The linux nfs client supports, at least, a 64k block transfer size. I
> don't believe it's constrained by transport protocol.
That is a very recent feature, though. Unless you are using the NFS_ALL
patches, you will only get 64k block transfer sizes on the very latest
GIT tarball. ;-)
Cheers,
Trond
[email protected] wrote on 01/08/2006 12:22:13 PM:
> On Sat, 2006-01-07 at 23:44 -0800, Blake Golliher wrote:
> > The linux nfs client supports, at least, a 64k block transfer size. I
> > don't believe it's constrained by transport protocol.
>
> That is a very recent feature, though. Unless you are using the NFS_ALL
> patches, you will only get 64k block transfer sizes on the very latest
> GIT tarball. ;-)
>
> Cheers,
> Trond
>
But it is still constrained by the transport protocol.
Marc.
Blake Golliher wrote:
>
>> [...]
>
> The linux nfs client supports, at least, a 64k block transfer size. I
> don't believe it's constrained by transport protocol.
Hi Blake-

As of 2.6.15, the stock Linux NFS client supports up to a 32KB read and
write size for NFS versions 2, 3, and 4, and for both UDP and TCP.

The recommended r/wsize for UDP is 8KB, simply to avoid IP fragment
storms. On a clean network with few other clients, 32KB with UDP will
probably work well.

Soon, the Linux NFS client will support up to a 1MB r/wsize, depending on
the network transport and the server's capabilities.
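One way to confirm what a given client actually negotiated (output format varies by kernel version):

```shell
# /proc/mounts shows the effective rsize/wsize for each NFS mount.
grep nfs /proc/mounts

# nfsstat -m, where available, prints the same per-mount options.
nfsstat -m
```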
On Sun, 2006-01-08 at 13:10 -0800, Marc Eshel wrote:
> [email protected] wrote on 01/08/2006 12:22:13 PM:
>
> > On Sat, 2006-01-07 at 23:44 -0800, Blake Golliher wrote:
> > > The linux nfs client supports, at least, a 64k block transfer size. I
> > > don't believe it's constrained by transport protocol.
> >
> > That is a very recent feature, though. Unless you are using the NFS_ALL
> > patches, you will only get 64k block transfer sizes on the very latest
> > GIT tarball. ;-)
> >
> > Cheers,
> > Trond
> >
>
> But is still constrained by the transport protocol.
> Marc.
Yes. UDP won't allow you to use 64k read/write sizes since that would
overflow the size of a datagram. TCP has no such inherent limitation,
though.
Cheers,
Trond
I recently tried to change the max block size on the client and server
from 32K to 64K, and I got 36K. Looking at the code, it looked like it got
the size from:

    max_rpc_payload = nfs_block_size(rpc_max_payload(server->client), NULL);
    if (server->rsize > max_rpc_payload)

and the max payload was set by the network driver.
Marc.
[email protected] wrote on 01/08/2006 02:09:15 PM:
> On Sun, 2006-01-08 at 13:10 -0800, Marc Eshel wrote:
> > [email protected] wrote on 01/08/2006 12:22:13 PM:
> >
> > > On Sat, 2006-01-07 at 23:44 -0800, Blake Golliher wrote:
> > > > The linux nfs client supports, at least, a 64k block transfer size. I
> > > > don't believe it's constrained by transport protocol.
> > >
> > > That is a very recent feature, though. Unless you are using the NFS_ALL
> > > patches, you will only get 64k block transfer sizes on the very latest
> > > GIT tarball. ;-)
> > >
> > > Cheers,
> > > Trond
> > >
> >
> > But is still constrained by the transport protocol.
> > Marc.
>
> Yes. UDP won't allow you to use 64k read/write sizes since that would
> overflow the size of a datagram. TCP has no such inherent limitation,
> though.
>
> Cheers,
> Trond