2005-08-17 11:13:51

by Michael

Subject: can anyone explain this state?

Hi,

Recently I observed a strange thing when copying a 100MB file from an
NFS server; both client and server are running Red Hat 9.0 with kernel
2.4.20-8:

$ sudo mount -o
rw,bg,vers=3,tcp,timeo=600,rsize=1024,wsize=1024,hard,intr,ac
server1:/home/test filetest
$ time cp ./filetest/new100m /tmp/o100m

real 1m6.575s
user 0m0.040s
sys 0m1.430s
$ time cp ./filetest/new100m /tmp/o100m

real 0m4.964s =================> so different from the time above!!
user 0m0.030s
sys 0m0.570s
$ sudo umount filetest
$ sudo mount -o
rw,bg,vers=3,tcp,timeo=600,rsize=102400,wsize=102400,hard,intr,ac
server1:/home/test filetest
$ time cp ./filetest/new100m /tmp/o100m

real 0m9.075s
user 0m0.020s
sys 0m0.470s
$ time cp ./filetest/new100m /tmp/o100m

real 0m7.501s ==================> only a 2-second difference!
Why not less than 4.9 seconds?
user 0m0.000s
sys 0m0.520s

Thanks!


-------------------------------------------------------
SF.Net email is Sponsored by the Better Software Conference & EXPO
September 19-22, 2005 * San Francisco, CA * Development Lifecycle Practices
Agile & Plan-Driven Development * Managing Projects & Teams * Testing & QA
Security * Process Improvement & Measurement * http://www.sqe.com/bsce5sf
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs


2005-08-17 12:03:22

by Trond Myklebust

Subject: Re: can anyone explain this state?

On 17.08.2005 at 19:13 (+0800), Michael wrote:
> Hi,
>
> These day, I observed a strange thing when I copy a 100MB file from
> nfs server, both client and server is running redhat 9.0 with kernel
> 2.4.20-8:
>
> $ sudo mount -o
> rw,bg,vers=3,tcp,timeo=600,rsize=1024,wsize=1024,hard,intr,ac
> server1:/home/test filetest
> $ time cp ./filetest/new100m /tmp/o100m
>
> real 1m6.575s
> user 0m0.040s
> sys 0m1.430s
> $ time cp ./filetest/new100m /tmp/o100m
>
> real 0m4.964s =================> it is so different comparing
> with above time!!
> user 0m0.030s
> sys 0m0.570s

This is done using synchronous writes. Each write will wait for the
server to commit it to disk.

> $ sudo umount filetest
> $ sudo mount -o
> rw,bg,vers=3,tcp,timeo=600,rsize=102400,wsize=102400,hard,intr,ac
> server1:/home/test filetest
> $ time cp ./filetest/new100m /tmp/o100m
>
> real 0m9.075s
> user 0m0.020s
> sys 0m0.470s
> $ time cp ./filetest/new100m /tmp/o100m
>
> real 0m7.501s ==================>only different in 2 seconds!
> why not less than 4.9 seconds?
> user 0m0.000s
> sys 0m0.520s

This is done using asynchronous writes. Much faster, and no need (on
NFSv3) to wait for the disk before sending the next request.

The reason is that on 2.4 kernels (and early 2.6 kernels) we could only
do synchronous writes when you set wsize < PAGE_SIZE.
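A quick way to check whether a given wsize falls under that threshold (a sketch; assumes a Linux box with getconf, and wsize=1024 as in the first mount above):

```shell
# Compare the chosen wsize against the kernel page size. On a 2.4
# client, wsize below PAGE_SIZE forces synchronous writes on the wire.
page=$(getconf PAGE_SIZE)   # typically 4096 on i386
wsize=1024                  # value from the first mount above
if [ "$wsize" -lt "$page" ]; then
    echo "wsize=$wsize < PAGE_SIZE=$page: writes will be synchronous"
else
    echo "wsize=$wsize >= PAGE_SIZE=$page: asynchronous writes possible"
fi
```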

Cheers,
Trond




2005-08-17 12:27:58

by Peter Staubach

Subject: Re: can anyone explain this state?

Trond Myklebust wrote:

>on den 17.08.2005 Klokka 19:13 (+0800) skreiv Michael:
>
>
>>Hi,
>>
>>These day, I observed a strange thing when I copy a 100MB file from
>>nfs server, both client and server is running redhat 9.0 with kernel
>>2.4.20-8:
>>
>>$ sudo mount -o
>>rw,bg,vers=3,tcp,timeo=600,rsize=1024,wsize=1024,hard,intr,ac
>>server1:/home/test filetest
>>$ time cp ./filetest/new100m /tmp/o100m
>>
>>real 1m6.575s
>>user 0m0.040s
>>sys 0m1.430s
>>$ time cp ./filetest/new100m /tmp/o100m
>>
>>real 0m4.964s =================> it is so different comparing
>>with above time!!
>>user 0m0.030s
>>sys 0m0.570s
>>
>>
>
>This is done using synchronous writes. Each write will wait for the
>server to commit it to disk.
>
>
>
>>$ sudo umount filetest
>>$ sudo mount -o
>>rw,bg,vers=3,tcp,timeo=600,rsize=102400,wsize=102400,hard,intr,ac
>>server1:/home/test filetest
>>$ time cp ./filetest/new100m /tmp/o100m
>>
>>real 0m9.075s
>>user 0m0.020s
>>sys 0m0.470s
>>$ time cp ./filetest/new100m /tmp/o100m
>>
>>real 0m7.501s ==================>only different in 2 seconds!
>>why not less than 4.9 seconds?
>>user 0m0.000s
>>sys 0m0.520s
>>
>>
>
>This is done using asynchronous writes. Much faster, and no need (on
>NFSv3) to wait for the disk before sending the next request.
>
>The reason is that on 2.4 kernels (and early 2.6 kernels) we could only
>do synchronous writes when you set wsize < PAGE_SIZE.
>

Maybe I am misreading the commands being run, but they look like they would
generate all NFS READ traffic. It appears to be copying from an NFS mounted
file system to /tmp, a local file system.

I would chalk most of these differences up to caching, as in the difference
between #1 and #2, and general system dynamics for the differences between
#3 and #4.

By the way, setting rsize and wsize large like that is probably not helping.
The transfer sizes are calculated as the minimum of the transfer sizes
advertised by the server and what the client can support. In this case,
the server would probably be limiting the transfer sizes.
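On Linux, the rsize/wsize actually in effect after that negotiation show up in /proc/mounts, not the values you passed to mount. A sketch using a made-up sample line (on a real client, grep for your own mount point instead):

```shell
# The options field of /proc/mounts shows the clamped transfer sizes.
# This line is illustrative only; replace it with your real mount entry.
line='server1:/home/test /mnt/filetest nfs rw,vers=3,rsize=8192,wsize=8192,hard,intr 0 0'
opts=$(echo "$line" | tr ',' '\n' | grep -E '^(rsize|wsize)=')
echo "$opts"
```

In this hypothetical example the client asked for 102400 but the server clamped both sizes to 8192.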

One last point, I would question your original timing and what the
configuration looks like. 100M shouldn't take 66 seconds to transfer.
Even over a 100baseT wire it should take about 10 seconds...
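The back-of-the-envelope arithmetic behind that estimate:

```shell
# 100 MB over a 100 Mbit/s link: megabytes -> megabits, divided by speed.
mb=100        # file size in megabytes
mbit=100      # link speed in megabits per second
secs=$((mb * 8 / mbit))
echo "ideal transfer time: ${secs} s (roughly 10 s once protocol overhead is added)"
```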

Thanx...

ps



2005-08-17 12:33:41

by Peter Staubach

Subject: Re: can anyone explain this state?

Peter Staubach wrote:

> Trond Myklebust wrote:
>
>> on den 17.08.2005 Klokka 19:13 (+0800) skreiv Michael:
>>
>>
>>> Hi,
>>> These day, I observed a strange thing when I copy a 100MB file from
>>> nfs server, both client and server is running redhat 9.0 with kernel
>>> 2.4.20-8:
>>>
>>> $ sudo mount -o
>>> rw,bg,vers=3,tcp,timeo=600,rsize=1024,wsize=1024,hard,intr,ac
>>> server1:/home/test filetest
>>>
>
>
> One last point, I would question your original timing and what the
> configuration looks like. 100M shouldn't take 66 seconds to transfer.
> Even over a 100baseT wire it should take about 10 seconds...


Oops, sorry, I glossed over the 1024-byte transfer sizes used for this
transfer. They will kill performance. Don't use such small sizes unless
you have no other options for getting any connectivity. I would recommend
at least 4k; 8k is better, and 32k is better than that.

Thanx...

ps



2005-08-17 12:40:24

by Trond Myklebust

Subject: Re: can anyone explain this state?

On 17.08.2005 at 08:26 (-0400), Peter Staubach wrote:
> Trond Myklebust wrote:
>
> >on den 17.08.2005 Klokka 19:13 (+0800) skreiv Michael:
> >
> >
> >>Hi,
> >>
> >>These day, I observed a strange thing when I copy a 100MB file from
> >>nfs server, both client and server is running redhat 9.0 with kernel
> >>2.4.20-8:
> >>
> >>$ sudo mount -o
> >>rw,bg,vers=3,tcp,timeo=600,rsize=1024,wsize=1024,hard,intr,ac
> >>server1:/home/test filetest
> >>$ time cp ./filetest/new100m /tmp/o100m
> >>
> >>real 1m6.575s
> >>user 0m0.040s
> >>sys 0m1.430s
> >>$ time cp ./filetest/new100m /tmp/o100m
> >>
> >>real 0m4.964s =================> it is so different comparing
> >>with above time!!
> >>user 0m0.030s
> >>sys 0m0.570s
> >>
> >>
> >
> >This is done using synchronous writes. Each write will wait for the
> >server to commit it to disk.
> >
> >
> >
> >>$ sudo umount filetest
> >>$ sudo mount -o
> >>rw,bg,vers=3,tcp,timeo=600,rsize=102400,wsize=102400,hard,intr,ac
> >>server1:/home/test filetest
> >>$ time cp ./filetest/new100m /tmp/o100m
> >>
> >>real 0m9.075s
> >>user 0m0.020s
> >>sys 0m0.470s
> >>$ time cp ./filetest/new100m /tmp/o100m
> >>
> >>real 0m7.501s ==================>only different in 2 seconds!
> >>why not less than 4.9 seconds?
> >>user 0m0.000s
> >>sys 0m0.520s
> >>
> >>
> >
> >This is done using asynchronous writes. Much faster, and no need (on
> >NFSv3) to wait for the disk before sending the next request.
> >
> >The reason is that on 2.4 kernels (and early 2.6 kernels) we could only
> >do synchronous writes when you set wsize < PAGE_SIZE.
> >
>
> Maybe I am misreading the commands being run, but they look like they would
> generate all NFS READ traffic. It appears to be copying from an NFS mounted
> file system to /tmp, a local file system.

Oops. errno=ENOCOFFEE... You are quite right.

Yep. That would indeed put the differences down to caching.

Cheers,
Trond




2005-08-17 13:40:03

by Michael

Subject: Re: can anyone explain this state?

On 8/17/05, Trond Myklebust <[email protected]> wrote:
> on den 17.08.2005 Klokka 08:26 (-0400) skreiv Peter Staubach:
> > Trond Myklebust wrote:
> >
> > >on den 17.08.2005 Klokka 19:13 (+0800) skreiv Michael:
> > >
> > >
> > >>Hi,
> > >>
> > >>These day, I observed a strange thing when I copy a 100MB file from
> > >>nfs server, both client and server is running redhat 9.0 with kernel
> > >>2.4.20-8:
> > >>
> > >>$ sudo mount -o
> > >>rw,bg,vers=3,tcp,timeo=600,rsize=1024,wsize=1024,hard,intr,ac
> > >>server1:/home/test filetest
> > >>$ time cp ./filetest/new100m /tmp/o100m
> > >>
> > >>real 1m6.575s
> > >>user 0m0.040s
> > >>sys 0m1.430s
> > >>$ time cp ./filetest/new100m /tmp/o100m
> > >>
> > >>real 0m4.964s =================> it is so different comparing
> > >>with above time!!
> > >>user 0m0.030s
> > >>sys 0m0.570s
> > >>
> > >>
> > >
> > >This is done using synchronous writes. Each write will wait for the
> > >server to commit it to disk.
> > >
> > >
> > >
> > >>$ sudo umount filetest
> > >>$ sudo mount -o
> > >>rw,bg,vers=3,tcp,timeo=600,rsize=102400,wsize=102400,hard,intr,ac
> > >>server1:/home/test filetest
> > >>$ time cp ./filetest/new100m /tmp/o100m
> > >>
> > >>real 0m9.075s
> > >>user 0m0.020s
> > >>sys 0m0.470s
> > >>$ time cp ./filetest/new100m /tmp/o100m
> > >>
> > >>real 0m7.501s ==================>only different in 2 seconds!
> > >>why not less than 4.9 seconds?
> > >>user 0m0.000s
> > >>sys 0m0.520s
> > >>
> > >>
> > >
> > >This is done using asynchronous writes. Much faster, and no need (on
> > >NFSv3) to wait for the disk before sending the next request.
> > >
> > >The reason is that on 2.4 kernels (and early 2.6 kernels) we could only
> > >do synchronous writes when you set wsize < PAGE_SIZE.
> > >
> >
> > Maybe I am misreading the commands being run, but they look like they would
> > generate all NFS READ traffic. It appears to be copying from an NFS mounted
> > file system to /tmp, a local file system.
>
> Oops. errno=ENOCOFFEE... You are quite right.
>
> Yep. That would indeed put the differences down to caching.
>
> Cheers,
> Trond
>
>

Thanks for your feedback!

Yes, I know it should be cache related, but as you can see, the first
copy of the 100MB file from the NFS server took more than a minute, yet
the second suddenly took 4 seconds. How can caching account for this?
The NFS client's caching? OK, if so, that means the NFS client cached at
least 90MB of the 100MB. Then how do you explain the last two copies?
With rsize=8k, the first copy of the 100MB file took 9 seconds, but the
second took 7 seconds. If the cache works as well as in the rsize=1k
case, why doesn't the second copy take less than 1 second?

Thanks,

Michael



2005-08-17 14:41:53

by Peter Staubach

Subject: Re: can anyone explain this state?

Michael wrote:

>
>Thanks for your feedback!
>
>Yes, I know it should be cache related, but you can see it took me
>more than 1 minute at the first time copy 100Mfile from nfs server,
>but suddenly the second time took 4 second.
>How can the cache result to this? NFS client cache ability? ok, if it
>is, that means NFS client could cache at least 90M data of 100M, then,
>how to explain the last 2 copies? with rsize=8k,first time copy 100M
>file took 9 seconds, but the second time took 7 seconds, if cache work
>as great as rsize=1k, why not the second time copy take less than 1
>second?
>

The cache on the client can definitely account for the first two times.

As for the second two, I don't know. Were the client, network, and server
otherwise completely quiescent? What is the configuration of the client?
Can it safely cache 100MB without losing any data because older data needed
to be pushed out to make room for newer data? I expect so, but it would be
good to check.

What does the network traffic look like between the client and the server
during these cp commands? That might tell you more about what is going on
in your environment.

Thanx...

ps



2005-08-17 14:51:10

by bsd user

Subject: Re: can anyone explain this state?

Peter Staubach wrote:

> Michael wrote:
>
>>
>> Thanks for your feedback!
>>
>> Yes, I know it should be cache related, but you can see it took me
>> more than 1 minute at the first time copy 100Mfile from nfs server,
>> but suddenly the second time took 4 second.
>> How can the cache result to this? NFS client cache ability? ok, if it
>> is, that means NFS client could cache at least 90M data of 100M, then,
>> how to explain the last 2 copies? with rsize=8k,first time copy 100M
>> file took 9 seconds, but the second time took 7 seconds, if cache work
>> as great as rsize=1k, why not the second time copy take less than 1
>> second?
>>
>
> The cache on the client can definitely account for the first two times.
>
> As for the second two, I don't know. Was the client, network, and server
> otherwise completely quiescent? What is the configuration of the client?
> Can it safely cache 100MB without losing any data because older data
> needed
> to be pushed out to make room for newer data? I expect so, but it
> would be
> good to check.
>
> What does the network traffic look like between the client and the server
> during these cp commands? That might tell you more about what is
> going on
> in your environment.
>
> Thanx...
>
> ps
>
Peter,

That's not occasional; I mounted/unmounted several times and copied several
times, but got the same result.
Anyway, the number of READ operations in a tcpdump capture should answer my
question, I hope.

Thanks,

Michael



2005-08-18 17:26:46

by Trond Myklebust

Subject: Re: can anyone explain this state?

On 18.08.2005 at 21:41 (+0800), Michael wrote:

> ========================================================================
> Yes, I know it should be cache related, but you can see it took me
> more than 1 minute at the first time copy 100Mfile from nfs server,
> but suddenly the second time took 4 second.
> How can the cache result to this? NFS client cache ability? ok, if it
> is, that means NFS client could cache at least 90M data of 100M, then,
> how to explain the last 2 copies? with rsize=8k,first time copy 100M
> file took 9 seconds, but the second time took 7 seconds, if cache work
> as great as rsize=1k, why not the second time copy take less than 1
> second?
> ========================================================================

The "nfsstat" program should tell you exactly how many requests NFS has
put on the wire since you booted. I suggest you run it after each test,
and compare the number of READs the client has made. By looking at those
numbers, you should be able to tell exactly how much data was cached and
how much had to be retrieved from the server.
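For example (a sketch of the bookkeeping; the exact nfsstat output format varies between versions, so the counter values below are made up rather than taken from a real run):

```shell
# Take the NFSv3 READ counter from "nfsstat -c" before and after one cp,
# then diff. The two snapshot values here are hypothetical placeholders.
before=1200      # READ calls before the copy (hypothetical)
after=101600     # READ calls after the copy (hypothetical)
reads=$((after - before))
rsize=1024       # negotiated read size from the mount
echo "READ calls during the copy: $reads (~$((reads * rsize / 1024 / 1024)) MB requested)"
```

A fully cached second copy should show a READ delta near zero; a delta close to the file size divided by rsize means the data came off the wire again.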

BTW: caching 100M of data is hardly much of a problem on a modern
machine. All memory that is not in active use by processes may be used
by the page cache for caching files, so in practice, if you have > 100M
of core memory and a lightly loaded machine, there is a high chance that
the file can be kept entirely in memory.

Cheers,
Trond




2005-08-18 18:33:35

by Greg Banks

[permalink] [raw]
Subject: Re: can anyone explain this state?

On Wed, Aug 17, 2005 at 08:39:52AM -0400, Trond Myklebust wrote:
> on den 17.08.2005 Klokka 08:26 (-0400) skreiv Peter Staubach:
> > Trond Myklebust wrote:
> > >This is done using asynchronous writes. Much faster, and no need (on
> > >NFSv3) to wait for the disk before sending the next request.
> > >
> > >The reason is that on 2.4 kernels (and early 2.6 kernels) we could only
> > >do synchronous writes when you set wsize < PAGE_SIZE.
> > >
> >
> > Maybe I am misreading the commands being run, but they look like they would
> > generate all NFS READ traffic. It appears to be copying from an NFS mounted
> > file system to /tmp, a local file system.
>
> Oops. errno=ENOCOFFEE... You are quite right.
>
> Yep. That would indeed put the differences down to caching.

Trond, your comments also apply for READs with rsize<PAGE_SIZE; 2.4
will do sync reads on the wire in this case. The performance drop
is not as dramatic as with writes because the server is doing its own
local readahead, which can't happen for writes, but there's still
a performance drop due to lack of parallelism on the wire.

Greg.
--
Greg Banks, R&D Software Engineer, SGI Australian Software Group.
I don't speak for SGI.



2005-08-18 18:40:54

by bsd user

Subject: Re: can anyone explain this state?

Trond Myklebust wrote:

>to den 18.08.2005 Klokka 09:51 (+1000) skreiv Greg Banks:
>
>
>
>>Trond, your comments also apply for READs with rsize<PAGE_SIZE; 2.4
>>will do sync reads on the wire in this case. The performance drop
>>is not as dramatic as with writes because the server is doing its own
>>local readahead, which can't happen for writes, but there's still
>>a performance drop due to lack of parallelism on the wire.
>>
>>
>
>Yes, however that only affects the first time the file is read.
>The rather dramatic increase in performance between the first read and
>subsequent attempts may almost certainly be attributed to caching, as
>Peter pointed out.
>
>Note: readahead too may cause an extra performance hit when the reads
>are synchronous: if the kernel schedules more readahead than your
>process actually wants to use, this may force your process to wait for
>_all_ those reads to complete (since said process is responsible for
>actually putting each synchronous read on the wire).
>In the large rsize case, when doing asynchronous reads, this is not the
>case. Your process will wait only on those pages it actually uses.
>
>Cheers,
> Trond
>
>
>
>
Sorry, Trond and all, I don't see how your answer solves my
problem... As I am a newbie, could you address my question directly?

========================================================================
Yes, I know it should be cache related, but you can see it took me
more than 1 minute at the first time copy 100Mfile from nfs server,
but suddenly the second time took 4 second.
How can the cache result to this? NFS client cache ability? ok, if it
is, that means NFS client could cache at least 90M data of 100M, then,
how to explain the last 2 copies? with rsize=8k,first time copy 100M
file took 9 seconds, but the second time took 7 seconds, if cache work
as great as rsize=1k, why not the second time copy take less than 1
second?
========================================================================

Thanks again!

Michael




2005-08-18 19:42:44

by Trond Myklebust

Subject: Re: can anyone explain this state?

On 18.08.2005 at 09:51 (+1000), Greg Banks wrote:

> Trond, your comments also apply for READs with rsize<PAGE_SIZE; 2.4
> will do sync reads on the wire in this case. The performance drop
> is not as dramatic as with writes because the server is doing its own
> local readahead, which can't happen for writes, but there's still
> a performance drop due to lack of parallelism on the wire.

Yes, however that only affects the first time the file is read.
The rather dramatic increase in performance between the first read and
subsequent attempts may almost certainly be attributed to caching, as
Peter pointed out.

Note: readahead too may cause an extra performance hit when the reads
are synchronous: if the kernel schedules more readahead than your
process actually wants to use, this may force your process to wait for
_all_ those reads to complete (since said process is responsible for
actually putting each synchronous read on the wire).
In the large rsize case, when doing asynchronous reads, this is not the
case. Your process will wait only on those pages it actually uses.

Cheers,
Trond


