Trond,
Some more data from some 2.4.18 tests I just ran...
==FIRST TEST
CLIENT: CLEAN 2.4.18 kernel with NFSV3 enabled
SERVER: Solaris 8 server (E280R)
100Mbit switched network (1G backbone connecting these two boxes -- but
results are the same when staying on the local switch)
Writing with putc()... done: 5941 kB/s 59.3 %CPU
Rewriting... done: 5254 kB/s 6.7 %CPU
Writing intelligently... done: 5605 kB/s 4.1 %CPU
Reading with getc()... done: 9028 kB/s 89.3 %CPU
Reading intelligently... done: 197558 kB/s 100.3 %CPU
==SECOND TEST
CLIENT: Same setup as the first but with linux-2.4.18-NFS_ALL.dif
applied. Same .config file.
SERVER: Same
Writing with putc()... done: 5977 kB/s 63.5 %CPU
Rewriting... done: 3494 kB/s 5.7 %CPU
Writing intelligently... done: 734 kB/s 0.7 %CPU
Reading with getc()... done: 9255 kB/s 72.8 %CPU
Reading intelligently... done: 195361 kB/s 34.3 %CPU
PROBLEM: the intelligent writes look bad (734 kB/s here vs. 5605 kB/s on
the clean kernel).
==THIRD TEST
Same setup as second test, but using dd instead of bonnie
mohawk/test> time dd if=/dev/zero of=file bs=1024k count=1
1+0 records in
1+0 records out
0.000u 0.030s 0:05.06 0.5% 0+0k 0+0io 136pf+0w
About 200k/s on a 100Mbit network -- not very good.
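For reference, that "about 200k/s" figure can be recomputed from the dd
timing above (a quick sketch using the 5.06 s wall time from the `time`
output):

```python
# Recompute the effective write throughput of the dd run above:
# one record of bs=1024k (1 MiB) written in 5.06 s of wall time.
bytes_written = 1 * 1024 * 1024   # bs=1024k, count=1
wall_seconds = 5.06               # elapsed time from the shell's `time` output

kb_per_sec = bytes_written / wall_seconds / 1024
print(f"{kb_per_sec:.0f} kB/s")   # roughly 202 kB/s, i.e. "about 200k/s"
```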
I grabbed a simple tcpdump of that whole dd session. I'll send an
off-list copy of the dump to Trond.
eric
Trond Myklebust wrote:
>
> Trond Myklebust <[email protected]> writes:
>
> > exceeded' ICMP messages littered around the place. The first
> > one comes just after the loss of fragments, and is accompanied
> > by a 2 second delay, during which all the reads that are sent
> > time out without receiving a single reply...
>
> Note: this 2 second period of silence appears to be what is really
> causing the 100x slowdown. I've no idea what the switch is engaging in
> during that time, but you might want to take a look to see if those
> messages being sent during that period are indeed being received on
> the server.
>
> Cheers,
> Trond
>
> _______________________________________________
> NFS maillist - [email protected]
> https://lists.sourceforge.net/lists/listinfo/nfs
On Wed, 20 Mar 2002 11:23:43 -0700, Eric Whiting <[email protected]> wrote:
> ==THIRD TEST
> Same setup as second test, but using dd instead of bonnie
>
> mohawk/test> time dd if=/dev/zero of=file bs=1024k count=1
> 1+0 records in
> 1+0 records out
> 0.000u 0.030s 0:05.06 0.5% 0+0k 0+0io 136pf+0w
>
> About 200k/s on a 100Mbit network -- not very good.
Umm... no. I don't recall your setup (and you didn't specify it
in your email), but if this is the same setup as Lee's (GigE
bridged into 100Mbit), then you don't really have a 100Mbit network.
What you have is a GigE network with 90% packet loss. 200k/s is
actually pretty good in that case.
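To see why heavy frame loss hurts NFS-over-UDP so much more than the raw
loss rate suggests, here's a minimal sketch. The 32 KB write size and
1500-byte MTU are illustrative assumptions, not taken from Eric's mount
options: each NFS write is one large UDP datagram that IP fragments into
many frames, and losing any single fragment loses the whole datagram.

```python
import math

# An NFS write over UDP is one large UDP datagram, fragmented by IP.
# If any single fragment is dropped, the whole datagram is discarded
# and the RPC must be retransmitted from scratch.
wsize = 32 * 1024          # illustrative wsize (assumption)
mtu = 1500                 # standard Ethernet MTU
frags = math.ceil(wsize / (mtu - 20))   # ~20 bytes of IP header per fragment

for frame_loss in (0.10, 0.50, 0.90):
    datagram_survives = (1 - frame_loss) ** frags
    print(f"{frame_loss:.0%} frame loss -> "
          f"{datagram_survives:.2%} of {frags}-fragment datagrams arrive intact")
```

Even a 10% per-frame loss rate leaves fewer than 9% of full write
datagrams intact, and at 90% loss effectively none survive -- which is why
TCP, which retransmits only the lost segment, fares so much better here.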
This is what Trond has been trying to say all along: UDP is a losing
proposition for this kind of setup. The only way you can make it work
sort of ok is by slowing it down and thus penalizing the rest of us
with sane setups.
Just use TCP and be happy.
Ion
--
It is better to keep your mouth shut and be thought a fool,
than to open it and remove all doubt.