(I am not subscribed, so please CC me with any responses)
I am having a serious problem with TCP throughput between two systems
running on my LAN. The problem system works fine with 2.4.14, but has
abysmal TCP throughput under 2.4.16, but only when talking to some
systems on the LAN. Here is a chart with some tests that I made:
Client          Server   Protocol  Throughput
cbgb-2.4.16     pern     NFS/UDP   ~12 MB/s, .5s to copy a 5MB file
cbgb-2.4.14     pern     HTTP/TCP  12 MB/s, .5s to copy a 5MB file
cbgb-2.4.16     pern     HTTP/TCP  7.45 KB/s, 300s to copy a 2MB file
cbgb-2.4.16     pern     SCP/TCP   (very poor; I was too impatient to measure)
cbgb-2.4.16     heechee  SCP/TCP   84 KB/s, 1 minute to copy a 5MB file
cbgb-2.4.16     idoru    HTTP/TCP  54 KB/s, 21s to copy a 1.2MB file
heechee-2.4.16  pern     HTTP/TCP  1024 KB/s, 5s to copy a 5MB file
vmware-2.2.19   pern     HTTP/TCP  297 KB/s, 18s to copy a 5MB file
pern            cbgb     SCP/TCP   297 KB/s, 19s to copy a 5MB file
- cbgb is the problem system, with a tulip card.
- pern is the server in the problem case, also with a tulip card.
- heechee is my firewall, with an 8139too card.
- idoru is a server on the other end of a SSH VPN (DSL at both ends).
- vmware is a virtual vmware machine running inside of cbgb.
As you can see from the chart, NFS over UDP between cbgb and pern is
fine. However HTTP over TCP has terrible performance; 2.4.14 has orders
of magnitude better performance. It's not pern's problem, because
heechee has no trouble transferring the same file. It's also not
exclusively cbgb's fault, because it doesn't seem to have any problems
talking to idoru or to heechee. Nor are there any problems going the
other way (copying from cbgb to pern). The most bizarre thing is my test
on the vmware machine. It also doesn't seem to have problems
communicating with pern, even though it's a virtual machine that is
obviously using cbgb's hardware to do the copying. I should also add
that interactive ssh from cbgb to pern seems fine.
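For anyone who wants to repeat a test like this: a timed download gives you
the numbers directly, and the rate falls out of the size and elapsed time.
A sketch (the host and file name are placeholders, not my actual test file):

```shell
# Timed HTTP download (placeholder host/file; requires wget):
#   time wget -O /dev/null http://pern/bigfile
# Convert a size and an elapsed time into a rate, e.g. 1 MB in 16 s:
awk 'BEGIN { size_kb = 1 * 1024; secs = 16; printf "%.2f KB/s\n", size_kb / secs }'
```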
Pern's card is a 'Lite-On Communications Inc LNE100TX (rev 32)'.
Cbgb's card is a 'Lite-On Communications Inc LNE100TX [Linksys EtherFast 10/100] (rev 37)'.
Does anyone have any suggestions as to what's going on here?
--
Dave Carrigan ([email protected]) | Yow! These PRESERVES should be
UNIX-Apache-Perl-Linux-Firewalls-LDAP-C-DNS | FORCE-FED to PENTAGON
Seattle, WA, USA | OFFICIALS!!
http://www.rudedog.org/ |
> cbgb-2.4.16 pern NFS/UDP ~12 MB/s, .5s to copy a 5MB file
> cbgb-2.4.14 pern HTTP/TCP 12 MB/s, .5s to copy a 5MB file
> cbgb-2.4.16 pern HTTP/TCP 7.45 KB/s, 300s to copy a 2MB file
Could you try:
- if concurrent flood pings between cbgb and pern improve the throughput
with 2.4.16 and HTTP?
# ping -f pern
or
# ping -f cbgb
- Could you check what happens with 2.4.16 if you revert to the tulip
driver from 2.4.14?
Copy the entire linux/drivers/net/tulip/ directory from 2.4.14 into 2.4.16.
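In other words, rebuild 2.4.16's tulip module from the 2.4.14 sources. A
sketch of the copy direction, using throwaway directories here purely to
illustrate it; on a real system these would be the unpacked source trees,
and you would follow the copy with a module rebuild:

```shell
# Stand-in source trees (illustration only; real trees come from the tarballs):
mkdir -p linux-2.4.14/drivers/net/tulip linux-2.4.16/drivers/net/tulip
echo 'old driver' > linux-2.4.14/drivers/net/tulip/tulip_core.c
# Overwrite the 2.4.16 tulip sources with the 2.4.14 ones:
cp -a linux-2.4.14/drivers/net/tulip/. linux-2.4.16/drivers/net/tulip/
cat linux-2.4.16/drivers/net/tulip/tulip_core.c
# Then, in a real 2.4.16 tree:  cd linux-2.4.16 && make modules
```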
--
Manfred
Manfred Spraul <[email protected]> writes:
> Could you try:
> - if concurrent flood pings between cbgb and pern improve the throughput
> with 2.4.16 and HTTP?
>
> # ping -f pern
> or
> # ping -f cbgb
This didn't seem to make a difference.
> - Could you check what happens with 2.4.16 if you revert to the tulip
> driver from 2.4.14?
This definitely made a difference. I compiled the 2.4.14 tulip.o as a
module for the 2.4.16 kernel. If I insmod the 2.4.14 version, TCP
throughput is fine. If I take the interface down then insmod the 2.4.16
version, TCP throughput is very poor.
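For the record, the swap sequence was roughly the following (interface name
and module file names are placeholders), and the gap between the good and
bad cases in my earlier chart works out to about three orders of magnitude:

```shell
# A/B driver swap, as described above (names are placeholders):
#   ifconfig eth0 down
#   rmmod tulip
#   insmod tulip-2.4.14.o    # throughput fine
#   ifconfig eth0 up
#   (repeat with tulip-2.4.16.o: throughput collapses)
# Ratio of the chart's good and bad figures, ~12 MB/s vs 7.45 KB/s:
awk 'BEGIN { printf "%.0fx\n", (12 * 1024) / 7.45 }'
```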
Regards,
--
Dave Carrigan ([email protected]) | Yow! If I pull this SWITCH I'll
UNIX-Apache-Perl-Linux-Firewalls-LDAP-C-DNS | be RITA HAYWORTH!! Or a
Seattle, WA, USA | SCIENTOLOGIST!
http://www.rudedog.org/ |