2001-03-15 07:24:42

by Mårten Wikström

Subject: How to optimize routing performance

I've performed a test of the routing capacity of a Linux 2.4.2 box versus a
FreeBSD 4.2 box. I used two Pentium Pro 200 MHz computers with 64 MB memory,
and two DEC 100 Mbit ethernet cards. I used a Smartbits test tool to measure
the packet throughput, and the packet size was set to 64 bytes. Linux dropped
no packets up to about 27000 packets/s, but then it started to drop packets
at higher rates. Worse yet, the output rate actually decreased, so at an
input rate of 40000 packets/s almost no packets got through. The behaviour
of FreeBSD was different: it showed a steadily increasing output rate up to
about 70000 packets/s before the output rate decreased (at which point the
output rate was approx. 40000 packets/s).
I have not made any special optimizations, aside from not having any
background processes running.

So, my question is: are these figures true, or is it possible to optimize
the kernel somehow? The only change I have made to the kernel config was to
disable advanced routing.

Thanks,

Mårten


2001-03-15 12:58:15

by Rik van Riel

Subject: Re: How to optimize routing performance

On Thu, 15 Mar 2001, Mårten Wikström wrote:

> I've performed a test of the routing capacity of a Linux 2.4.2 box
> versus a FreeBSD 4.2 box. I used two Pentium Pro 200 MHz computers with
> 64 MB memory, and two DEC 100 Mbit ethernet cards. I used a Smartbits
> test tool to measure the packet throughput, and the packet size was set
> to 64 bytes. Linux dropped no packets up to about 27000 packets/s, but
> then it started to drop packets at higher rates. Worse yet, the output
> rate actually decreased, so at an input rate of 40000 packets/s
> almost no packets got through. The behaviour of FreeBSD was different:
> it showed a steadily increasing output rate up to about 70000 packets/s
> before the output rate decreased (at which point the output rate was
> approx. 40000 packets/s).

> So, my question is: are these figures true, or is it possible to
> optimize the kernel somehow? The only change I have made to the
> kernel config was to disable advanced routing.

There are some flow control options in the kernel which should
help. From your description, it looks like they aren't enabled
by default ...

At the NordU/USENIX conference in Stockholm (this February) I
saw a nice presentation on the flow control code in the Linux
network stack and how it improved networking performance.
I'm pretty convinced that flow control _should_ be saving your
system in this case.

OTOH, if they _are_ enabled, the networking people seem to have
a new item for their TODO list. ;)

regards,

Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/

2001-03-15 14:20:29

by Robert Olsson

Subject: Re: How to optimize routing performance


Rik van Riel writes:
> On Thu, 15 Mar 2001, Mårten Wikström wrote:
>
> > I've performed a test of the routing capacity of a Linux 2.4.2 box
> > versus a FreeBSD 4.2 box. I used two Pentium Pro 200 MHz computers with
> > 64 MB memory, and two DEC 100 Mbit ethernet cards. I used a Smartbits
> > test tool to measure the packet throughput, and the packet size was set
> > to 64 bytes. Linux dropped no packets up to about 27000 packets/s, but
> > then it started to drop packets at higher rates. Worse yet, the output
> > rate actually decreased, so at an input rate of 40000 packets/s

Yes, it is a known problem. And just as Rik says, it was first
addressed by Alexey in 2.1.x.


> > almost no packets got through. The behaviour of FreeBSD was different:
> > it showed a steadily increasing output rate up to about 70000 packets/s
> > before the output rate decreased (at which point the output rate was
> > approx. 40000 packets/s).
>
> > So, my question is: are these figures true, or is it possible to
> > optimize the kernel somehow? The only change I have made to the
> > kernel config was to disable advanced routing.
>
> There are some flow control options in the kernel which should
> help. From your description, it looks like they aren't enabled
> by default ...

CONFIG_NET_HW_FLOWCONTROL enables the kernel code for it, but device
drivers have to support it as well. Unfortunately, very few drivers
do.
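
For driver writers, the support is roughly this pattern (a simplified
sketch against the 2.4 CONFIG_NET_HW_FLOWCONTROL interface; the my_*
names and the private fc_bit field are illustrative, not from any
particular driver):

#include <linux/netdevice.h>

struct my_private {
	int fc_bit;		/* bit handle from netdev_register_fc() */
	/* ... */
};

#ifdef CONFIG_NET_HW_FLOWCONTROL
/* Called back by the core (via netdev_wakeup) once the backlog has
 * drained below the threshold: clear our xoff bit and re-enable RX
 * at the hardware level. */
static void my_xon(struct net_device *dev)
{
	struct my_private *tp = (struct my_private *) dev->priv;

	clear_bit(tp->fc_bit, &netdev_fc_xoff);
	if (netif_running(dev))
		my_enable_rx_irq(dev);		/* hardware-specific */
}
#endif

static int my_open(struct net_device *dev)
{
	struct my_private *tp = (struct my_private *) dev->priv;

	/* ... the usual ring and interrupt setup ... */
#ifdef CONFIG_NET_HW_FLOWCONTROL
	/* Register with the flow control core; undo at close time
	 * with netdev_unregister_fc(tp->fc_bit). */
	tp->fc_bit = netdev_register_fc(dev, my_xon);
#endif
	return 0;
}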

We have also done experiments where we move the device RX processing
to softirq context rather than doing it at IRQ time. With this, RX is
in better balance with other kernel tasks and with TX. Under very high
load and under DoS attacks the system stays manageable. It's in
practical use already.


> At the NordU/USENIX conference in Stockholm (this February) I
> saw a nice presentation on the flow control code in the Linux
> network stack and how it improved networking performance.
> I'm pretty convinced that flow control _should_ be saving your
> system in this case.

Thanks Rik.

This is work/experiments by Jamal and me, with support from gurus. :-)
Jamal gave this presentation at OLS 2000; at NordU/USENIX I gave an
updated version of it. The presentation is not yet available from
the USENIX web site, I think.

It can be fetched via FTP from robur.slu.se:
/pub/Linux/tmp/FF-NordUSENIX.pdf or .ps

In summary, Linux is a very decent router: wire speed with small
packets at 100 Mbps, and capable of gigabit routing (we tested with
1440-byte packets).

Also, if people are interested, we have done profiling on a Linux
production router with a full BGP feed at a pretty loaded site, to
get the costs of route lookup, skb alloc/free, interrupts, etc.

http://Linux/net-development/experiments/010313

I'm on netdev but not the kernel list.

Cheers.

--ro

2001-03-15 16:29:23

by Martin Josefsson

Subject: Re: How to optimize routing performance

On Thu, 15 Mar 2001, Rik van Riel wrote:

> On Thu, 15 Mar 2001, Mårten Wikström wrote:
>
> > I've performed a test of the routing capacity of a Linux 2.4.2 box
> > versus a FreeBSD 4.2 box. I used two Pentium Pro 200 MHz computers with
> > 64 MB memory, and two DEC 100 Mbit ethernet cards. I used a Smartbits
> > test tool to measure the packet throughput, and the packet size was set
> > to 64 bytes. Linux dropped no packets up to about 27000 packets/s, but
> > then it started to drop packets at higher rates. Worse yet, the output
> > rate actually decreased, so at an input rate of 40000 packets/s
> > almost no packets got through. The behaviour of FreeBSD was different:
> > it showed a steadily increasing output rate up to about 70000 packets/s
> > before the output rate decreased (at which point the output rate was
> > approx. 40000 packets/s).
>
> > So, my question is: are these figures true, or is it possible to
> > optimize the kernel somehow? The only change I have made to the
> > kernel config was to disable advanced routing.
>
> There are some flow control options in the kernel which should
> help. From your description, it looks like they aren't enabled
> by default ...

You want to have CONFIG_NET_HW_FLOWCONTROL enabled. If you don't, the
kernel gets _a lot_ of interrupts from the NIC and doesn't have any
cycles left to do anything. So you want to turn this on!

> At the NordU/USENIX conference in Stockholm (this February) I
> saw a nice presentation on the flow control code in the Linux
> network stack and how it improved networking performance.
> I'm pretty convinced that flow control _should_ be saving your
> system in this case.

That was probably Jamal Hadi and Robert Olsson. They have been optimizing
the tulip driver. These optimizations haven't been integrated into the
"vanilla" driver yet, but I hope they can be integrated soon.

They have one version that is heavily optimized, and another version
with even more optimizations, i.e. it switches to polling at high
interrupt load.

You will find these drivers here:
ftp://robur.slu.se/pub/Linux/net-development/
The latest versions are:
tulip-ss010111.tar.gz
and
tulip-ss010116-poll.tar.gz

> OTOH, if they _are_ enabled, the networking people seem to have
> a new item for their TODO list. ;)

Yup.

You can take a look here too:

http://robur.slu.se/Linux/net-development/jamal/FF-html/

This is the presentation they gave at OLS (IIRC).

And this is the final result:

http://robur.slu.se/Linux/net-development/jamal/FF-html/img26.htm

As you can see the throughput is a _lot_ higher with this driver.

One final note: the makefile in at least tulip-ss010111.tar.gz is in the
old format (not the new one that 2.4.0-testX introduced), but you can copy
the makefile from the "vanilla" driver and it'll work like a charm.

Please redo your tests with this driver and report the results to me and
this list. I really want to know how it compares against FreeBSD.

/Martin

2001-03-15 17:26:27

by Rik van Riel

Subject: Re: How to optimize routing performance

On Thu, 15 Mar 2001, Robert Olsson wrote:

> CONFIG_NET_HW_FLOWCONTROL enables kernel code for it. But device
> drivers has to have support for it. But unfortunely very few drivers
> has support for it.

Isn't it possible to put something like this in the layer just
above the driver?

It probably won't work as well as putting it directly in the
driver, but it'll at least keep Linux from collapsing under
really heavy loads ...

regards,

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com/

2001-03-15 18:10:51

by J Sloan

Subject: Re: How to optimize routing performance

Just my .02 -

There are some scheduler patches that are not part of the
main kernel tree at this point (mostly since they have yet to
be optimized for the common case) which make quite a big
difference under heavy load - you might want to check out:

http://lse.sourceforge.net/scheduling/

cu

Jup


Mårten Wikström wrote:

> I've performed a test of the routing capacity of a Linux 2.4.2 box versus a
> FreeBSD 4.2 box. I used two Pentium Pro 200 MHz computers with 64 MB memory,
> and two DEC 100 Mbit ethernet cards. I used a Smartbits test tool to measure
> the packet throughput, and the packet size was set to 64 bytes. Linux dropped
> no packets up to about 27000 packets/s, but then it started to drop packets
> at higher rates. Worse yet, the output rate actually decreased, so at an
> input rate of 40000 packets/s almost no packets got through. The behaviour
> of FreeBSD was different: it showed a steadily increasing output rate up to
> about 70000 packets/s before the output rate decreased (at which point the
> output rate was approx. 40000 packets/s).
> I have not made any special optimizations, aside from not having any
> background processes running.
>
> So, my question is: are these figures true, or is it possible to optimize
> the kernel somehow? The only change I have made to the kernel config was to
> disable advanced routing.
>
> Thanks,
>
> Mårten

2001-03-15 18:47:23

by Robert Olsson

Subject: Re: How to optimize routing performance


[Sorry for the length]

Rik van Riel writes:
> On Thu, 15 Mar 2001, Robert Olsson wrote:
>
> > CONFIG_NET_HW_FLOWCONTROL enables kernel code for it. But device
> > drivers has to have support for it. But unfortunely very few drivers
> > has support for it.
>
> Isn't it possible to put something like this in the layer just
> above the driver ?

There is a dropping point in netif_rx. The problem is that knowledge
of the congestion has to be pushed back to the devices that are causing it.

Alexey added netdev_dropping for drivers to check, and via netdev_wakeup()
the driver's xon method can be called when the backlog falls below a
certain threshold.

So from here the driver has to do the work: not investing any resources
or interrupts in packets we will have to drop anyway. That is what happens
at very high load, a kind of livelock. For routers, the routing protocols
will time out and we lose connectivity, but I would say it's important
for all applications.
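
Schematically, the RX side then does something like this (again just a
sketch; the my_* helpers are hypothetical and the hardware details
differ per driver):

static void my_rx(struct net_device *dev)
{
	struct my_private *tp = (struct my_private *) dev->priv;
	struct sk_buff *skb;

	while ((skb = my_next_rx_skb(dev)) != NULL) {
#ifdef CONFIG_NET_HW_FLOWCONTROL
		if (tp->fc_bit && atomic_read(&netdev_dropping)) {
			/* The backlog is overflowing: mark ourselves
			 * xoff, stop taking RX interrupts and drop this
			 * packet cheaply. netdev_wakeup() calls our xon
			 * method once the backlog has drained below the
			 * threshold. */
			set_bit(tp->fc_bit, &netdev_fc_xoff);
			my_disable_rx_irq(dev);	/* hardware-specific */
			dev_kfree_skb_irq(skb);
			break;
		}
#endif
		netif_rx(skb);
	}
}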

In 2.4.0-test10 Jamal added sampling of the backlog queue, so device
drivers can get the current congestion level. This opens new possibilities.


> It probably won't work as well as putting it directly in the
> driver, but it'll at least keep Linux from collapsing under
> really heavy loads ...


And we have done experiments with controlling interrupts and running
the RX at "lower" priority. The idea is to take the RX interrupt and
immediately postpone the RX processing to a tasklet; the tasklet
re-enables RX interrupts when it is done. This way the dropping now
occurs outside the box, and dropping becomes very undramatic.
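
In sketch form the idea is something like this (2.4-style tasklet API;
the my_* names are illustrative):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

static struct tasklet_struct my_rx_tasklet;

static void my_rx_tasklet_fn(unsigned long data)
{
	struct net_device *dev = (struct net_device *) data;

	my_process_rx_ring(dev);	/* netif_rx() each received skb */
	my_enable_rx_irq(dev);		/* only now open for new RX ints */
}

static void my_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
	struct net_device *dev = (struct net_device *) dev_id;

	if (my_irq_is_rx(dev)) {
		/* Ack and mask RX, then defer the real work. While we
		 * are busy the NIC itself drops any overflow, which
		 * costs the system nothing. */
		my_disable_rx_irq(dev);
		tasklet_schedule(&my_rx_tasklet);
	}
	/* ... TX and error handling as usual ... */
}

static int my_open(struct net_device *dev)
{
	tasklet_init(&my_rx_tasklet, my_rx_tasklet_fn, (unsigned long) dev);
	/* ... */
	return 0;
}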


As a little example of this, I monitored a DoS attack on a Linux router
equipped with this RX-tasklet driver.


Admin up 6 day(s) 13 hour(s) 37 min 54 sec
Last input NOW
Last output NOW
5min RX bit/s 22.4 M
5min TX bit/s 1.3 M
5min RX pkts/s 44079 <====
5min TX pkts/s 877
5min TX errors 0
5min RX errors 0
5min RX dropped 49913 <====

Fb: no 3127894088 low 154133938 mod 6 high 0 drp 0 <==== Congestion levels

Polling: ON starts/pkts/tasklet_count 96545881/2768574948/1850259980
HW_flowcontrol xon's 0



A bit of explanation: the above is output from the tulip driver. We are
forwarding 44079 and dropping 49913 packets per second! This box has a
full BGP feed. The DoS attack went on for about 30 minutes; BGP survived
and the box stayed manageable. Under a heavy attack it still performs well.


Cheers.

--ro

2001-03-15 18:53:23

by Rik van Riel

Subject: Re: How to optimize routing performance

On Thu, 15 Mar 2001, J Sloan wrote:

> There are some scheduler patches that are not part of the
> main kernel tree at this point (mostly since they have yet to
> be optimized for the common case) which make quite a big
> difference under heavy load - you might want to check out:
>
> http://lse.sourceforge.net/scheduling/

Unrelated. Fun, but unrelated to networking...

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com/

2001-03-15 19:18:23

by J Sloan

Subject: Re: How to optimize routing performance

Rik van Riel wrote:

> On Thu, 15 Mar 2001, J Sloan wrote:
>
> > There are some scheduler patches that are not part of the
> > main kernel tree at this point (mostly since they have yet to
> > be optimized for the common case) which make quite a big
> > difference under heavy load - you might want to check out:
> >
> > http://lse.sourceforge.net/scheduling/
>
> Unrelated. Fun, but unrelated to networking...

Fun, yes, and perhaps not directly related. However,
under high load, where the sheer number of interrupts
per second begins to overwhelm the kernel, might it
not be relevant? After all, the benchmarks do point to
tangible improvements in the performance of network
server apps.

Or are you saying that the bottleneck is somewhere
else completely, or that there wouldn't be a bottleneck
in this case if certain kernel parameters were correctly
set?

Just curious,

Jup


2001-03-15 19:23:43

by Rik van Riel

Subject: Re: How to optimize routing performance

On Thu, 15 Mar 2001, J Sloan wrote:
> Rik van Riel wrote:
> > On Thu, 15 Mar 2001, J Sloan wrote:
> >
> > > http://lse.sourceforge.net/scheduling/
> >
> > Unrelated. Fun, but unrelated to networking...
>
> Fun, yes, and perhaps not directly related. However,
> under high load, where the sheer number of interrupts
> per second begins to overwhelm the kernel, might it
> not be relevant?

No.

> Or are you saying that the bottleneck is somewhere
> else completely,

Indeed. The bottleneck is in processing the incoming network
packets, at the interrupt level.

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com/

2001-03-15 19:29:13

by J Sloan

Subject: Re: How to optimize routing performance

Rik van Riel wrote:

> On Thu, 15 Mar 2001, J Sloan wrote:
>
> > Fun, yes, and perhaps not directly related. However,
> > under high load, where the sheer number of interrupts
> > per second begins to overwhelm the kernel, might it
> > not be relevant?
>
> No.
>
> > Or are you saying that the bottleneck is somewhere
> > else completely,
>
> Indeed. The bottleneck is in processing the incoming network
> packets, at the interrupt level.

OK, I'll take this to kernel newbies!

:-)

Jup

2001-03-15 19:32:03

by Jonathan Morton

Subject: Re: How to optimize routing performance

> And we have done experiments with controlling interrupts and running
> the RX at "lower" priority. The idea is to take the RX interrupt and
> immediately postpone the RX processing to a tasklet; the tasklet
> re-enables RX interrupts when it is done. This way the dropping now
> occurs outside the box, and dropping becomes very undramatic.

<snip>

> A bit of explanation: the above is output from the tulip driver. We are
> forwarding 44079 and dropping 49913 packets per second! This box has a
> full BGP feed. The DoS attack went on for about 30 minutes; BGP survived
> and the box stayed manageable. Under a heavy attack it still performs well.

Nice. Any chance of similar functionality finding its way outside the
Tulip driver, e.g. to 3c509 or via-rhine? I'd find those useful, since one
or two of my Macs appear to be capable of generating pseudo-DoS levels of
traffic under certain circumstances which totally lock a 486 (for the
duration) and heavily load a P166 - even though said Macs "only" have
10baseT Ethernet.

OTOH, proper management of the circumstances under which this flooding
occurs (it's an interaction bug which occurs when the Linux machine ends up
with a zero-sized TCP receive window) would also be rather helpful.

--------------------------------------------------------------
from: Jonathan "Chromatix" Morton
mail: [email protected] (not for attachments)
big-mail: [email protected]
uni-mail: [email protected]

The key to knowledge is not to rely on people to teach you it.

Get VNC Server for Macintosh from http://www.chromatix.uklinux.net/vnc/

-----BEGIN GEEK CODE BLOCK-----
Version 3.12
GCS$/E/S dpu(!) s:- a20 C+++ UL++ P L+++ E W+ N- o? K? w--- O-- M++$ V? PS
PE- Y+ PGP++ t- 5- X- R !tv b++ DI+++ D G e+ h+ r++ y+(*)
-----END GEEK CODE BLOCK-----


2001-03-15 19:37:43

by Gregory Maxwell

Subject: Re: How to optimize routing performance

On Thu, Mar 15, 2001 at 11:17:19AM -0800, J Sloan wrote:
> Rik van Riel wrote:
> > On Thu, 15 Mar 2001, J Sloan wrote:
> >
> > > There are some scheduler patches that are not part of the
> > > main kernel tree at this point (mostly since they have yet to
> > > be optimized for the common case) which make quite a big
> > > difference under heavy load - you might want to check out:
> > >
> > > http://lse.sourceforge.net/scheduling/
> >
> > Unrelated. Fun, but unrelated to networking...
>
> under high load, where the sheer number of interrupts
> per second begins to overwhelm the kernel, might it
[snip]
> Or are you saying that the bottleneck is somewhere
> else completely, or that there wouldn't be a bottleneck
> in this case if certain kernel parameters were correctly
> set?

The scheduler schedules tasks, not interrupts. Unless it manages to thrash the
cache, the scheduler cannot affect routing performance.

2001-03-15 19:47:03

by J Sloan

Subject: Re: How to optimize routing performance

Gregory Maxwell wrote:

> The scheduler schedules tasks, not interrupts. Unless it manages to thrash the
> cache, the scheduler cannot affect routing performance.

OK, thanks for the clarification - I need to get into the source.

cu

Jup

2001-03-15 19:45:43

by Mike Kravetz

Subject: Re: How to optimize routing performance

On Thu, Mar 15, 2001 at 11:17:19AM -0800, J Sloan wrote:
> Rik van Riel wrote:
>
> > On Thu, 15 Mar 2001, J Sloan wrote:
> >
> > > There are some scheduler patches that are not part of the
> > > main kernel tree at this point (mostly since they have yet to
> > > be optimized for the common case) which make quite a big
> > > difference under heavy load - you might want to check out:
> > >
> > > http://lse.sourceforge.net/scheduling/
> >
> > Unrelated. Fun, but unrelated to networking...
>
> Fun, yes, and perhaps not directly related. However,
> under high load, where the sheer number of interrupts
> per second begins to overwhelm the kernel, might it
> not be relevant? After all, the benchmarks do point to
> tangible improvements in the performance of network
> server apps.

I'm not sure if these patches would be of any use here.

One benefit of the multi-queue scheduling patches is that
they allow multiple 'wakeups' to run in parallel instead
of being serialized by the global runqueue lock. Now if
you are getting lots of interrupts which result in task
wakeups that could potentially be run in parallel (on
separate CPUs with no other serialization in the way)
then you 'might' see some benefit. Those are some big IFs.

I know little about the networking stack or this workload.
Just wanted to explain how this scheduling work 'could'
be related to interrupt load.

--
Mike Kravetz [email protected]
IBM Linux Technology Center

2001-03-15 19:55:23

by Robert Olsson

Subject: Re: How to optimize routing performance



Jonathan Morton writes:

> Nice. Any chance of similar functionality finding its way outside the
> Tulip driver, e.g. to 3c509 or via-rhine? I'd find those useful, since one
> or two of my Macs appear to be capable of generating pseudo-DoS levels of
> traffic under certain circumstances which totally lock a 486 (for the
> duration) and heavily load a P166 - even though said Macs "only" have
> 10baseT Ethernet.

I'm not the one to tell. :-)

First, it's kind of experimental. Jamal has talked about putting together
a proposal for enhancing the RX process for inclusion in the 2.5 kernels.
There is a meeting soon for this.


But why not experiment a bit?

Cheers.

--ro

2001-03-15 20:25:44

by Jonathan Earle

Subject: RE: How to optimize routing performance



> > Or are you saying that the bottleneck is somewhere
> > else completely,
>
> Indeed. The bottleneck is in processing the incoming network
> packets, at the interrupt level.

Where is the counter for these dropped packets? If we run a few Mbit of
traffic through the box, we see noticeable percentages of lost packets (via
stats from the Ixia traffic generator). But where in Linux are these counts
stored? ifconfig does not appear to have the number.

Cheers!
Jon

2001-03-15 21:03:16

by jamal

Subject: Re: How to optimize routing performance



On Thu, 15 Mar 2001, Robert Olsson wrote:

>
>
> Jonathan Morton writes:
>
> > Nice. Any chance of similar functionality finding its way outside the
> > Tulip driver, e.g. to 3c509 or via-rhine? I'd find those useful, since one
> > or two of my Macs appear to be capable of generating pseudo-DoS levels of
> > traffic under certain circumstances which totally lock a 486 (for the
> > duration) and heavily load a P166 - even though said Macs "only" have
> > 10baseT Ethernet.
>
> I'm not the one to tell. :-)
>
> First, it's kind of experimental. Jamal has talked about putting together
> a proposal for enhancing the RX process for inclusion in the 2.5 kernels.
> There is a meeting soon for this.
>
>
> But why not experiment a bit?

I think one of the immediate things usable by drivers is to check the
netif_rx() return value and yield the CPU if the system is congested.
This is hardware independent. For the tulip, since it knows how to do
interrupt mitigation, it in fact reduces its interrupt rate. An even
simpler thing is to use HW_FLOWCONTROL, where you shut down the RX
interrupt based on system congestion (and get woken up later when
things get better).
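
Schematically, something like this (the netif_rx() return codes are
real; the work-limit handling is just one way to do the yield):

#include <linux/netdevice.h>

/* Hand one received skb to the stack and shrink the per-interrupt
 * work budget according to the congestion level netif_rx() reports;
 * a return of 0 means "stop now, yield the CPU, resume on the next
 * interrupt". */
static int my_rx_one(struct sk_buff *skb, int rx_work_limit)
{
	switch (netif_rx(skb)) {
	case NET_RX_SUCCESS:
	case NET_RX_CN_LOW:
		break;			/* no congestion */
	case NET_RX_CN_MOD:
	case NET_RX_CN_HIGH:
		rx_work_limit /= 2;	/* congested: do less this round */
		break;
	case NET_RX_DROP:
		rx_work_limit = 0;	/* backlog full: give up the CPU */
		break;
	}
	return rx_work_limit;
}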

For 2.5 the plan is to work around any hardware dependencies.

cheers,
jamal

2001-03-15 21:41:07

by Robert Olsson

Subject: Re: How to optimize routing performance


Manfred Spraul writes:
> >
> > http://Linux/net-development/experiments/010313
> >
> The link is broken, and I couldn't find it at http://www.linux.com. Did you
> forget the host?

Yes Sir!

The profile data from the Linux production router is at:

http://robur.slu.se/Linux/net-development/experiments/010313

Cheers.

--ro

2001-03-16 07:22:15

by Mårten Wikström

Subject: RE: How to optimize routing performance



> You want to have CONFIG_NET_HW_FLOWCONTROL enabled. If you don't, the
> kernel gets _a lot_ of interrupts from the NIC and doesn't have any
> cycles left to do anything. So you want to turn this on!
>
> [snip]
>
> You will find these drivers here:
> ftp://robur.slu.se/pub/Linux/net-development/
> The latest versions are:
> tulip-ss010111.tar.gz
> and
> tulip-ss010116-poll.tar.gz
>
> [snip]
>
> Please redo your tests with this driver and report the results to me
> and this list. I really want to know how it compares against FreeBSD.
>
> /Martin

Thanks! I'll try that out. How can I tell if the driver supports
CONFIG_NET_HW_FLOWCONTROL? I'm not sure, but I think the cards are
tulip-based; can I then use Robert & Jamal's optimised drivers?
It'll probably take some time before I can do further testing. (My employer
thinks I've spent too much time on it already...).

FYI, Linux had _much_ better delay variation characteristics than FreeBSD.
Typically no packet was delayed by more than 100 usec, whereas FreeBSD had
some packets delayed by about 2-3 msec.

/Mårten

2001-03-16 08:09:57

by Martin Josefsson

Subject: RE: How to optimize routing performance

On Fri, 16 Mar 2001, Mårten Wikström wrote:

[much text]
> Thanks! I'll try that out. How can I tell if the driver supports
> CONFIG_NET_HW_FLOWCONTROL? I'm not sure, but I think the cards are
> tulip-based; can I then use Robert & Jamal's optimised drivers?
> It'll probably take some time before I can do further testing. (My employer
> thinks I've spent too much time on it already...).

I don't really know how to tell, except
'grep CONFIG_NET_HW_FLOWCONTROL driverfiles'

You said that the cards were 100 Mbit DEC cards; I assumed by that
you meant that the cards use the DECchip 21143 or similar chips.
If that's true you can use Robert & Jamal's optimised drivers.

Sorry to hear that your employer doesn't see the importance of such a test
:)

> FYI, Linux had _much_ better delay variation characteristics than FreeBSD.
> Typically no packet was delayed by more than 100 usec, whereas FreeBSD had
> some packets delayed by about 2-3 msec.

This sounds promising. So Linux had nice variations until it broke down
completely and stopped routing because of all the interrupts. I can almost
guarantee that with the optimised driver and CONFIG_NET_HW_FLOWCONTROL
you'll see a _big_ improvement in routing performance.

/Martin