Hello,
I have a dual Athlon w/ 512M RAM and three NICs (one gigabit
3c985B running 802.1Q with 5 VLANs, two on-board 100Mbit 3c982). This box
has almost nothing other to do apart from routing and packet filtering.
Is there anything I can do to tell the VM system to use as much memory
for network packets as possible?
I haven't got time to measure the throughput at gigabit speeds
yet, but I wonder if there is a way to tell the kernel "this box does
routing/firewalling, and almost nothing else".
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
#include <stdio.h>
int main(void) { printf("\t\b\b"); return 0; }
Jan Kasprzak <[email protected]> :
[...]
> I have a dual Athlon w/ 512M RAM and three NICs (one gigabit
> 3c985B running 802.1Q with 5 VLANs, two on-board 100Mbit 3c982). This box
> has almost nothing other to do apart from routing and packet filtering.
> Is there anything I can do to tell the VM system to use as much memory
> for network packets as possible?
In a sysctl fashion? No.
However you can increase the length of the Rx/Tx rings on the 100Mb/s side
and tune the pci latency timers (depends on the hardware fifo size).
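A rough sketch of both knobs (the device name and bus address are examples, and `ethtool -G` support depends on the driver; some NICs only take ring sizes as module parameters):

```shell
# Enlarge the Rx/Tx descriptor rings, if the driver supports it.
ethtool -g eth1                      # show current and maximum ring sizes
ethtool -G eth1 rx 256 tx 256        # grow the rings (example values)

# Raise the PCI latency timer so the NIC can hold the bus long enough
# to drain its FIFO (value is in PCI clocks, written in hex).
lspci                                # find the NIC's bus address
setpci -s 00:0a.0 latency_timer=40   # hypothetical address, 0x40 = 64 clocks
```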
--
Ueimor
In article <[email protected]> you wrote:
>> I have a dual Athlon w/ 512M RAM and three NICs (one gigabit
>> 3c985B running 802.1Q with 5 VLANs, two on-board 100Mbit 3c982). This box
>> has almost nothing other to do apart from routing and packet filtering.
>> Is there anything I can do to tell the VM system to use as much memory
>> for network packets as possible?
> In a sysctl fashion ? No.
You can increase the reserved free memory (not sure where to do this in
2.4.x). This is important because network memory requests usually happen
within interrupt handlers, where no paging can occur. You can play a bit
with the memory settings in net/*_mem. Most importantly, you can configure
the kernel for large window sizes and advanced routing.
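A minimal sketch of where those net/*_mem knobs live on a 2.4 kernel (the values written here are illustrative, not recommendations):

```shell
# Per-socket buffer bounds used by the core networking code.
echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max

# TCP's own memory limits: tcp_mem is in pages (low/pressure/high),
# tcp_rmem and tcp_wmem are in bytes (min/default/max).
cat /proc/sys/net/ipv4/tcp_mem
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_wmem
```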
> However you can increase the length of the Rx/Tx rings on the 100Mb/s side
> and tune the pci latency timers (depends on the hardware fifo size).
Increasing the Rx/Tx rings is not a particularly good idea because it slows
down TCP's adaptation to network congestion and router overload.
Greetings
Bernd
Bernd Eckenfels <[email protected]> :
[...]
> You can increase the reserved free memory (not sure where to do this in
> 2.4.x). This is important because network memory requests usually happen
> within interrupt handlers, where no paging can occur. You can play a bit
This reserve isn't dedicated to networking alas.
[...]
> > However you can increase the length of the Rx/Tx rings on the 100Mb/s side
> > and tune the pci latency timers (depends on the hardware fifo size).
>
> Increasing the Rx/Tx rings is not a particularly good idea because it slows
> down TCP's adaptation to network congestion and router overload.
Think about forwarding between GigaE and FastE. Think about overflow and
bad irq latency. I wouldn't cut buffering at L2, as it averages out the peaks.
Different trade-offs make sense of course.
--
Ueimor
In article <[email protected]> you wrote:
>> You can increase the reserved free memory (not sure where to do this in
> This reserve isn't dedicated to networking alas.
But it is used for atomic kernel memory requests, which mostly come from
interrupt handlers. On a network-loaded box, most of them are from the NICs.
Greetings
Bernd
Bernd Eckenfels <[email protected]> :
> In article <[email protected]> you wrote:
> >> You can increase the reserved free memory (not sure where to do this in
> > This reserve isn't dedicated to networking alas.
>
> But it is used for atomic kernel memory requests, which mostly come from
> interrupt handlers. On a network-loaded box, most of them are from the NICs.
The word "firewall" triggered the syslog activity led in my head. :o)
<grep, grep>
How would you estimate the memory required for the whole networking
stack (GFP_ATOMIC is widely used outside of the irq handlers themselves)?
--
Ueimor
In article <[email protected]> you wrote:
> Think about forwarding between GigaE and FastE. Think about overflow and
> bad irq latency. I wouldn't cut buffering at L2, as it averages out the peaks.
> Different trade-offs make sense of course.
I think in that case increasing the buffers is important:
net.core.rmem_max=262144
net.core.wmem_max=262144
The defaults are:
optmem_max:   10240
rmem_default: 65535
rmem_max:     65535
wmem_default: 65535
wmem_max:     65535
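Applied with sysctl(8), that would look like the following (the same lines can go into /etc/sysctl.conf to survive a reboot):

```shell
# Raise the core socket buffer ceilings, then read them back to verify.
sysctl -w net.core.rmem_max=262144
sysctl -w net.core.wmem_max=262144
sysctl net.core.rmem_max net.core.wmem_max
```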
Greetings
Bernd
Bernd Eckenfels <[email protected]> :
> In article <[email protected]> you wrote:
> > Think about forwarding between GigaE and FastE. Think about overflow and
> > bad irq latency. I wouldn't cut buffering at L2, as it averages out the peaks.
> > Different trade-offs make sense of course.
>
> I think in that case increasing the buffers is important:
>
> net.core.rmem_max=262144
> net.core.wmem_max=262144
Aren't these only useful to userspace apps?
--
Ueimor
Bernd Eckenfels wrote:
: Most important you can configure the
: kernel for large window sizes and advanced routing.
Advanced routing is a feature option, not a performance one, I think
(yes, I have advanced routing configured in - I use the "ip rule" mechanism
for simple filtering. It is faster than iptables because it uses the
routing cache).
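A sketch of such "ip rule" filtering (the addresses and priorities are hypothetical, and the available rule actions depend on the iproute2 version):

```shell
# Drop packets with spoofed internal source addresses silently;
# the decision is made by the routing code and cached in the routing cache.
ip rule add from 10.0.0.0/8 prio 100 blackhole

# Reject traffic to a blocked destination with an ICMP error instead.
ip rule add to 192.0.2.0/24 prio 101 prohibit

# Inspect the installed rules.
ip rule list
```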
Large window sizes matter only for TCP servers, not for
an IP router/firewall, I think. The only TCP traffic my firewall handles
itself is incoming ssh and outgoing smtp messages from my IDS.
: > However you can increase the length of the Rx/Tx rings on the 100Mb/s side
: > and tune the pci latency timers (depends on the hardware fifo size).
:
: Increasing the Rx/Tx rings is not a particularly good idea because it slows
: down TCP's adaptation to network congestion and router overload.
OK. I have the following:
/proc/sys/net/core/optmem_max 10240
/proc/sys/net/core/rmem_default 65535
/proc/sys/net/core/rmem_max 131071
/proc/sys/net/core/wmem_default 65535
/proc/sys/net/core/wmem_max 131071
I can surely increase rmem_max and wmem_max (and the _default values).
What is optmem_max? And what is the difference between [rw]mem_max
and _default?
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839 Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/ Czech Linux Homepage: http://www.linux.cz/ |
#include <stdio.h>
int main(void) { printf("\t\b\b"); return 0; }