Subject: [ANNOUNCE] nf-hipac v0.8 released

Hi

We have released a new version of nf-hipac. We rewrote most of the code
and added a bunch of new features. The main enhancements are
user-defined chains, generic support for iptables targets and matches,
and 64 bit atomic counters.


For all of you who don't know nf-hipac yet, here is a short overview:

nf-hipac is a drop-in replacement for the iptables packet filtering module.
It implements a novel framework for packet classification which uses an
advanced algorithm to reduce the number of memory lookups per packet.
The module is ideal for environments where large rulesets and/or high
bandwidth networks are involved. Its userspace tool, which is also called
'nf-hipac', is designed to be as compatible as possible with 'iptables -t
filter'.
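
For example, an existing filter rule should carry over nearly verbatim
(a sketch; the option names are assumed to follow the iptables-compatible
subset):

    # the same rule, once for iptables and once for nf-hipac
    iptables -A FORWARD -p tcp -s 10.0.0.1 --dport 25 -j DROP
    nf-hipac -A FORWARD -p tcp -s 10.0.0.1 --dport 25 -j DROP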

The official project web page is: http://www.hipac.org
The releases can be downloaded from: http://sourceforge.net/projects/nf-hipac

Features:
- optimized for high performance packet classification with moderate
memory usage
- completely dynamic: the data structure isn't rebuilt from scratch when
inserting or deleting rules, so fast updates are possible
- very short locking times during rule updates: packet matching is
not blocked
- support for 64 bit architectures
- optimized kernel-user protocol (netlink): improved rule listing
speed
- libnfhipac: netlink library for kernel-user communication
- native match support for:
+ source/destination ip
+ in/out interface
+ protocol (udp, tcp, icmp)
+ fragments
+ source/destination ports (udp, tcp)
+ tcp flags
+ icmp type
+ connection state
+ ttl
- match negation (!); see the example after this list
- iptables compatibility: syntax and semantics of the userspace tool
are very similar to iptables
- coexistence of nf-hipac and iptables: both facilities can be used
at the same time
- generic support for iptables targets and matches (binary
compatibility)
- integration into the netfilter connection tracking facility
- user-defined chains support
- 64 bit atomic counters
- kernel module autoloading
- /proc/net/nf-hipac/info:
+ dynamically limit the maximum memory usage
+ change invocation order of nf-hipac and iptables
- extended statistics via /proc/net/nf-hipac/statistics/*
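
As a quick illustration of the match negation and the generic iptables
match support listed above (a hypothetical sketch, assuming the
iptables-style option syntax):

    # native match with negation: accept TCP to port 80 unless it comes
    # from the internal range
    nf-hipac -A INPUT -p tcp -s ! 192.168.0.0/16 --dport 80 -j ACCEPT
    # an iptables match loaded through the binary compatibility layer
    nf-hipac -A INPUT -p tcp --dport 22 -m limit --limit 5/s -j LOG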


We are currently working on extending the hipac algorithm to do classification
with several stages. The hipac algorithm will then be capable of combining
several classification problems in one data structure, e.g. it will be
possible to solve routing and firewalling with one hipac lookup. The idea is
to shorten the packet forwarding path by combining fib_lookup and iptables
filter lookup into one hipac query. To further improve the performance in
this scenario the upcoming flow cache could be used to cache recent hipac
results.



Enjoy,

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+


2003-06-25 20:49:17

by folkert

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi,

> nf-hipac is a drop-in replacement for the iptables packet filtering module.
> It implements a novel framework for packet classification which uses an
> advanced algorithm to reduce the number of memory lookups per packet.
> The module is ideal for environments where large rulesets and/or high
> bandwidth networks are involved. Its userspace tool, which is also called
> 'nf-hipac', is designed to be as compatible as possible with 'iptables -t
> filter'.

Looks great!
Any chance of a port to 2.5.x?


Greetings,

Folkert van Heusden

+-> http://www.vanheusden.com [email protected] +31-6-41278122 <-+
+--------------------------------------------------------------------------+
| UNIX sysop? Then give MultiTail ( http://www.vanheusden.com/multitail/ ) |
| a try, it brings monitoring logfiles (and such) to a different level! |
+--------------------------------------------------------------------------+

2003-06-25 23:39:06

by Thomas Heinz

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Folkert

You wrote:
> Looks great!
> Any chance of a port to 2.5.x?

It should not be that hard to port nf-hipac to 2.5 since most of the
code (the whole hipac core) is not kernel-specific.
But since we are busy planning the next hipac extension, we don't have
the time to do this ourselves.
Maybe some volunteer is willing to implement the port.


Thomas

2003-06-26 13:34:52

by Daniel Egger

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

On Wed, 2003-06-25 at 22:48, Michael Bellion and Thomas Heinz wrote:

> - libnfhipac: netlink library for kernel-user communication

Is this library actually usable for applications which need to control
the firewall, or is it just as braindead as libiptables?

--
Servus,
Daniel


Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Daniel

You wrote:
>> - libnfhipac: netlink library for kernel-user communication
>
> Is this library actually usable for applications which need to control
> the firewall, or is it just as braindead as libiptables?

The library _is_ intended to be used by applications other than the
nf-hipac userspace tool, too. It hides the netlink communication from
the user, who only has to construct the command data structure sent to
the kernel; it contains at most one nf-hipac rule. This is very
straightforward, and the kernel returns detailed errors if the packet
is malformed.

Taking a look at nfhp_com.h and possibly nf-hipac.c gives you an idea
of how to build valid command packets.


Regards,

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+

2003-06-26 14:33:30

by Daniel Egger

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

On Thu, 2003-06-26 at 16:20, Michael Bellion and Thomas Heinz wrote:

> Taking a look at nfhp_com.h and possibly nf-hipac.c gives you an idea
> of how to build valid command packets.

Thanks. Your reply made me somewhat curious and I'll definitely have a
look, hoping the interface is much better than libiptables, which is
merely a bunch of convenience functions for driving the iptables utility
but unusable for real-world applications which need to deal with
firewall rules.

--
Servus,
Daniel



2003-06-27 05:52:27

by Pekka Savola

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi,

Looks interesting. Is there experience with this in bridging firewall
scenarios? (With or without external patchsets like
http://ebtables.sourceforge.net/)

Further, you mention the performance reasons for this approach. I would
be very interested to see some figures.

(As it happens, we've done some testing with different iptables rules
ourselves, and noticed significant problems, especially when you move
from matching IP addresses down to UDP/TCP ports, for example.)

On Wed, 25 Jun 2003, Michael Bellion and Thomas Heinz wrote:
> [full announcement snipped]

--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings


Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Pekka

You wrote:
> Looks interesting. Is there experience with this in bridging firewall
> scenarios? (With or without external patchsets like
> http://ebtables.sourceforge.net/)

Sorry for this answer being so late but we wanted to check whether
nf-hipac works with the ebtables patch first in order to give you
a definite answer. We tried on a sparc64 which was a bad decision
because the ebtables patch does not work on sparc64 systems.
We are going to test the stuff tomorrow on an i386 and tell you
the results afterwards.

In principle, nf-hipac should work properly with the bridge patch.
We expect it to work just like iptables apart from the fact that
you cannot match on bridge ports. The iptables in/out interface
match in 2.4 matches if either the in/out dev _or_ the in/out
physdev matches; the nf-hipac in/out interface match matches
solely on the in/out dev.

> Further, you mention the performance reasons for this approach. I would
> be very interested to see some figures.

We have done some performance tests with an older release of nf-hipac.
The results are available on http://www.hipac.org/

Apart from that Roberto Nibali did some preliminary testing on nf-hipac.
You can find his posting to linux-kernel here:
http://marc.theaimsgroup.com/?l=linux-kernel&m=103358029605079&w=2

Since there are currently no performance tests available for the
new release we want to encourage people interested in firewall
performance evaluation to include nf-hipac in their tests.


Regards,

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+

2003-06-29 06:12:42

by Pekka Savola

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi,

On Sat, 28 Jun 2003, Michael Bellion and Thomas Heinz wrote:
> You wrote:
> > Looks interesting. Is there experience about this in bridging firewall
> > scenarios? (With or without external patchset's like
> > http://ebtables.sourceforge.net/)
>
> Sorry for this answer being so late but we wanted to check whether
> nf-hipac works with the ebtables patch first in order to give you
> a definite answer. We tried on a sparc64 which was a bad decision
> because the ebtables patch does not work on sparc64 systems.
> We are going to test the stuff tomorrow on an i386 and tell you
> the results afterwards.
>
> In principle, nf-hipac should work properly with the bridge patch.
> We expect it to work just like iptables apart from the fact that
> you cannot match on bridge ports. The iptables in/out interface
> match in 2.4 matches if either the in/out dev _or_ the in/out
> physdev matches; the nf-hipac in/out interface match matches
> solely on the in/out dev.

Thanks for this information.

> > Further, you mention the performance reasons for this approach. I would
> > be very interested to see some figures.
>
> We have done some performance tests with an older release of nf-hipac.
> The results are available on http://www.hipac.org/
>
> Apart from that Roberto Nibali did some preliminary testing on nf-hipac.
> You can find his posting to linux-kernel here:
> http://marc.theaimsgroup.com/?l=linux-kernel&m=103358029605079&w=2
>
> Since there are currently no performance tests available for the
> new release we want to encourage people interested in firewall
> performance evaluation to include nf-hipac in their tests.

Yes, I had missed this when I quickly looked at the web page using lynx.
Thanks.

One obvious thing that's missing from your performance figures and
Roberto's is what *exactly* the non-matching rules are. I.e., do they
only match an IP address, a TCP port, or what? (TCP port matching is
about a degree of complexity more expensive with iptables, I recall.)

--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings

2003-06-29 07:31:56

by Roberto Nibali

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hello,

>>Apart from that Roberto Nibali did some preliminary testing on nf-hipac.
>>You can find his posting to linux-kernel here:
>>http://marc.theaimsgroup.com/?l=linux-kernel&m=103358029605079&w=2
>>
>>Since there are currently no performance tests available for the
>>new release we want to encourage people interested in firewall
>>performance evaluation to include nf-hipac in their tests.
>
> Yes, I had missed this when I quickly looked at the web page using lynx.
> Thanks.
>
> One obvious thing that's missing from your performance figures and
> Roberto's is what *exactly* the non-matching rules are. I.e., do they
> only match an IP address, a TCP port, or what? (TCP port matching is
> about a degree of complexity more expensive with iptables, I recall.)

When I did the tests I used a variant of the following simple script [1].

There you can see that I only used a src port range. In an original
paper I wrote for my company (announced here [2]) I created rules
that only matched IP addresses; the results were bad enough ;).
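
A generator in that spirit might look like the following (a hypothetical
sketch, not the actual genrules.sh from [1]; it appends never-matching
DROP rules on a source-port range so every packet traverses the whole
chain):

    #!/bin/sh
    # build N linear rules; test traffic uses source ports above $N,
    # so none of the DROP rules ever match
    N=${1:-1024}
    i=1
    while [ $i -le $N ]; do
        iptables -A INPUT -p udp --sport $i -j DROP
        i=$((i + 1))
    done
    iptables -A INPUT -p udp -j ACCEPT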

Meanwhile I should revise the paper, as quite a few things have been
addressed since then: for example, the performance issues with OpenBSD
packet filtering have mostly been squashed. I didn't continue on that
matter because I fell severely ill last autumn and first had to take
care of that.

[1] http://www.drugphish.ch/~ratz/genrules.sh
[2] http://www.ussg.iu.edu/hypermail/linux/kernel/0203.3/0847.html

HTH and Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Pekka

You wrote:
>>We are going to test the stuff tomorrow on an i386 and tell you
>>the results afterwards.

Well, nf-hipac works fine together with the ebtables patch for 2.4.21
on an i386 machine. We expect it to work with other patches too.

>>In principle, nf-hipac should work properly with the bridge patch.
>>We expect it to work just like iptables apart from the fact that
>>you cannot match on bridge ports.

Well, this statement holds for the native nf-hipac in/out interface
match, but of course you can match on bridge ports with nf-hipac
using the iptables physdev match. So everything should be fine :)
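
For instance (a sketch; it assumes nf-hipac passes the physdev options
through to the iptables match unchanged):

    # bridge-port match via the generic iptables physdev match
    nf-hipac -A FORWARD -m physdev --physdev-in eth0 -j DROP
    # the native interface match only sees the bridge device itself
    nf-hipac -A FORWARD -i br0 -j ACCEPT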

> One obvious thing that's missing from your performance figures and
> Roberto's is what *exactly* the non-matching rules are. I.e., do they
> only match an IP address, a TCP port, or what? (TCP port matching is
> about a degree of complexity more expensive with iptables, I recall.)

[answered in private e-mail]


Regards,

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+

2003-07-02 05:15:49

by Pekka Savola

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi,

Thanks for your clarification. We've also conducted some tests with
bridging firewall functionality, and we're very pleased with nf-hipac's
performance! Results below.

In the measurements, tests were run through a bridging Linux firewall,
with a netperf UDP stream of 1450 byte packets (launched from a different
computer connected with gigabit ethernet), with a varying number of
filtering rules checked for each packet.
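
The sending side was presumably invoked along these lines (a sketch;
the host address and run time are made up):

    # 60-second netperf UDP stream of 1450-byte messages through the bridge
    netperf -H 10.0.0.2 -t UDP_STREAM -l 60 -- -m 1450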

I don't have the specs of the Linux PC hardware handy, but I recall
they're *very* high-end dual P4s, like 2.4 GHz, very fast PCI bus, etc.
Shouldn't be a factor here.

1. Filtering based on source address only, for example:
$fwcmd -A $MAIN -p udp -s 10.0.0.1 -j DROP
...
$fwcmd -A $MAIN -p udp -s 10.0.3.255 -j DROP
$fwcmd -A $MAIN -p udp -j ACCEPT

Results:
 rules |       plain NF       |       NF-HIPAC       |
       |   sent   | got thru  |   sent   | got thru  |
 (no.) | (Mbit/s) | (Mbit/s)  | (Mbit/s) | (Mbit/s)  |
-------------------------------------------------------
     0 |  956,00  |  953,24   |  956,00  |  953,24   |
   512 |  956,00  |  800,68   |  956,46  |  952,81   |
  1024 |  956,00  |  472,78   |  956,46  |  952,81   |
  2048 |  955,99  |  170,52   |  956,46  |  952,86   |
  3072 |  956,00  |   51,97   |  956,46  |  952,85   |

2. Filtering based on UDP protocol's source port, for example:
$fwcmd -A $MAIN -p udp --source-port 1 -j DROP
...
$fwcmd -A $MAIN -p udp --source-port 1024 -j DROP
$fwcmd -A $MAIN -p udp -j ACCEPT

Results:
 rules |       plain NF       |       NF-HIPAC       |
       |   sent   | got thru  |   sent   | got thru  |
 (no.) | (Mbit/s) | (Mbit/s)  | (Mbit/s) | (Mbit/s)  |
-------------------------------------------------------
     0 |  955,37  |  954,33   |  956,46  |  952,85   |
   512 |  980,68  |  261,41   |  956,46  |  951,92   |
  1024 |    N/A   |    N/A    |  956,47  |  952,86   |
  2048 |    N/A   |    N/A    |  956,46  |  952,85   |
  3072 |    N/A   |    N/A    |  956,46  |  952,85   |

N/A = Netfilter bridging can't handle this at all; no traffic can pass the
bridge.

So, plain Netfilter can tolerate only a couple of hundred rules
checking for addresses and/or ports on a gigabit line.

With HIPAC Netfilter, packet loss is very low, less than 0.5%, even with
the maximum number of rules tested; the loss is the same as without any
filtering at all.


On Sun, 29 Jun 2003, Michael Bellion and Thomas Heinz wrote:
> [previous message snipped]

--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Pekka

> Thanks for your clarification. We've also conducted some tests with
> bridging firewall functionality, and we're very pleased with nf-hipac's
> performance! Results below.

Great, thanks a lot. Your tests are very interesting for us as we haven't done
any gigabit or SMP tests yet.

> In the measurements, tests were run through a bridging Linux firewall,
> with a netperf UDP stream of 1450 byte packets (launched from a different
> computer connected with gigabit ethernet), with a varying number of
> filtering rules checked for each packet.
> I don't have the specs of the Linux PC hardware handy, but I recall
> they're *very* high-end dual P4s, like 2.4 GHz, very fast PCI bus, etc.

Since real-world network traffic always consists of a lot of different
packet sizes, taking maximum-sized packets is rather flattering. 1450 byte
packets at 950 Mbit/s correspond to approx. 80,000 packets/sec.
We are really interested in how our algorithm performs at higher packet
rates. Our performance tests are based on 100 Mbit hardware, so we
couldn't test with more than approx. 80,000 packets/sec even with
minimum-sized packets. At this packet rate we were hardly able to drive
the algorithm to its limit, even with more than 25000 rules involved
(and our test system was a 1.3 GHz uniprocessor).
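
(As a back-of-the-envelope check of that figure, ignoring Ethernet
framing overhead:)

    echo $(( 950000000 / (1450 * 8) ))    # => 81896, i.e. approx. 80,000 pkt/s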

We'd appreciate it very much if you could run additional tests with smaller
packet sizes (including minimum packet size). This way we can get an idea of
whether our SMP optimizations work and whether our algorithm in general would
benefit from further fine tuning.


Regards

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+

2003-07-02 13:00:26

by Pádraig Brady

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Michael Bellion and Thomas Heinz wrote:
> Hi Pekka
>
>
>>Thanks for your clarification. We've also conducted some tests with
>>bridging firewall functionality, and we're very pleased with nf-hipac's
>>performance! Results below.
>
>
> Great, thanks a lot. Your tests are very interesting for us as we haven't done
> any gigabit or SMP tests yet.
>
>>In the measurements, tests were run through a bridging Linux firewall,
>>with a netperf UDP stream of 1450 byte packets (launched from a different
>>computer connected with gigabit ethernet), with a varying number of
>>filtering rules checked for each packet.
>>I don't have the specs of the Linux PC hardware handy, but I recall
>>they're *very* high-end dual P4s, like 2.4 GHz, very fast PCI bus, etc.
>
> Since real-world network traffic always consists of a lot of different
> packet sizes, taking maximum-sized packets is rather flattering. 1450 byte
> packets at 950 Mbit/s correspond to approx. 80,000 packets/sec.
> We are really interested in how our algorithm performs at higher packet
> rates. Our performance tests are based on 100 Mbit hardware, so we
> couldn't test with more than approx. 80,000 packets/sec even with
> minimum-sized packets.

Interrupt latency is the problem here. You'll require NAPI et al.
to get over this hump.

> At this
> packet rate we were hardly able to drive the algorithm to its limit, even
> with more than 25000 rules involved (and our test system was a 1.3 GHz
> uniprocessor).

Cool. The same sort of test with ordinary netfilter that
I did showed it could only handle around 125 rules at this
packet rate on a 1.4GHz PIII, e1000 @ 100Mb/s.

# ./readprofile -m /boot/System.map | sort -nr | head -30
  6779 total                          0.0047
  4441 default_idle                  69.3906
   787 handle_IRQ_event               7.0268
   589 ip_packet_match                1.6733
   433 ipt_do_table                   0.6294
   106 eth_type_trans                 0.5521
    56 kfree                          0.8750
    46 skb_release_data               0.3194
    37 add_timer_randomness           0.1542
    35 alloc_skb                      0.0781
    30 __kmem_cache_alloc             0.1172
    27 kmalloc                        0.3375
    23 ip_rcv                         0.0342
    22 do_gettimeofday                0.1964
    20 netif_rx                       0.0521
    19 __kfree_skb                    0.0540
    18 add_entropy_words              0.1023
    15 __constant_c_and_count_memset  0.0938
    13 batch_entropy_store            0.0813
    12 kfree_skbmem                   0.1071
    11 netif_receive_skb              0.0208
     7 nf_iterate                     0.0437
     7 nf_hook_slow                   0.0175
     6 process_backlog                0.0221
     5 batch_entropy_process          0.0223
     5 add_interrupt_randomness       0.0781
     3 kmem_cache_free                0.0625
     2 ipt_hook                       0.0312
     1 write_profile                  0.0156
     1 ip_promisc_rcv_finish          0.0208
Pádraig.

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Pádraig

> > Since real-world network traffic always consists of a lot of different
> > packet sizes, taking maximum-sized packets is rather flattering. 1450
> > byte packets at 950 Mbit/s correspond to approx. 80,000 packets/sec.
> > We are really interested in how our algorithm performs at higher packet
> > rates. Our performance tests are based on 100 Mbit hardware, so we
> > couldn't test with more than approx. 80,000 packets/sec even with
> > minimum-sized packets.
>
> Interrupt latency is the problem here. You'll require NAPI et al.
> to get over this hump.

Yes we know, but with a 128 byte frame size you can achieve a packet rate
of at most 97,656 packets/sec (in theory) on 100 Mbit hardware. We don't
think these few extra packets would have changed the results
fundamentally, so it's probably not worth it on 100 Mbit.
Certainly you are right that NAPI is required on gigabit to saturate the
link with small-sized packets.
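
(As a rough theoretical check; this ignores the inter-frame gap and
preamble:)

    echo $(( 100000000 / (128 * 8) ))    # => 97656 packets/sec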

> Cool. The same sort of test with ordinary netfilter that
> I did showed it could only handle around 125 rules at this
> packet rate on a 1.4GHz PIII, e1000 @ 100Mb/s.
>
> # ./readprofile -m /boot/System.map | sort -nr | head -30
> 6779 total 0.0047
> 4441 default_idle 69.3906
> 787 handle_IRQ_event 7.0268
> 589 ip_packet_match 1.6733
> 433 ipt_do_table 0.6294
> 106 eth_type_trans 0.5521
> [...]

What do you want to show with this profile? Most of the time is spent in
the idle loop and in IRQ handling and only a few percent in
ip_packet_match and ipt_do_table, so we don't quite get how this matches
your statement above. Could you explain this in a few words?

Regards,

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+

2003-07-02 14:14:59

by Pádraig Brady

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Michael Bellion and Thomas Heinz wrote:
> Hi Pádraig
>
>
>>>Since real-world network traffic always consists of a lot of different
>>>packet sizes, taking maximum-sized packets is rather flattering. 1450
>>>byte packets at 950 Mbit/s correspond to approx. 80,000 packets/sec.
>>>We are really interested in how our algorithm performs at higher packet
>>>rates. Our performance tests are based on 100 Mbit hardware, so we
>>>couldn't test with more than approx. 80,000 packets/sec even with
>>>minimum-sized packets.
>>
>>Interrupt latency is the problem here. You'll require NAPI et al.
>>to get over this hump.
>
> Yes we know, but with a 128 byte frame size you can achieve a packet rate
> of at most 97,656 packets/sec (in theory) on 100 Mbit hardware. We don't
> think these few extra packets would have changed the results
> fundamentally, so it's probably not worth it on 100 Mbit.

I was testing with 64 byte packets (so around 190Kpps). e100 cards at
least have a handy mode for continually sending a packet as fast as
possible. Also you can use more than one interface. So 100Mb
is very useful for testing. For the test below I was using
a rate of around 85Kpps.

> Certainly you are right that NAPI is required on gigabit to saturate the
> link with small-sized packets.
>
>>Cool. The same sort of test with ordinary netfilter that
>>I did showed it could only handle around 125 rules at this
>>packet rate on a 1.4GHz PIII, e1000 @ 100Mb/s.
>>
>># ./readprofile -m /boot/System.map | sort -nr | head -30
>> 6779 total 0.0047
>> 4441 default_idle 69.3906
>> 787 handle_IRQ_event 7.0268
>> 589 ip_packet_match 1.6733
>> 433 ipt_do_table 0.6294
>> 106 eth_type_trans 0.5521
>> [...]
>
> What do you want to show with this profile? Most of the time is spent in
> the idle loop and in IRQ handling and only a few percent in
> ip_packet_match and ipt_do_table, so we don't quite get how this matches
> your statement above. Could you explain this in a few words?

Confused me too. The system would lock up and start dropping
packets after 125 rules. I.e. it would degrade linearly
as more rules were added. I'm guessing there is a fixed
interrupt overhead that is accounted for
by default_idle? Note the e1000 drivers were
left in the default config, so there could definitely
be some tuning done here.

Note that I changed netfilter slightly to accept promiscuous traffic
(done in ip_rcv()); the packets are then simply dropped after the
rules (match-any in the test case) are traversed.

Pádraig.

Subject: Re: [ANNOUNCE] nf-hipac v0.8 released

Hi Pádraig

You wrote:
> I was testing with 64 byte packets (so around 190Kpps). e100 cards at
> least have a handy mode for continually sending a packet as fast as
> possible. Also you can use more than one interface.

Yes, that's true. When we did the performance tests we had in mind to
compare the worst-case behaviour of nf-hipac and iptables.
Therefore we designed a ruleset which models the worst case for both
iptables and nf-hipac. Of course, the test environment could have been
tuned a lot more, e.g. udp instead of tcp, FORWARD chain instead of
INPUT, tuned network parameters, more interfaces, etc.

Anyway, we prefer independent, more sophisticated performance tests.

>>> # ./readprofile -m /boot/System.map | sort -nr | head -30
>>> 6779 total 0.0047
>>> 4441 default_idle 69.3906
>>> 787 handle_IRQ_event 7.0268
>>> 589 ip_packet_match 1.6733
>>> 433 ipt_do_table 0.6294
>>> 106 eth_type_trans 0.5521
>>> [...]
>
> Confused me too. The system would lock up and start dropping
> packets after 125 rules. I.e. it would degrade linearly
> as more rules were added. I'm guessing there is a fixed
> interrupt overhead that is accounted for
> by default_idle?

Hm, but once the system starts to drop packets, ip_packet_match and
ipt_do_table start to dominate the profile, don't they?


Regards,

+-----------------------+----------------------+
| Michael Bellion | Thomas Heinz |
| <[email protected]> | <[email protected]> |
+-----------------------+----------------------+