2004-03-22 17:07:52

by uaca

Subject: RFC/Doc/BUGs: CONFIG_PACKET_MMAP


Hi all

Here is an updated and extended version of the
CONFIG_PACKET_MMAP documentation I posted previously

Please read it and send me your comments to

[email protected]


I have found two bugs

+ the first is a minor issue: it allows the
user to waste some memory. It is marked as
check (1) in the documentation.

+ the second is a memory leak when two or more calls
to setsockopt with PACKET_RX_RING are made on the same
socket: the pg_vec and io_vec vectors are not deallocated.

Also, I have proposed a modification which would allow
increasing the number of frames that can be used in the
buffer; there is a limit here, and things are worse on 64 bit
architectures. I'm very interested in hearing comments about
it, so I'm sending a CC: to Alexey Kuznetsov, who I think is the author
of PACKET_MMAP.

The proposed modification is to kill the io_vec vector and just
use pg_vec, inferring each page's position with pointer arithmetic.



I would also like to ask you about sk_buff's ip_summed field. This field is
used for both RX and TX checksum offloading, and not only for IP packets but
also for TCP/UDP (typhoon, ixgb). I think the ip_summed description
is misleading and should be changed:

* @ip_summed: Driver fed us an IP checksum

And hence so is my description of the TP_STATUS_CSUMNOTREADY flag in
PACKET_MMAP.

IMHO, an updated description should also mention what this flag
means with respect to upper layer protocols.



cheers

Ulisses

Debian GNU/Linux: a dream come true
-----------------------------------------------------------------------------
"Computers are useless. They can only give answers." Pablo Picasso

Humans are slow, inaccurate, and brilliant.
Computers are fast, accurate, and dumb.
Together they are unbeatable

--->   Visit http://www.valux.org/ to learn about the   <---
--->   Asociación Valenciana de Usuarios de Linux       <---

--------------------------------------------------------------------------------
+ ABSTRACT
--------------------------------------------------------------------------------

This file documents the CONFIG_PACKET_MMAP option available with the PACKET
socket interface on 2.4 and 2.6 kernels. This type of socket is used to
capture network traffic with utilities like tcpdump or any other tool that
uses the libpcap library.

You can find the latest version of this document at

http://pusa.uv.es/~ulisses/packet_mmap/

Maybe this file is too verbose; please send me your comments to

Ulises Alonso Camaró <[email protected]>

-------------------------------------------------------------------------------
+ Why use PACKET_MMAP
--------------------------------------------------------------------------------

In Linux 2.4/2.6, if PACKET_MMAP is not enabled, the capture process is very
inefficient. It uses very limited buffers and requires one system call
to capture each packet; it requires two if you want to get the packet's
timestamp (as libpcap always does).

On the other hand, PACKET_MMAP is very efficient. PACKET_MMAP provides a size
configurable circular buffer mapped into user space. This way, reading packets
just requires waiting for them; most of the time there is no need to issue a
single system call. Using a buffer shared between the kernel and the user
also has the benefit of minimizing packet copies.

Using PACKET_MMAP improves the performance of the capture process, but it
isn't everything. If you are capturing at high speeds (relative to the CPU
speed), you should check whether the device driver of your network interface
card supports some sort of interrupt load mitigation or (even better) NAPI,
and also make sure it is enabled.

--------------------------------------------------------------------------------
+ How to use CONFIG_PACKET_MMAP
--------------------------------------------------------------------------------

From the user standpoint, you should use the higher level libpcap library, which
is a de facto standard, portable across nearly all operating systems
including Win32.

That said, at the time of this writing the official libpcap 0.8.1 release does
not include support for PACKET_MMAP, and neither, probably, does the libpcap
included in your distribution.

I'm aware of two implementations of PACKET_MMAP in libpcap:

http://pusa.uv.es/~ulisses/packet_mmap/ (by Simon Patarin, based on libpcap 0.6.2)
http://public.lanl.gov/cpw/ (by Phil Wood, based on the latest libpcap)

The rest of this document is intended for people who want to understand
the low level details or want to improve libpcap by including PACKET_MMAP
support.

--------------------------------------------------------------------------------
+ How to use CONFIG_PACKET_MMAP directly
--------------------------------------------------------------------------------

From the system call standpoint, the use of PACKET_MMAP involves
the following steps:


[setup]     socket() -------> creation of the capture socket
            setsockopt() ---> allocation of the circular buffer (ring)
            mmap() ---------> mapping of the allocated buffer into the
                              user process

[capture]   poll() ---------> wait for incoming packets

[shutdown]  close() --------> destruction of the capture socket and
                              deallocation of all associated
                              resources.


Socket creation and destruction is straightforward, and is done
the same way with or without PACKET_MMAP:

    int fd;

    fd = socket(PF_PACKET, mode, htons(ETH_P_ALL));

where mode is SOCK_RAW for the raw interface, where link level
information can be captured, or SOCK_DGRAM for the cooked
interface, where link level information capture is not
supported and a link level pseudo-header is provided
by the kernel.

The destruction of the socket and all associated resources
is done by a simple call to close(fd).

Next I will describe the PACKET_MMAP settings and their constraints,
the mapping of the circular buffer into the user process, and
the use of this buffer.

--------------------------------------------------------------------------------
+ PACKET_MMAP settings
--------------------------------------------------------------------------------


Setting up PACKET_MMAP from user level code is done with a call like

    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, (void *) &req, sizeof(req))

The most significant argument in the previous call is the req parameter; this
parameter must have the following structure:

    struct tpacket_req
    {
            unsigned int    tp_block_size;  /* Minimal size of contiguous block */
            unsigned int    tp_block_nr;    /* Number of blocks */
            unsigned int    tp_frame_size;  /* Size of frame */
            unsigned int    tp_frame_nr;    /* Total number of frames */
    };

This structure is defined in include/linux/if_packet.h and establishes a
circular buffer (ring) of unswappable memory mapped into the capture process.
Being mapped in the capture process allows reading the captured frames and
related meta-information, like timestamps, without requiring a system call.

Captured frames are grouped in blocks. Each block is a physically contiguous
region of memory and holds tp_block_size/tp_frame_size frames. The total number
of blocks is tp_block_nr. Note that tp_frame_nr is a redundant parameter because

frames_per_block = tp_block_size/tp_frame_size

Indeed, packet_set_ring checks that the following condition is true:

frames_per_block * tp_block_nr == tp_frame_nr


Let's see an example with the following values:

tp_block_size= 4096
tp_frame_size= 2048
tp_block_nr = 4
tp_frame_nr = 8

we will get the following buffer structure:

block #1 block #2 block #3 block #4
+---------+---------+ +---------+---------+ +---------+---------+ +---------+---------+
| frame 1 | frame 2 | | frame 3 | frame 4 | | frame 5 | frame 6 | | frame 7 | frame 8 |
+---------+---------+ +---------+---------+ +---------+---------+ +---------+---------+



--------------------------------------------------------------------------------
+ PACKET_MMAP setting constraints
--------------------------------------------------------------------------------

Block size limit
------------------

As stated earlier, each block is a contiguous physical region of memory. These
memory regions are allocated with calls to the __get_free_pages() function. As
the name indicates, this function allocates pages of memory; it allocates a
power-of-two number of pages, so region sizes are 4096, 8192, 16384 bytes, and
so on. The maximum size of a region allocated by __get_free_pages is determined
by the MAX_ORDER macro. More precisely, the limit can be calculated as:

PAGE_SIZE << MAX_ORDER

In a i386 architecture PAGE_SIZE is 4096 bytes
In a 2.4/i386 kernel MAX_ORDER is 10
In a 2.6/i386 kernel MAX_ORDER is 11

So __get_free_pages can allocate as much as 4 MB or 8 MB in a 2.4 or 2.6
kernel respectively, on the i386 architecture.

Block number limit
--------------------

To understand the constraints of the PACKET_MMAP settings, we have to look at
two additional data structures used to support the ring. One of these
structures limits the number of blocks, as we will see next; the other limits
the total number of frames.

There is a vector that maintains a pointer to each block; this vector is
called pg_vec, which stands for page vector. The following figure represents
the pg_vec that is used with the buffer shown before.

+---+---+---+---+
| x | x | x | x |
+---+---+---+---+
| | | |
| | | v
| | v block #4
| v block #3
v block #2
block #1


The number of blocks that can be allocated is determined by the size of
pg_vec. This vector is allocated with a call to the kmalloc function.

kmalloc allocates physically contiguous memory from a pool of
pre-determined sizes. This pool of memory is maintained by the slab
allocator, which is ultimately responsible for doing the allocation and
hence imposes the maximum amount of memory that kmalloc can allocate.

In a 2.4/2.6 kernel on the i386 architecture, the limit is 131072 bytes. This
limit can be checked in the "size-<bytes>" entries of /proc/slabinfo

In a 32 bit architecture, pointers are 4 bytes long, so the total number of
pointers to blocks (and hence the number of blocks) is

131072/4 = 32768 blocks


Total Frame number limit
--------------------------

There is another vector of pointers, which holds references to each frame
in the buffer; this vector is called io_vec. This vector is also allocated
with kmalloc, so the maximum number of frames is constrained in the same way
as the number of blocks. Indeed, the limit on the size of the buffer is
imposed by the io_vec vector, because there are at least as many frames as
blocks.

If we continue with the previous example the resulting io_vec is:

+---+---+---+---+---+---+---+---+
| y | y | y | y | y | y | y | y |
+---+---+---+---+---+---+---+---+
| | | | | | | |
| | | | | | | v
| | | | | | v frame #8 --- in block #4
| | | | | v frame #7 ------- in block #4
| | | | v frame #6 ----------- in block #3
| | | v frame #5 --------------- in block #3
| | v frame #4 ------------------- in block #2
| v frame #3 ----------------------- in block #2
v frame #2 --------------------------- in block #1
frame #1 ------------------------------- in block #1


If you check the source code you will see that what I draw here as a frame
is not only the link level frame. At the beginning of each frame there is a
header, called struct tpacket_hdr, used in PACKET_MMAP to hold the link level
frame's meta-information, like the timestamp. So what we draw here as a frame
is really the following (from include/linux/if_packet.h):

/*
Frame structure:

- Start. Frame must be aligned to TPACKET_ALIGNMENT=16
- struct tpacket_hdr
- pad to TPACKET_ALIGNMENT=16
- struct sockaddr_ll
- Gap, chosen so that packet data (Start+tp_net) aligns to TPACKET_ALIGNMENT=16
- Start+tp_mac: [ Optional MAC header ]
- Start+tp_net: Packet data, aligned to TPACKET_ALIGNMENT=16.
- Pad to align to TPACKET_ALIGNMENT=16
*/

Other constraints
-------------------

The following conditions are checked in packet_set_ring:

    tp_block_size must be a multiple of PAGE_SIZE (1)
    tp_frame_size must be greater than TPACKET_HDRLEN (obvious)
    tp_frame_size must be a multiple of TPACKET_ALIGNMENT
    tp_frame_nr   must be exactly frames_per_block*tp_block_nr

I believe that check (1) should be changed to also check that
tp_block_size is a power of two.

I suppose that the alignment to 16 byte boundaries is meant to fit
cache lines better.


--------------------------------------------------------------------------------
+ Mapping and use of the circular buffer (ring)
--------------------------------------------------------------------------------

The mapping of the buffer into the user process is done with the conventional
mmap function. Even though the circular buffer is composed of several
physically discontiguous blocks of memory, they appear contiguous in user
space, so just one call to mmap is needed:

mmap(0, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

Once mapped, each frame (as detailed earlier) will be spaced tp_frame_size
bytes apart, as expected. At the beginning of each frame there is a status
field (see struct tpacket_hdr). If this field is 0, the frame is ready to be
used by the kernel; if not, there is a frame the user can read,
and the following flags apply:

from include/linux/if_packet.h

#define TP_STATUS_COPY 2
#define TP_STATUS_LOSING 4
#define TP_STATUS_CSUMNOTREADY 8


TP_STATUS_COPY         : This flag indicates that the frame (and associated
                         meta-information) has been truncated because it's
                         larger than tp_frame_size. This packet can be read
                         entirely with recvfrom().

                         In order to make this work it must be enabled
                         previously with setsockopt() and the
                         PACKET_COPY_THRESH option.

                         The number of frames that can be buffered to be
                         read with recvfrom is limited like in a normal
                         socket. See the SO_RCVBUF option in the socket(7)
                         man page.

TP_STATUS_LOSING       : indicates that there were packet drops since the
                         last time statistics were checked with getsockopt()
                         and the PACKET_STATISTICS option.

TP_STATUS_CSUMNOTREADY : currently used for outgoing IP packets whose
                         checksum will be computed in hardware, so while
                         reading the packet we should not try to verify the
                         checksum.

For convenience there are also the following defines:

#define TP_STATUS_KERNEL 0
#define TP_STATUS_USER 1

The kernel initializes all frames to TP_STATUS_KERNEL. When the kernel
receives a packet it puts it in the buffer and updates the status with
at least the TP_STATUS_USER flag. Then the user can read the packet;
once the packet is read, the user must zero the status field so the
kernel can use that frame buffer again.

The user can use poll (any other variant should apply too) to check if new
packets are in the ring. There is no race condition in first checking the
status value and then polling for frames, e.g.:

    struct pollfd pfd;

    pfd.fd = fd;
    pfd.revents = 0;
    pfd.events = POLLIN|POLLRDNORM|POLLERR;

    if (status == TP_STATUS_KERNEL)
            retval = poll(&pfd, 1, timeout);

--------------------------------------------------------------------------------
+ Details and discussion
--------------------------------------------------------------------------------

None of the memory allocations done in packet_set_ring are freed until the
socket is closed. The memory allocations are done with GFP_KERNEL priority;
this basically means that the allocation can wait and swap out other
processes' memory in order to allocate the necessary memory, so normally the
limits can be reached.

While reading packet_set_ring I asked myself some questions:

+ Why pointers for both blocks and frames?

The io_vec and pg_vec pointers are assigned to a struct packet_opt,
which is held in the packet socket and not freed until the socket is
closed. In struct packet_opt, io_vec is named iovec.

Having frame pointers gives fast access to each frame when
needed, and this is fine because it happens very often. Block pointers
are used only in the setup/shutdown of the PACKET_MMAP infrastructure,
mostly in packet_set_ring and packet_mmap. It is possible to infer a block's
position by taking into account the number of frames each block has.
PACKET_MMAP's designers seem to have thought it was worth having pg_vec
to make the code more readable. More on this later.


+ Is the maximum number of frames really a limiting factor?

Consider the following scenario:

On the Internet, the average packet size, including the link layer,
is around 575 bytes.

On the i386 architecture, PACKET_MMAP can hold up to 32768 frames.

If we want to monitor a link at a rate of 1 Gb/s, PACKET_MMAP
will only buffer as much as 0.15 seconds ((575*8*32768)/10^9).

With multi-Gigabit interfaces going mainstream, this limit will
have to go away.

Please note that a 64 bit machine makes things worse with respect to
pg_vec and io_vec, because they can hold half as many pointers as on a
32 bit machine: pointers are double the size and the kmalloc limit
doesn't increase.

kmalloc limits (128 KiB by default) are defined in
include/linux/kmalloc_sizes.h; the limit is raised if the CPU doesn't
have an MMU (CONFIG_MMU undefined) and can be raised further with
CONFIG_LARGE_ALLOCS. It is straightforward to modify
kmalloc_sizes.h to increase the limits, but you also have to
modify MAX_OBJ_ORDER and MAX_GFP_ORDER in slab.c.

Another possibility would be to change the allocation of io_vec
to use vmalloc instead of kmalloc.

These two options are hacks and should be avoided. A different
approach would be to convert pg_vec and io_vec into a two-level
structure.

+---+---+---+---+
| x | x | x | x |
+---+---+---+---+
| | | |
| | | | +---+---+---+---+
| | | |-> | y | y | y | y |
| | | +---+---+---+---+
| | | +---+---+---+---+
| | |-> | y | y | y | y |
| | +---+---+---+---+
| | +---+---+---+---+
| |-> | y | y | y | y |
| +---+---+---+---+
| +---+---+---+---+
|-> | y | y | y | y |
+---+---+---+---+


In this case the setup parameters should minimize the number of blocks.

Another option would be to just use pg_vec, avoid the use of io_vec
entirely, and infer each frame's position using pointer arithmetic. IMHO
this is preferable.



>>> EOF


2004-03-27 18:43:46

by uaca

Subject: [PATCH] PACKET_MMAP limit removal


Please apply, tested.

This patch fixes the following BUG I posted previously:


On Mon, Mar 22, 2004 at 06:05:20PM +0100, [email protected] wrote:
>
[...]
> + another is a memory leakage when two or more calls
> to setsockopt with PACKET_RX_RING is done on the same
> socket. pg_vec and io_vec vectors are not deallocated.


This patch also removes the current limit on the number of frames
PACKET_MMAP can hold. Currently the buffer can hold only
0.15 seconds of traffic at 1 Gb/s on a 32 bit architecture, and half
this amount on a 64 bit machine.

With this patch, PACKET_MMAP requires __less memory__
to hold the buffer.

I have rearranged the most used members of struct packet_opt so they
fit in a single cache line.

Any comment would be greatly appreciated

Ulisses


--- linux-2.6.4/net/packet/af_packet.c 2004-02-18 04:58:40.000000000 +0100
+++ linux-2.6.4-uac/net/packet/af_packet.c 2004-03-27 18:54:07.000000000 +0100
@@ -27,20 +27,22 @@
* interrupt locking and some slightly
* dubious gcc output. Can you read
* compiler: it said _VOLATILE_
* Richard Kooijman : Timestamp fixes.
* Alan Cox : New buffers. Use sk->mac.raw.
* Alan Cox : sendmsg/recvmsg support.
* Alan Cox : Protocol setting support
* Alexey Kuznetsov : Untied from IPv4 stack.
* Cyrus Durgin : Fixed kerneld for kmod.
* Michal Ostrowski : Module initialization cleanup.
+ * Ulises Alonso : Frame number limit removal and
+ * packet_set_ring memory leak.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
*/

#include <linux/config.h>
#include <linux/types.h>
@@ -161,44 +163,61 @@
};
#endif
#ifdef CONFIG_PACKET_MMAP
static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing);
#endif

static void packet_flush_mclist(struct sock *sk);

struct packet_opt
{
+ struct tpacket_stats stats;
+#ifdef CONFIG_PACKET_MMAP
+ unsigned long *pg_vec;
+ unsigned int head;
+ unsigned int frames_per_block;
+ unsigned int frame_size;
+ unsigned int frame_max;
+ int copy_thresh;
+#endif
struct packet_type prot_hook;
spinlock_t bind_lock;
char running; /* prot_hook is attached*/
int ifindex; /* bound device */
unsigned short num;
- struct tpacket_stats stats;
#ifdef CONFIG_PACKET_MULTICAST
struct packet_mclist *mclist;
#endif
#ifdef CONFIG_PACKET_MMAP
atomic_t mapped;
- unsigned long *pg_vec;
- unsigned int pg_vec_order;
+ unsigned int pg_vec_order;
unsigned int pg_vec_pages;
unsigned int pg_vec_len;
-
- struct tpacket_hdr **iovec;
- unsigned int frame_size;
- unsigned int iovmax;
- unsigned int head;
- int copy_thresh;
#endif
};

+#ifdef CONFIG_PACKET_MMAP
+
+static inline unsigned long packet_lookup_frame(struct packet_opt *po, unsigned int position)
+{
+ unsigned int pg_vec_pos, frame_offset;
+ unsigned long frame;
+
+ pg_vec_pos = position / po->frames_per_block;
+ frame_offset = position % po->frames_per_block;
+
+ frame = (unsigned long) (po->pg_vec[pg_vec_pos] + (frame_offset * po->frame_size));
+
+ return frame;
+}
+#endif
+
#define pkt_sk(__sk) ((struct packet_opt *)(__sk)->sk_protinfo)

void packet_sock_destruct(struct sock *sk)
{
BUG_TRAP(!atomic_read(&sk->sk_rmem_alloc));
BUG_TRAP(!atomic_read(&sk->sk_wmem_alloc));

if (!sock_flag(sk, SOCK_DEAD)) {
printk("Attempt to release alive packet socket: %p\n", sk);
return;
@@ -579,25 +598,25 @@
skb_set_owner_r(copy_skb, sk);
}
snaplen = po->frame_size - macoff;
if ((int)snaplen < 0)
snaplen = 0;
}
if (snaplen > skb->len-skb->data_len)
snaplen = skb->len-skb->data_len;

spin_lock(&sk->sk_receive_queue.lock);
- h = po->iovec[po->head];
-
+ h = (struct tpacket_hdr *)packet_lookup_frame(po, po->head);
+
if (h->tp_status)
goto ring_is_full;
- po->head = po->head != po->iovmax ? po->head+1 : 0;
+ po->head = po->head != po->frame_max ? po->head+1 : 0;
po->stats.tp_packets++;
if (copy_skb) {
status |= TP_STATUS_COPY;
__skb_queue_tail(&sk->sk_receive_queue, copy_skb);
}
if (!po->stats.tp_drops)
status &= ~TP_STATUS_LOSING;
spin_unlock(&sk->sk_receive_queue.lock);

memcpy((u8*)h + macoff, skb->data, snaplen);
@@ -1478,24 +1497,27 @@
#define packet_poll datagram_poll
#else

unsigned int packet_poll(struct file * file, struct socket *sock, poll_table *wait)
{
struct sock *sk = sock->sk;
struct packet_opt *po = pkt_sk(sk);
unsigned int mask = datagram_poll(file, sock, wait);

spin_lock_bh(&sk->sk_receive_queue.lock);
- if (po->iovec) {
- unsigned last = po->head ? po->head-1 : po->iovmax;
+ if (po->pg_vec) {
+ unsigned last = po->head ? po->head-1 : po->frame_max;
+ struct tpacket_hdr *h;

- if (po->iovec[last]->tp_status)
+ h = (struct tpacket_hdr *)packet_lookup_frame(po, last);
+
+ if (h->tp_status)
mask |= POLLIN | POLLRDNORM;
}
spin_unlock_bh(&sk->sk_receive_queue.lock);
return mask;
}


/* Dirty? Well, I still did not learn better way to account
* for user mmaps.
*/
@@ -1541,42 +1563,45 @@
free_pages(pg_vec[i], order);
}
}
kfree(pg_vec);
}


static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing)
{
unsigned long *pg_vec = NULL;
- struct tpacket_hdr **io_vec = NULL;
struct packet_opt *po = pkt_sk(sk);
int was_running, num, order = 0;
int err = 0;
-
+
if (req->tp_block_nr) {
int i, l;
- int frames_per_block;

/* Sanity tests and some calculations */
+
+ if (po->pg_vec)
+ return -EBUSY;
+
if ((int)req->tp_block_size <= 0)
return -EINVAL;
if (req->tp_block_size&(PAGE_SIZE-1))
return -EINVAL;
if (req->tp_frame_size < TPACKET_HDRLEN)
return -EINVAL;
if (req->tp_frame_size&(TPACKET_ALIGNMENT-1))
return -EINVAL;
- frames_per_block = req->tp_block_size/req->tp_frame_size;
- if (frames_per_block <= 0)
+
+ po->frames_per_block = req->tp_block_size/req->tp_frame_size;
+ if (po->frames_per_block <= 0)
return -EINVAL;
- if (frames_per_block*req->tp_block_nr != req->tp_frame_nr)
+ if (po->frames_per_block*req->tp_block_nr != req->tp_frame_nr)
return -EINVAL;
/* OK! */

/* Allocate page vector */
while ((PAGE_SIZE<<order) < req->tp_block_size)
order++;

err = -ENOMEM;

pg_vec = kmalloc(req->tp_block_nr*sizeof(unsigned long*), GFP_KERNEL);
@@ -1589,34 +1614,30 @@
pg_vec[i] = __get_free_pages(GFP_KERNEL, order);
if (!pg_vec[i])
goto out_free_pgvec;

pend = virt_to_page(pg_vec[i] + (PAGE_SIZE << order) - 1);
for (page = virt_to_page(pg_vec[i]); page <= pend; page++)
SetPageReserved(page);
}
/* Page vector is allocated */

- /* Draw frames */
- io_vec = kmalloc(req->tp_frame_nr*sizeof(struct tpacket_hdr*), GFP_KERNEL);
- if (io_vec == NULL)
- goto out_free_pgvec;
- memset(io_vec, 0, req->tp_frame_nr*sizeof(struct tpacket_hdr*));
-
l = 0;
for (i=0; i<req->tp_block_nr; i++) {
unsigned long ptr = pg_vec[i];
+ struct tpacket_hdr *header;
int k;

- for (k=0; k<frames_per_block; k++, l++) {
- io_vec[l] = (struct tpacket_hdr*)ptr;
- io_vec[l]->tp_status = TP_STATUS_KERNEL;
+ for (k=0; k<po->frames_per_block; k++) {
+
+ header = (struct tpacket_hdr*)ptr;
+ header->tp_status = TP_STATUS_KERNEL;
ptr += req->tp_frame_size;
}
}
/* Done */
} else {
if (req->tp_frame_nr)
return -EINVAL;
}

lock_sock(sk);
@@ -1635,51 +1656,47 @@

synchronize_net();

err = -EBUSY;
if (closing || atomic_read(&po->mapped) == 0) {
err = 0;
#define XC(a, b) ({ __typeof__ ((a)) __t; __t = (a); (a) = (b); __t; })

spin_lock_bh(&sk->sk_receive_queue.lock);
pg_vec = XC(po->pg_vec, pg_vec);
- io_vec = XC(po->iovec, io_vec);
- po->iovmax = req->tp_frame_nr-1;
+ po->frame_max = req->tp_frame_nr-1;
po->head = 0;
po->frame_size = req->tp_frame_size;
spin_unlock_bh(&sk->sk_receive_queue.lock);

order = XC(po->pg_vec_order, order);
req->tp_block_nr = XC(po->pg_vec_len, req->tp_block_nr);

po->pg_vec_pages = req->tp_block_size/PAGE_SIZE;
- po->prot_hook.func = po->iovec ? tpacket_rcv : packet_rcv;
+ po->prot_hook.func = po->pg_vec ? tpacket_rcv : packet_rcv;
skb_queue_purge(&sk->sk_receive_queue);
#undef XC
if (atomic_read(&po->mapped))
printk(KERN_DEBUG "packet_mmap: vma is busy: %d\n", atomic_read(&po->mapped));
}

spin_lock(&po->bind_lock);
if (was_running && !po->running) {
sock_hold(sk);
po->running = 1;
po->num = num;
dev_add_pack(&po->prot_hook);
}
spin_unlock(&po->bind_lock);

release_sock(sk);

- if (io_vec)
- kfree(io_vec);
-
out_free_pgvec:
if (pg_vec)
free_pg_vec(pg_vec, order, req->tp_block_nr);
out:
return err;
}

static int packet_mmap(struct file *file, struct socket *sock, struct vm_area_struct *vma)
{
struct sock *sk = sock->sk;

2004-03-28 09:52:18

by David Miller

Subject: Re: [PATCH] PACKET_MMAP limit removal

On Sat, 27 Mar 2004 19:42:00 +0100
[email protected] wrote:

> This patch also it removes the current limit on the number of frames
> PACKET_MMAP can hold. Currently the buffer can hold only
> 0.15 seconds at a 1 Gb/s in a 32 bit architecture, half
> this amount in a 64 bit machine.
>
> With this patch, PACKET_MMAP requires __less memory__
> to hold the buffer.
>
> I have rearranged the most used members of struct packet_opt so they
> fit in a single cache line.
>
> Any comment would be greatly appreciated

You're basically trading memory overhead for computational overhead.
And in this case I think that's fine.

I think your patch is fine and I'm going to apply it.
Can you cook up a 2.4.x version of this for me?

Thanks.