Hi,
I have a question regarding the driver design for a network card
attached via SDIO. SDIO transfers are handled by an SDIO host
controller in the system, which can do DMA between host memory
and the device. The end of such a transfer can be signalled by
an interrupt. Each transfer has to be triggered by the CPU, so
there is no DMA descriptor ring as in standard Ethernet cards.
The default mode of operation would be to move each packet in
its own DMA transfer. Such a transfer takes between 10µs and
180µs, depending on the packet length, and each transfer
requires CPU activity to be triggered. My fear is that this
causes high system load due to interrupts.
Question 1:
Do you have measurement data on the cost of an interrupt?
A Google search turned up differing numbers: on an ARM OMAP
platform, Francis David [1] claims a cost of ~200µs per
interrupt, while Dan Tsafrir [2] measured overheads of
5-15µs per interrupt on different Intel architectures.
Question 2:
Do you think it would be worthwhile to have a packet
aggregation mode, in which a number of packets are transferred
in one DMA transfer? That would reduce the number of required
CPU interactions. As only SDIO SDMA transfers [3] are assumed,
it would however involve copying the packets into a contiguous buffer.
Question 3:
Regarding the driver design: do you think that polling the
SDIO host controller from the NAPI poll routine, waiting for
DMA transfers to finish, would be an efficient approach?
Comments are welcome!
Regards
Friedrich
References:
[1] http://choices.cs.uiuc.edu/contextswitching.pdf
[2] http://www.cs.huji.ac.il/~feit/exp/expcs07/papers/140.pdf
[3] http://www.sdcard.org/about/host_controller/simple_spec