Hi Stephen,

2.6.21-rc1 is the first kernel where my SysKonnect Yukon 2 hardware
with the sky2 v1.13 driver is stable under moderate load. Previously,
pushing a few GB of data over my GigE network with NFSv4 would quickly
cause transmit timeouts, but it's fine now.

I am still observing a performance problem - it feels like a wmb() or
some buffer flushing is missing somewhere. Disabling processor clock
scaling reduces the problem a bit, but does not eliminate it.
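
For reference, this is the kind of ordering I mean - a generic sketch
with invented names, not sky2's actual code. The descriptor writes have
to reach memory before the doorbell write that tells the NIC to fetch
them:

#include <linux/types.h>
#include <linux/io.h>
#include <asm/byteorder.h>
#include <asm/system.h>	/* wmb() */

/* Invented descriptor layout, for illustration only. */
struct toy_tx_desc {
	__le64 addr;
	__le16 len;
	__le16 flags;
};

#define TOY_DESC_OWN 0x8000	/* hardware owns this descriptor */

static void toy_nic_xmit(void __iomem *doorbell, struct toy_tx_desc *d,
			 dma_addr_t buf, u16 len, u16 prod)
{
	d->addr  = cpu_to_le64(buf);
	d->len   = cpu_to_le16(len);
	d->flags = cpu_to_le16(TOY_DESC_OWN);

	/* Make the descriptor contents visible to the device before
	 * the doorbell write triggers a fetch of the ring. */
	wmb();

	writel(prod, doorbell);
}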

What is your preferred way of checking performance? I think that the
TCP send window can grow enough even if ACKs are delayed due to this
problem, such that TCP does not immediately demonstrate the issue. I
could restrict the window scaling factor, so the transfer rate would be
bound by the data->ACK round-trip latency, which /should/ be low, but
I've been observing it higher. Maybe I'll try this.
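
One way to do that on the receiving end, short of disabling scaling
system-wide with the net.ipv4.tcp_window_scaling sysctl, is to cap
SO_RCVBUF before the handshake - the scale factor is fixed in the SYN,
and the advertised window can never outgrow the buffer. A rough sketch,
with placeholder address and port:

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int rcvbuf = 64 * 1024;	/* keeps the advertised window small */
	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port   = htons(5001),		/* placeholder */
	};

	/* Must happen before connect(): the window scale is negotiated
	 * during the handshake and never changes afterwards. */
	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf,
		       sizeof(rcvbuf)) < 0)
		perror("setsockopt");

	inet_pton(AF_INET, "192.168.0.2", &sa.sin_addr);	/* placeholder */
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("connect");
		return 1;
	}

	/* ... read the bulk data from fd here; throughput is now bound
	 * by window / RTT, so high data->ACK latency shows up directly
	 * as low throughput ... */
	close(fd);
	return 0;
}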

I'll see what I get with iperf in UDP mode also, though IIRC that
reports jitter and packet loss rather than min/max/avg packet latency.
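
If I want per-packet round-trip numbers, a throwaway probe will do -
assuming the far end runs a UDP echo service on port 7, and again with
a placeholder address:

#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port   = htons(7),		/* UDP echo service */
	};
	char buf[1024] = { 0 };
	double min = 1e12, max = 0, sum = 0;
	int i, n = 1000;

	inet_pton(AF_INET, "192.168.0.2", &sa.sin_addr);	/* placeholder */
	connect(fd, (struct sockaddr *)&sa, sizeof(sa));

	for (i = 0; i < n; i++) {
		struct timeval t0, t1;
		double us;

		gettimeofday(&t0, NULL);
		send(fd, buf, sizeof(buf), 0);
		if (recv(fd, buf, sizeof(buf), 0) < 0) {
			perror("recv");
			return 1;
		}
		gettimeofday(&t1, NULL);

		us = (t1.tv_sec - t0.tv_sec) * 1e6 +
		     (t1.tv_usec - t0.tv_usec);
		if (us < min)
			min = us;
		if (us > max)
			max = us;
		sum += us;
	}

	printf("RTT min/avg/max = %.0f/%.0f/%.0f us over %d packets\n",
	       min, sum / n, max, n);
	return 0;
}
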
Thanks for your great work so far though!
Dan
--
Daniel J Blueman

On Tue, 27 Feb 2007 11:31:58 +0000
"Daniel J Blueman" <[email protected]> wrote:
> Hi Stephen,
>
> 2.6.21-rc1 is the first kernel where my SysKonnect Yukon 2 hardware
> with the sky2 v1.13 driver is stable under moderate load. Previously,
> pushing a few GB of data over my GigE network with NFSv4 would quickly
> cause transmit timeouts, but it's fine now.
>
> I am still observing a performance problem - it feels like a wmb() or
> some buffer flushing is missing somewhere. Disabling processor clock
> scaling reduces the problem a bit, but does not eliminate it.

That seems odd; if it were a missing barrier, you would see data
corruption. Are there checksum errors?

You might be seeing hardware flow control - look at the ethtool -S eth0
output. Previously, transmit flow control was broken.

> What is your preferred way of checking performance? I think that the
> TCP send window can grow enough even if ACKs are delayed due to this
> problem, such that TCP does not immediately demonstrate the issue. I
> could restrict the window scaling factor, so the transfer rate would
> be bound by the data->ACK round-trip latency, which /should/ be low,
> but I've been observing it higher. Maybe I'll try this.

iperf is easiest.

> I'll see what I get with iperf in UDP mode also, though IIRC that
> reports jitter and packet loss rather than min/max/avg packet latency.
>
> Thanks for your great work so far though!
> Dan
--
Stephen Hemminger <[email protected]>