Francesco and Lorenzo,
In my latest test, I cut the RX and TX queue depths to 32 and ran my usual
stress test. This time I logged every increase in the depth of the txstatus
queue, and every time the queue depth reached 16. The results were as
follows:
Feb 24 19:27:08 mtech kernel: b43: Max Queue depth is now 2
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 3
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 4
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 5
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 6
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 7
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 8
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 9
Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 10
Feb 24 19:28:46 mtech kernel: b43: Max Queue depth is now 11
Feb 25 00:07:19 mtech kernel: b43: Max Queue depth is now 12
Feb 25 00:07:19 mtech kernel: b43: Max Queue depth is now 13
Feb 25 00:09:12 mtech kernel: b43: Max Queue depth is now 14
Feb 25 00:09:50 mtech kernel: b43: Max Queue depth is now 15
Feb 25 00:09:50 mtech kernel: b43: Max Queue depth is now 16
Feb 25 00:09:50 mtech kernel: b43: Max queue depth at 16
Feb 25 00:09:54 mtech kernel: b43: Max queue depth at 16
Feb 25 00:10:55 mtech kernel: b43: Max queue depth at 16
Feb 25 00:26:24 mtech kernel: eth1: No ProbeResp from current AP
00:1a:70:46:ba:b1 - assume out of range
The 00:26:24 event was the interface going offline. That also happens with the
proprietary firmware, so I think the problem is with the driver rather than the
firmware. In any case, I got no out-of-order cookies and no poisoned skbs.
There were no dropped packets or PHY transmission errors. The above list is all
of the b43 messages in the log.
I have reset the RX queue depth to the original value of 64 and have started
another test. I would expect this change to have no effect, but I'll keep you
informed.
Larry
On Wednesday 25 February 2009 17:00:31 Larry Finger wrote:
> Francesco and Lorenzo,
>
> In my latest test, I cut the RX and TX queue depths to 32 and ran my usual
> stress test. This time I logged whenever there was an increase in the depth of
> the txstatus queue, and whenever the queue depth was 16. The results were as
> follows:
>
> Feb 24 19:27:08 mtech kernel: b43: Max Queue depth is now 2
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 3
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 4
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 5
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 6
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 7
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 8
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 9
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 10
> Feb 24 19:28:46 mtech kernel: b43: Max Queue depth is now 11
> Feb 25 00:07:19 mtech kernel: b43: Max Queue depth is now 12
> Feb 25 00:07:19 mtech kernel: b43: Max Queue depth is now 13
> Feb 25 00:09:12 mtech kernel: b43: Max Queue depth is now 14
> Feb 25 00:09:50 mtech kernel: b43: Max Queue depth is now 15
> Feb 25 00:09:50 mtech kernel: b43: Max Queue depth is now 16
> Feb 25 00:09:50 mtech kernel: b43: Max queue depth at 16
> Feb 25 00:09:54 mtech kernel: b43: Max queue depth at 16
> Feb 25 00:10:55 mtech kernel: b43: Max queue depth at 16
> Feb 25 00:26:24 mtech kernel: eth1: No ProbeResp from current AP
> 00:1a:70:46:ba:b1 - assume out of range
>
> The 00:26:24 event was the interface going offline. That happens with
> proprietary firmware, thus I think that problem is with the driver, rather than
> the firmware.
Are you sure it's a problem at all and not just simply failure to deliver
certain important management frames due to _extreme_ queue pressure?
Especially the tight RX queue will have effects like dropped frames.
--
Greetings, Michael.
On Wednesday 25 February 2009 19:35:15 Larry Finger wrote:
> Michael Buesch wrote:
> > On Wednesday 25 February 2009 17:00:31 Larry Finger wrote:
> >>
> >> The 00:26:24 event was the interface going offline. That happens with
> >> proprietary firmware, thus I think that problem is with the driver, rather than
> >> the firmware.
> >
> > Are you sure it's a problem at all and not just simply failure to deliver
> > certain important management frames due to _extreme_ queue pressure?
> > Especially the tight RX queue will have effects like dropped frames.
>
> No, I don't know. If I get the same problem with a queue depth of 64, is there
> any problem with increasing it to 128 for testing?
The RX queue length can have any size.
The TX queue length must be even (length % 2 == 0), but a BUILD_BUG_ON will
trigger at compile time if it isn't.
--
Greetings, Michael.
On Feb 25, 2009, at 5:00 PM, Larry Finger wrote:
> Francesco and Lorenzo,
>
> In my latest test, I cut the RX and TX queue depths to 32 and ran my
> usual
> stress test. This time I logged whenever there was an increase in
> the depth of
> the txstatus queue, and whenever the queue depth was 16. The results
> were as
> follows:
>
> Feb 24 19:27:08 mtech kernel: b43: Max Queue depth is now 2
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 3
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 4
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 5
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 6
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 7
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 8
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 9
> Feb 24 19:27:12 mtech kernel: b43: Max Queue depth is now 10
> Feb 24 19:28:46 mtech kernel: b43: Max Queue depth is now 11
> Feb 25 00:07:19 mtech kernel: b43: Max Queue depth is now 12
> Feb 25 00:07:19 mtech kernel: b43: Max Queue depth is now 13
> Feb 25 00:09:12 mtech kernel: b43: Max Queue depth is now 14
> Feb 25 00:09:50 mtech kernel: b43: Max Queue depth is now 15
> Feb 25 00:09:50 mtech kernel: b43: Max Queue depth is now 16
> Feb 25 00:09:50 mtech kernel: b43: Max queue depth at 16
> Feb 25 00:09:54 mtech kernel: b43: Max queue depth at 16
> Feb 25 00:10:55 mtech kernel: b43: Max queue depth at 16
> Feb 25 00:26:24 mtech kernel: eth1: No ProbeResp from current AP
> 00:1a:70:46:ba:b1 - assume out of range
>
> The 00:26:24 event was the interface going offline. That happens with
> proprietary firmware, thus I think that problem is with the driver,
> rather than
> the firmware. In any case, I got no out-of-order cookies or poisoned
> skb's.
> There were no dropped packets nor PHY transmission errors. The above
> list is all
> the b43 messages in the log.
Larry, (pardon me if you answered this question) have you tried to see
what happens with the halved queue when using the open firmware?
Cheers,
-FG
> I have reset the RX queue depth to the original value of 64 and have
> started
> another test. I would expect this change to have no effect, but I'll
> keep you
> informed.
>
> Larry
-------
Francesco Gringoli, PhD - Assistant Professor
Dept. of Electrical Engineering for Automation
University of Brescia
via Branze, 38
25123 Brescia
ITALY
Ph: ++39.030.3715843
FAX: ++39.030.380014
WWW: http://www.ing.unibs.it/~gringoli
Francesco Gringoli wrote:
> Larry, (pardon me if you answered this question) have you tried to see
> what happens with the halved queue when using the open firmware?
The above test was with open firmware V5.1. Sorry if I didn't make that clear.
The current test has been running for 9 hours with an RX queue depth of 64 and a
TX queue depth of 32. The largest depth of the TX status queue so far has been 12.
FWIW, the longest the test has previously run without crashing is 7 hours.
Larry
Michael Buesch wrote:
> On Wednesday 25 February 2009 17:00:31 Larry Finger wrote:
>>
>> The 00:26:24 event was the interface going offline. That happens with
>> proprietary firmware, thus I think that problem is with the driver, rather than
>> the firmware.
>
> Are you sure it's a problem at all and not just simply failure to deliver
> certain important management frames due to _extreme_ queue pressure?
> Especially the tight RX queue will have effects like dropped frames.
No, I don't know. If I get the same problem with a queue depth of 64, is there
any problem with increasing it to 128 for testing?
Larry