A while ago I read about an effort to remove tasklets [1], or at
least to stop using them. I'm unaware of the progress on this front,
but since reading that article I have always tried to evaluate
whether we actually need tasklets in wireless drivers. I have also
wondered whether work done in irq context in other parts of the kernel
can be moved to process context, a curious example being timers. I'll
personally try to use only process context for bottom halves in
future drivers, but I figured this may be a good time to ask how
seriously we should take avoiding tasklets (or wrapping them to avoid
irq context), and whether that is the advised direction. Do we have
general agreement that this is a good step forward? Has anyone
converted a specific driver from irq context to process context and
shown that irq context has no significant advantage where you would
have expected one?
Wireless in particular should IMHO not require tasklets for anything
time sensitive that I can think of, except perhaps changing channels
quickly, which also means processing pending RX frames prior to the
switch. It remains to be seen experimentally whether using a
workqueue for RX processing would negatively affect channel switch
time, but I doubt the impact would be significant. I hope to
test that with ath9k_htc.
What about gigabit or 10 gigabit Ethernet drivers? Do they face any
challenges that would first need to be shown not to cause problems
when bottom halves are processed in process context?
[1] http://lwn.net/Articles/239633/
Luis
On Tuesday 08 September 2009 06:50:41 Michael Buesch wrote:
> On Tuesday 08 September 2009 04:17:34 Steven Rostedt wrote:
> > > Process context is too slow.
> >
> > Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> > plan to present at Linux Plumbers. I've been too distracted by other
> > things, but hopefully I'll have some good numbers to present by then.
>
> I recently converted the b43 driver to threaded interrupt handlers and
> a workqueue based TX mechanism. (My motivation was porting b43 to the SDIO bus that
> needs to sleep, so requires process context).
>
> There are two things that I noticed. When looking at the "idle" percentage in "top"
> it regressed quite a bit when using threaded IRQ handlers. It shows about 8% less
> idle. This is with threaded IRQs patched in, but without WQ TX mechanism. Applying
> the WQ TX mechanism does not show any noticeable effect in "top".
>
> I'm not quite sure where the 8% slowdown on threaded IRQ handlers come from. I'm not
> really certain that it's _really_ a regression and not just a statistics accounting quirk.
> Why does threaded IRQs slow down stuff and threaded TX does not at all? That does not
> make sense at all to me.
>
> I think there's no real reason for process context being slow in general. It's just that
> we have additional context switches. But these are fast on Linux.
>
Ok, I just did another test. I used a workqueue instead of the standard kernel threaded
IRQ infrastructure. Now the slowdown is only about 4% in "top". Maybe that shows room
for improvement in the threaded IRQ implementation...
B43 does call mac80211's "irqsafe" TX-status and RX functions. They schedule
additional tasklets. That is not required, however. Maybe I should remove that stuff and
retry my tests. That should also improve stuff a bit.
And yes, I notice that "top" is actually crap for testing performance issues. :)
--
Greetings, Michael.
On Mon, 2009-09-07 at 21:16 -0700, Luis R. Rodriguez wrote:
> On Mon, Sep 7, 2009 at 7:17 PM, Steven Rostedt<[email protected]> wrote:
> >>
> >> Process context is too slow.
> >
> > Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> > plan to present at Linux Plumbers. I've been too distracted by other
> > things, but hopefully I'll have some good numbers to present by then.
>
> What day in specific was this planned for at Plumbers?
Wednesday, during the networking session.
http://linuxplumbersconf.org/ocw/proposals/53
-- Steve
On Mon, Sep 7, 2009 at 7:17 PM, Steven Rostedt<[email protected]> wrote:
> On Mon, 2009-09-07 at 17:14 -0700, Stephen Hemminger wrote:
>> On Mon, 7 Sep 2009 15:58:50 -0700
>> "Luis R. Rodriguez" <[email protected]> wrote:
>>
>> > A while ago I had read about an effort to consider removing tasklets
>> > [1] or at least trying to not use them. I'm unaware of the progress in
>> > this respect but since reading that article have always tried to
>> > evaluate whether or not we need tasklets on wireless drivers. I have
>> > also wondered whether work in irq context in other parts of the kernel
>> > can be moved to process context, a curious example being timers. I'll
>> > personally be trying to using only process context on bottom halves on
>> > future drivers but I figured it may be a good time to ask how serious
>> > was avoiding tasklets or using wrappers in the future to avoid irq
>> > context is or is it advised. Do we have a general agreement this is a
>> > good step forward to take? Has anyone made tests or changes on a
>> > specific driver from irq context to process context and proven there
>> > are no significant advantages of using irq context where you would
>> > have expected it?
>> >
>> > Wireless in particular should IMHO not require taskets for anything
>> > time sensitive that I can think about except perhaps changing channels
>> > quickly and to do that appropriately also process pending RX frames
>> > prior to a switch. It remains to be seen experimentally whether or not
>> > using a workqueue for RX processing would affect the time to switch
>> > channels negatively but I doubt it would be significant. I hope to
>> > test that with ath9k_htc.
>> >
>> > What about gigabit or 10 Gigabit Ethernet drivers ? Do they face any
>> > challenges which would yet need to be proven would not face issues
>> > when processing bottom halves in process context?
>> >
>> > [1] http://lwn.net/Articles/239633/
>> >
>> > Luis
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe netdev" in
>> > the body of a message to [email protected]
>> > More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>> Why not use NAPI, which is soft irq? Almost all 1G and 10G drivers
>> use NAPI.
>>
>> Process context is too slow.
>
> Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> plan to present at Linux Plumbers. I've been too distracted by other
> things, but hopefully I'll have some good numbers to present by then.
Which day at Plumbers was this planned for?
Luis
On Mon, 07 Sep 2009 22:17:34 -0400
Steven Rostedt <[email protected]> wrote:
> On Mon, 2009-09-07 at 17:14 -0700, Stephen Hemminger wrote:
> > On Mon, 7 Sep 2009 15:58:50 -0700
> > "Luis R. Rodriguez" <[email protected]> wrote:
> >
> > > A while ago I had read about an effort to consider removing tasklets
> > > [1] or at least trying to not use them. I'm unaware of the progress in
> > > this respect but since reading that article have always tried to
> > > evaluate whether or not we need tasklets on wireless drivers. I have
> > > also wondered whether work in irq context in other parts of the kernel
> > > can be moved to process context, a curious example being timers. I'll
> > > personally be trying to using only process context on bottom halves on
> > > future drivers but I figured it may be a good time to ask how serious
> > > was avoiding tasklets or using wrappers in the future to avoid irq
> > > context is or is it advised. Do we have a general agreement this is a
> > > good step forward to take? Has anyone made tests or changes on a
> > > specific driver from irq context to process context and proven there
> > > are no significant advantages of using irq context where you would
> > > have expected it?
> > >
> > > Wireless in particular should IMHO not require taskets for anything
> > > time sensitive that I can think about except perhaps changing channels
> > > quickly and to do that appropriately also process pending RX frames
> > > prior to a switch. It remains to be seen experimentally whether or not
> > > using a workqueue for RX processing would affect the time to switch
> > > channels negatively but I doubt it would be significant. I hope to
> > > test that with ath9k_htc.
> > >
> > > What about gigabit or 10 Gigabit Ethernet drivers ? Do they face any
> > > challenges which would yet need to be proven would not face issues
> > > when processing bottom halves in process context?
> > >
> > > [1] http://lwn.net/Articles/239633/
> > >
> > > Luis
> > > --
> >
> > Why not use NAPI, which is soft irq? Almost all 1G and 10G drivers
> > use NAPI.
> >
> > Process context is too slow.
>
> Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> plan to present at Linux Plumbers. I've been too distracted by other
> things, but hopefully I'll have some good numbers to present by then.
>
> -- Steve
>
>
A good performance test is to change the behaviour of the loopback
device and run lmbench. That checks the overhead without the specter
of real hardware.
--
* Michael Buesch <[email protected]> wrote:
> There are two things that I noticed. When looking at the "idle"
> percentage in "top" it regressed quite a bit when using threaded
> IRQ handlers. It shows about 8% less idle. This is with threaded
> IRQs patched in, but without WQ TX mechanism. Applying the WQ TX
> mechanism does not show any noticeable effect in "top".
>
> I'm not quite sure where the 8% slowdown on threaded IRQ handlers
> come from. I'm not really certain that it's _really_ a regression
> and not just a statistics accounting quirk. Why does threaded IRQs
> slow down stuff and threaded TX does not at all? That does not
> make sense at all to me.
Do you have an x86 box to test it on?
If yes then perfcounters can be used for _much_ more precise
measurements that you can trust. Do something like this:
perf stat -a --repeat 3 sleep 1
The '-a/--all' option will measure all CPUs - everything: IRQ
context, irqs-off regions, etc. The output will be comparable before
and after your threading patch.
Here's an example. I started one infinite loop on a testbox, which
is using 100% of a single CPU. The system-wide stats look like this:
# perf stat -a --repeat 3 sleep 1
Performance counter stats for 'sleep 1' (3 runs):
16003.320239 task-clock-msecs # 15.993 CPUs ( +- 0.044% )
94 context-switches # 0.000 M/sec ( +- 11.373% )
3 CPU-migrations # 0.000 M/sec ( +- 25.000% )
170 page-faults # 0.000 M/sec ( +- 0.518% )
3294001334 cycles # 205.832 M/sec ( +- 0.896% )
1088670782 instructions # 0.331 IPC ( +- 0.905% )
1720926 cache-references # 0.108 M/sec ( +- 1.880% )
61253 cache-misses # 0.004 M/sec ( +- 4.401% )
1.000623219 seconds time elapsed ( +- 0.002% )
The instruction count or the cycle count will go up or down,
precisely according to how the threaded handlers perform. These
stats are not time-sampled but 'real', so they reflect reality and
show whether your workload had to spend more (or fewer) cycles /
instructions / etc.
I started a second loop in addition to the first one, and perf stat
now gives me this output:
# perf stat -a --repeat 3 sleep 1
Performance counter stats for 'sleep 1' (3 runs):
16003.289509 task-clock-msecs # 15.994 CPUs ( +- 0.046% )
88 context-switches # 0.000 M/sec ( +- 15.933% )
2 CPU-migrations # 0.000 M/sec ( +- 14.286% )
188 page-faults # 0.000 M/sec ( +- 9.414% )
6481963224 cycles # 405.039 M/sec ( +- 0.011% )
2152924468 instructions # 0.332 IPC ( +- 0.054% )
397564 cache-references # 0.025 M/sec ( +- 1.217% )
59835 cache-misses # 0.004 M/sec ( +- 3.732% )
1.000576354 seconds time elapsed ( +- 0.005% )
Compare the two results:
before:
3294001334 cycles # 205.832 M/sec ( +- 0.896% )
1088670782 instructions # 0.331 IPC ( +- 0.905% )
after:
6481963224 cycles # 405.039 M/sec ( +- 0.011% )
2152924468 instructions # 0.332 IPC ( +- 0.054% )
The cycles/sec doubled - as expected. You could do the same with
your test and not have to rely on the very imprecise (and often
misleading) 'top' statistics for kernel development.
The IPC (instructions per cycle) factor stayed roughly constant,
showing that both workloads can push the same amount of instructions
when normalized to a single CPU. If a workload becomes very
cache-missy or executes a lot of system calls then the IPC factor
goes down; if it becomes more optimal, 'tight' code, then the IPC
factor goes up.
(The cache-miss rate was very low in both cases - it's a simple
infinite loop I tested.)
Furthermore the error bars in the rightmost column help you know
whether any difference in results is statistically significant, or
within the noise level.
Hope this helps,
Ingo
On Tue, 08 Sep 2009 12:40:23 -0400
Steven Rostedt <[email protected]> wrote:
> On Tue, 2009-09-08 at 09:11 -0700, Stephen Hemminger wrote:
>
> > > > Process context is too slow.
> > >
> > > Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> > > plan to present at Linux Plumbers. I've been too distracted by other
> > > things, but hopefully I'll have some good numbers to present by then.
> > >
> >
> >
> > That's great, does it keep the good properties of NAPI (irq disabling
> > and throttling?)
>
> Not exactly sure what you mean by throttling, but I'm assuming it will.
>
> As for irqs disabling, I'm trying to avoid doing that. Note, the device
> will have its interrupts disabled, but not all other devices will.
>
> -- Steve
>
>
The way NAPI works is that in the irq routine the device disables its
interrupts and schedules packet processing; when processing is done,
irqs are re-enabled. This means that if the machine is being flooded,
irqs stay off and packets get discarded in hardware (because the
device's ring is full) rather than in software (because the software
receive queue is full).
--
On Mon, 7 Sep 2009 15:58:50 -0700
"Luis R. Rodriguez" <[email protected]> wrote:
> A while ago I had read about an effort to consider removing tasklets
> [1] or at least trying to not use them. I'm unaware of the progress in
> this respect but since reading that article have always tried to
> evaluate whether or not we need tasklets on wireless drivers. I have
> also wondered whether work in irq context in other parts of the kernel
> can be moved to process context, a curious example being timers. I'll
> personally be trying to using only process context on bottom halves on
> future drivers but I figured it may be a good time to ask how serious
> was avoiding tasklets or using wrappers in the future to avoid irq
> context is or is it advised. Do we have a general agreement this is a
> good step forward to take? Has anyone made tests or changes on a
> specific driver from irq context to process context and proven there
> are no significant advantages of using irq context where you would
> have expected it?
>
> Wireless in particular should IMHO not require taskets for anything
> time sensitive that I can think about except perhaps changing channels
> quickly and to do that appropriately also process pending RX frames
> prior to a switch. It remains to be seen experimentally whether or not
> using a workqueue for RX processing would affect the time to switch
> channels negatively but I doubt it would be significant. I hope to
> test that with ath9k_htc.
>
> What about gigabit or 10 Gigabit Ethernet drivers ? Do they face any
> challenges which would yet need to be proven would not face issues
> when processing bottom halves in process context?
>
> [1] http://lwn.net/Articles/239633/
>
> Luis
> --
Why not use NAPI, which is soft irq? Almost all 1G and 10G drivers
use NAPI.
Process context is too slow.
--
On Mon, 2009-09-07 at 17:14 -0700, Stephen Hemminger wrote:
> On Mon, 7 Sep 2009 15:58:50 -0700
> "Luis R. Rodriguez" <[email protected]> wrote:
>
> > A while ago I had read about an effort to consider removing tasklets
> > [1] or at least trying to not use them. I'm unaware of the progress in
> > this respect but since reading that article have always tried to
> > evaluate whether or not we need tasklets on wireless drivers. I have
> > also wondered whether work in irq context in other parts of the kernel
> > can be moved to process context, a curious example being timers. I'll
> > personally be trying to using only process context on bottom halves on
> > future drivers but I figured it may be a good time to ask how serious
> > was avoiding tasklets or using wrappers in the future to avoid irq
> > context is or is it advised. Do we have a general agreement this is a
> > good step forward to take? Has anyone made tests or changes on a
> > specific driver from irq context to process context and proven there
> > are no significant advantages of using irq context where you would
> > have expected it?
> >
> > Wireless in particular should IMHO not require taskets for anything
> > time sensitive that I can think about except perhaps changing channels
> > quickly and to do that appropriately also process pending RX frames
> > prior to a switch. It remains to be seen experimentally whether or not
> > using a workqueue for RX processing would affect the time to switch
> > channels negatively but I doubt it would be significant. I hope to
> > test that with ath9k_htc.
> >
> > What about gigabit or 10 Gigabit Ethernet drivers ? Do they face any
> > challenges which would yet need to be proven would not face issues
> > when processing bottom halves in process context?
> >
> > [1] http://lwn.net/Articles/239633/
> >
> > Luis
> > --
>
> Why not use NAPI, which is soft irq? Almost all 1G and 10G drivers
> use NAPI.
>
> Process context is too slow.
Well, I'm hoping to prove the opposite. I'm working on some stuff that I
plan to present at Linux Plumbers. I've been too distracted by other
things, but hopefully I'll have some good numbers to present by then.
-- Steve
On Tuesday 08 September 2009 21:07:32 matthieu castet wrote:
> Michael Buesch <mb@...> writes:
> >
> > I recently converted the b43 driver to threaded interrupt handlers and
> > a workqueue based TX mechanism. (My motivation was porting b43 to the SDIO bus
> that
> > needs to sleep, so requires process context).
> Well you can do busy wait : for sending small sd command, the overhead of
> sleep ( context switch) can be bigger than the sleep.
The bus is _really_ slow and we push the whole network load through it.
So we really don't want to busy-wait; it would effectively lock up the
machine under network traffic.
And besides that, the SDIO subsystem requires process context. I was not
going to rewrite that.
> It depends of architecture. For example on older arm (armv5) context switch can
> be slow (need tlb and cache flush). It is not really true for kernel thread that
> keeps the same mmu mapping and don't need all those flush,
Yeah, but that's what it is. A kernel thread.
> but there still a
> visible performance hit.
That's the initial question. How big is it?
--
Greetings, Michael.
On Tue, 2009-09-08 at 09:11 -0700, Stephen Hemminger wrote:
> > > Process context is too slow.
> >
> > Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> > plan to present at Linux Plumbers. I've been too distracted by other
> > things, but hopefully I'll have some good numbers to present by then.
> >
>
>
> That's great, does it keep the good properties of NAPI (irq disabling
> and throttling?)
Not exactly sure what you mean by throttling, but I'm assuming it will.
As for irqs disabling, I'm trying to avoid doing that. Note, the device
will have its interrupts disabled, but not all other devices will.
-- Steve
[ added Thomas Gleixner to Cc]
On Tue, 2009-09-08 at 10:01 -0700, Stephen Hemminger wrote:
> On Tue, 08 Sep 2009 12:40:23 -0400
> Steven Rostedt <[email protected]> wrote:
>
> > On Tue, 2009-09-08 at 09:11 -0700, Stephen Hemminger wrote:
> >
> > > > > Process context is too slow.
> > > >
> > > > Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> > > > plan to present at Linux Plumbers. I've been too distracted by other
> > > > things, but hopefully I'll have some good numbers to present by then.
> > > >
> > >
> > >
> > > That's great, does it keep the good properties of NAPI (irq disabling
> > > and throttling?)
> >
> > Not exactly sure what you mean by throttling, but I'm assuming it will.
> >
> > As for irqs disabling, I'm trying to avoid doing that. Note, the device
> > will have its interrupts disabled, but not all other devices will.
> >
> > -- Steve
> >
> >
>
> The way NAPI works is that in irq routine, the device disables interrupts
> then schedules processing packets, when processing is done irq's are re-enabled.
> This means that if machine is being flooded, irq's stay off, and the packets
> get discarded (because device hardware ring is full), rather than in software
> (because software receive queue is full).
That sounds exactly like what threaded IRQs will do. When an interrupt
comes in, the device driver will disable the device interrupts, and then
the device irq thread handler is awoken.
The device irq handler will handle all the packets. If new packets come
in, and the hardware ring buffer is full, those packets will be dropped.
When the irq handler thread is done processing all pending packets, it
will re-enable the device's interrupts and go to sleep.
Yeah, looking at the NAPI code, it does seem to follow what threaded
interrupts do.
-- Steve
On Tuesday 08 September 2009 04:17:34 Steven Rostedt wrote:
> > Process context is too slow.
>
> Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> plan to present at Linux Plumbers. I've been too distracted by other
> things, but hopefully I'll have some good numbers to present by then.
I recently converted the b43 driver to threaded interrupt handlers and
a workqueue based TX mechanism. (My motivation was porting b43 to the SDIO bus that
needs to sleep, so requires process context).
There are two things that I noticed. When looking at the "idle" percentage in "top"
it regressed quite a bit when using threaded IRQ handlers. It shows about 8% less
idle. This is with threaded IRQs patched in, but without WQ TX mechanism. Applying
the WQ TX mechanism does not show any noticeable effect in "top".
I'm not quite sure where the 8% slowdown on threaded IRQ handlers comes from. I'm not
really certain that it's _really_ a regression and not just a statistics accounting quirk.
Why do threaded IRQs slow things down while threaded TX does not at all? That does not
make sense to me.
I think there's no real reason for process context being slow in general. It's just that
we have additional context switches. But these are fast on Linux.
--
Greetings, Michael.
Michael Buesch <mb@...> writes:
>
> I recently converted the b43 driver to threaded interrupt handlers and
> a workqueue based TX mechanism. (My motivation was porting b43 to the SDIO bus
that
> needs to sleep, so requires process context).
Well, you can busy-wait: for sending a small SD command, the overhead of
sleeping (a context switch) can be bigger than the wait itself.
>
> I think there's no real reason for process context being slow in general. It's
just that
> we have additional context switches. But these are fast on Linux.
>
It depends on the architecture. For example, on older ARM (ARMv5) a context switch can
be slow (it needs TLB and cache flushes). That is not really true for a kernel thread,
which keeps the same MMU mapping and doesn't need all those flushes, but there is still
a visible performance hit.
Matthieu
On Mon, 07 Sep 2009 22:17:34 -0400
Steven Rostedt <[email protected]> wrote:
> On Mon, 2009-09-07 at 17:14 -0700, Stephen Hemminger wrote:
> > On Mon, 7 Sep 2009 15:58:50 -0700
> > "Luis R. Rodriguez" <[email protected]> wrote:
> >
> > > A while ago I had read about an effort to consider removing tasklets
> > > [1] or at least trying to not use them. I'm unaware of the progress in
> > > this respect but since reading that article have always tried to
> > > evaluate whether or not we need tasklets on wireless drivers. I have
> > > also wondered whether work in irq context in other parts of the kernel
> > > can be moved to process context, a curious example being timers. I'll
> > > personally be trying to using only process context on bottom halves on
> > > future drivers but I figured it may be a good time to ask how serious
> > > was avoiding tasklets or using wrappers in the future to avoid irq
> > > context is or is it advised. Do we have a general agreement this is a
> > > good step forward to take? Has anyone made tests or changes on a
> > > specific driver from irq context to process context and proven there
> > > are no significant advantages of using irq context where you would
> > > have expected it?
> > >
> > > Wireless in particular should IMHO not require taskets for anything
> > > time sensitive that I can think about except perhaps changing channels
> > > quickly and to do that appropriately also process pending RX frames
> > > prior to a switch. It remains to be seen experimentally whether or not
> > > using a workqueue for RX processing would affect the time to switch
> > > channels negatively but I doubt it would be significant. I hope to
> > > test that with ath9k_htc.
> > >
> > > What about gigabit or 10 Gigabit Ethernet drivers ? Do they face any
> > > challenges which would yet need to be proven would not face issues
> > > when processing bottom halves in process context?
> > >
> > > [1] http://lwn.net/Articles/239633/
> > >
> > > Luis
> > > --
> >
> > Why not use NAPI, which is soft irq? Almost all 1G and 10G drivers
> > use NAPI.
> >
> > Process context is too slow.
>
> Well, I'm hoping to prove the opposite. I'm working on some stuff that I
> plan to present at Linux Plumbers. I've been too distracted by other
> things, but hopefully I'll have some good numbers to present by then.
>
That's great, does it keep the good properties of NAPI (irq disabling
and throttling?)
--