This is version 3 of the genirq patch-queue, against 2.6.17-rc4. This
patch-queue improves the generic IRQ layer to be truly generic, by
adding various abstractions and features to it, without impacting
existing functionality. It was written by Thomas Gleixner (who has done
most of the heavy-lifting) and me. We reused many bits of code and many
concepts from Russell King's ARM IRQ layer.
The ARM architecture has been updated to make use of this improved
generic IRQ layer. The new code also enables a cleaner and simpler
implementation of low-level irq-chip details, chained handlers and other
high-level irq-flow handlers.
The patch-queue consists of 50 individual patches. The queue begins with
a handful of cleanups, to make sure we are adding features to a cleaned-up
codebase. Then come the features that don't need the irq-chip
abstractions but are necessary extensions, then comes the core irq-chip
abstraction (genirq-core), then the features that rely on it, and finally
the conversion of the ARM architecture to the new generic IRQ layer.
The full patch-queue can also be downloaded from:
http://redhat.com/~mingo/generic-irq-subsystem/
It has been build-tested with allyesconfig, and booted on x86, x86_64
and various ARM platforms. It has also been build-tested on all 50 ARM
platforms. Current ARM testing results are at:
http://www.linutronix.de/index.php?page=46
Many thanks to the ARM developers who ran the initial patches on their
ARM boards and helped track down initial migration bugs.
Review suggestions for past iterations of this code from Andrew Morton,
Benjamin Herrenschmidt and Christoph Hellwig were incorporated in this
version.
Comments, suggestions welcome,
Ingo, Thomas
Hi Ingo, Thomas !
Ok, I think it's better :) But I also think it's not there yet....
Separate flow handlers as "the standard recommended way to go" isn't
the right thing to do imho.
While I agree to leave room for such flow handlers per irq_desc for
really broken interrupt controllers, I'm still not convinced that the
"generic" one (i.e. __do_IRQ) can't be used for pretty much everything
(maybe with a few changes), and having those 4 separate "default" flow
handlers presented as being "the way to go" by the documentation
isn't quite right.
In fact, I also think it would be less robust (I'll give an example later).
I also have reservations about the way the arch code is supposed to
decide how/when to call the various handle_irq_* handlers with
variable locking requirements, and is responsible for getting to the
irq_desc (at least a helper here would be of some use).
To summarize before I explain (heh): while I agree that it _might_ be
useful to give the option of having separate flow handlers, I don't
think it should be the default/recommended practice, and we shouldn't have
to provide specialized flow handlers in the generic code. In fact, one
standard robust flow handler that deals with the most common cases of
edge, level and percpu interrupts should be enough. I'll explain some of
my reasons in more detail below.
Let's first go through the changes to irq_chip/hw_interrupt_type
before I dig into the rationale of having (or not having) split
handlers:
From your previous implementation, you removed the distinction between
irq_type and irq_chip; they are no longer separate structures.
But you still basically merged all the "new" fields together. Thus we
end up with things like both enable/disable/ack/end "high level" and
mask+ack/unmask "low level" callbacks in the irq chip. That makes
things confusing.
If we go back to the initial hw_interrupt_type (which was a misnamed
hw_interrupt_controller, or irq_chip, I'm not opposing the name
change), we have the enable/disable/ack/end "API" to the main old flow
handler (__do_IRQ) and other API functions. I am not convinced that it
makes sense to add "lower level" functions to it at this level.
Essentially, I think those new callbacks are either redundant or not
necessary. If your intent is to expose a "high level" vs. a "low
level" interface to the controller, then I disagree with the design
since that "low level" interface is essentially tied to the usage of
split flow handlers and to the way "very dumb" interrupt
controllers work (and even with those, I think it's not necessary).
But let's first look at the callbacks themselves:
First, disable/enable at the controller level is essentially identical
to mask/unmask. There is some clear redundancy there. The depth
counting or flag checking should be done by the caller in any case,
so the controller enable/disable should just be what they are:
low-level dumb mask/unmask.
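What I mean is roughly this (a quick sketch with made-up names, not the
real structures, and locking omitted):

struct my_irq_chip {
        void (*mask)(unsigned int irq);
        void (*unmask)(unsigned int irq);
};

struct my_irq_desc {
        struct my_irq_chip *chip;
        unsigned int depth;             /* nested disable count */
};

void my_disable_irq(struct my_irq_desc *desc, unsigned int irq)
{
        /* the generic layer does the depth counting... */
        if (desc->depth++ == 0)
                /* ...so the chip callback is nothing but a dumb mask */
                desc->chip->mask(irq);
}

void my_enable_irq(struct my_irq_desc *desc, unsigned int irq)
{
        if (desc->depth && --desc->depth == 0)
                desc->chip->unmask(irq);
}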
The remaining one is mask_and_ack. I don't personally think it is
needed; ack is enough. Whether ack should mask or not is, I think,
local to the irq_chip implementation.
If a given chip wants to mask some type of interrupts when ack'ing
them, it's free to do so and to unmask them in end() based on the
IRQ_LEVEL flag, for example; I don't think that mandates an entire level
of abstraction (separate flow handlers, that is) to handle that simple
case. Since a flow handler should imho be specific to a given (broken)
interrupt chip that can't, for some (unknown) reason, use the
default one, I see no problem having that irq_chip implementation
of ack do something specifically matching the needs of whatever flow
handler it's using. One could argue that it will add an "if ()" or two
in the ack implementation, and my answer is that's better than an
indirect function call (see later why I think the default handler
shouldn't be a function pointer, for the same reason).
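To give an idea, such a chip's ack()/end() would look something like this
(made-up names, just for illustration; the real thing would of course use
the actual descriptor and flag):

#define MY_IRQ_LEVEL    0x0001          /* stand-in for the IRQ_LEVEL flag */

struct my_irq_info {
        unsigned int status;
};

static struct my_irq_info my_irqs[32];

static void my_hw_mask(unsigned int irq)   { /* poke the mask register */ }
static void my_hw_unmask(unsigned int irq) { /* poke the mask register */ }
static void my_hw_clear(unsigned int irq)  { /* clear the latch/ack bit */ }

static void my_chip_ack(unsigned int irq)
{
        /* one "if ()" here instead of a whole separate flow handler */
        if (my_irqs[irq].status & MY_IRQ_LEVEL)
                my_hw_mask(irq);
        my_hw_clear(irq);
}

static void my_chip_end(unsigned int irq)
{
        if (my_irqs[irq].status & MY_IRQ_LEVEL)
                my_hw_unmask(irq);
}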
Now, back to the root of my problem, which is why I don't think we need
to generalise separate flow handlers, and why we should keep them a
special case for broken controllers.
First, as we discussed on IRC, I have yet to find a convincing example
of an irq controller that cannot fit the current __do_IRQ() flow
handler.
I've turned the example you gave me, of a cascaded demuxer that does edge
interrupts, around every which way, and I still can't see why it can't be
handled properly without special flow handlers. I suspect such a controller
also has HW/design bugs that I haven't guessed; an explanation from
Russell King would be welcome here.
Despite that, I agreed that it might be ok to leave the _option_ of
overriding the main flow handler for a given irq_desc. But that should
be clearly presented as an option for use by special case/broken
controllers.
One example (among others) of why I find the split-handlers
approach less robust is the logic of handling an IRQ that is already
in progress. This logic is useful for edge interrupts in the normal
case, and thus you implemented it in your edge handler. But why remove
it from the level handler ? For "normal" level interrupts, it's not
supposed to happen, but IRQ controllers have bugs, especially smarter
ones, and that logic can't hurt. For example, some SMP distributed irq
controllers might occasionally incorrectly deliver a given IRQ to more
than one CPU at once. Depending on the timing and the architecture
(how the vectors are sent to the processor), this can result in just a
spurious interrupt (no vector) or a "short" irq. In that case, with
your "simplified" level handler, you'll end up potentially re-entering
the "user" (i.e. driver) action handler. With the "security", the worst
that can happen is that the "user" action handler will be called for
an interrupt that is no longer pending; it will do nothing, return
IRQ_NONE, and we'll take note of a spurious interrupt. Probably only
one, since even if it's a level interrupt, it shouldn't re-occur right
away as it's a "short" interrupt. Either way, it's handled in a robust
fashion, while potentially re-entering the driver handler isn't. I like
that kind of by-design safety.
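For reference, the kind of logic I'm talking about boils down to something
like this (rough sketch, made-up names, locking left out):

#define MY_IRQ_INPROGRESS       0x0001
#define MY_IRQ_PENDING          0x0002

struct my_irq_desc {
        unsigned int status;
        /* chip, action list, lock, ... */
};

static void my_run_handlers(struct my_irq_desc *desc)
{
        /* walk the action list and call the driver handlers;
         * worst case they return IRQ_NONE once */
}

static void my_flow_handler(struct my_irq_desc *desc)
{
        /* a second (possibly bogus) delivery while we are already running
         * is just noted and folded into the loop below, instead of
         * re-entering the driver handler on another CPU */
        if (desc->status & MY_IRQ_INPROGRESS) {
                desc->status |= MY_IRQ_PENDING;
                return;
        }

        desc->status |= MY_IRQ_INPROGRESS;
        do {
                desc->status &= ~MY_IRQ_PENDING;
                my_run_handlers(desc);
        } while (desc->status & MY_IRQ_PENDING);
        desc->status &= ~MY_IRQ_INPROGRESS;
}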
Also, the "split" handlers enforce the semantic that, for example, a
level interrupt needs to be mask'ed and ack'ed, to be unmasked later
while an edge interrupt should be left free to flow after ack. That
sounds good on paper and matches probably the requirements of dumb
controllers but doesn't quite agrees with smarter things like
OpenPIC/MPIC, XICS, or even hypervisors. As I wrote above, I think the
generic flow handler calling ack() and end() is plenty enough, it's up to
the irq_chip implementation of those two to decide wether they should
mask a certain type of interrupt (and later unmask it).
Of course that means 2 or 3 more lines of code in the implementation
of dumb interrupt chips, plus one classic source of bugs: the
unmasking in end(), which should only be done if the interrupt didn't
get disabled while being handled. I understand that you are trying to
make life easier for those. Maybe one option here would be to provide
"helpers" for use by these things. The simplest would be in the form
of an irq_end_shall_unmask() function that can be called in end() to
know whether to unmask or not. A more complex option would be to have an
irq_dumb_chip which contains those additional "low level" functions
and a pair of "helper" versions of ack and end that can be used by
a dumb_chip... That's more like what we do elsewhere in Linux:
Core ----> irq_chip (std set of callbacks)
           |
           |
           |--> irq_dumb_chip (extended set of callbacks)
That is the "standard" interface to the driver is the high level one,
and we can provide pluggable helper functions to implement it on top
of a low level driver
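To make the simpler of the two options concrete, the helper and a dumb
chip's end() could look roughly like this (sketch only, made-up names):

#define MY_IRQ_DISABLED         0x0001
#define MY_IRQ_INPROGRESS       0x0002

struct my_irq_desc {
        unsigned int status;
};

/* the hypothetical irq_end_shall_unmask(): tells a dumb chip's end()
 * whether it is safe to unmask, i.e. the interrupt didn't get
 * disabled while it was being handled */
static inline int my_irq_end_shall_unmask(struct my_irq_desc *desc)
{
        return !(desc->status & (MY_IRQ_DISABLED | MY_IRQ_INPROGRESS));
}

static void my_hw_unmask(unsigned int irq) { /* poke the mask register */ }

static void my_dumb_chip_end(struct my_irq_desc *desc, unsigned int irq)
{
        if (my_irq_end_shall_unmask(desc))
                my_hw_unmask(irq);
}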
But again, that is provided you really think it's important to save
those few lines of code that need to be implemented in the ack() and
end() handlers of dumb chips... I don't :)
Now, previously, I also said why I didn't like indirect function calls
and why if's are imho better... If you look at the current __do_IRQ()
you might think it would make sense, for example, to have percpu
handling be a separate flow handler. But in practice, especially with
heavily pipelined CPUs, it tends to actually be a lot slower to branch
through a function pointer than to handle an if. Maybe it's worth
moving that percpu flow handler to a separate static function, but the
construct:
if (special_case) {
        do_special_case();
        return;
}
do_normal_case();
is faster in many situations than calling a function pointer that can
be either do_special_case() or do_normal_case(). The function pointer
abstraction is still useful in many circumstances and has its own
justification, but I think in our case, we don't want it.
That's also why we should provide a toplevel:
extern irqreturn_t handle_irq(unsigned int irq, struct pt_regs *regs);
That essentially boils down to:
if (desc->flow_handler)
        return desc->flow_handler();
normal_flow_handler();
(with the latter being either a function call, or just the expanded
thing that is currently called __do_IRQ())
What I don't like in your current approach is that (unless I
missed something) the arch code (toplevel) is supposed to find
the irq_desc and do the desc locking.
Such a handle_irq() would obviously also be called from within a
cascade handler. (Unless you want to use my SA_CASCADE
approach, but I'm not sure it's that useful here.)
I think the arch should have a single function (as above) to call when
it gets a toplevel interrupt. That function handles picking up the
irq_desc, the locking, etc. Ideally, the arch shouldn't need to know about
irq_desc outside of the actual irq_chip implementation code.
Also note that I'm calling the flow handler without the lock. Your
code seems to be slightly inconsistent in its locking rules for the
flow handlers. I think locking should be local to the flow handler (some
flow handlers might operate lockless, like percpu interrupts do).
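Putting the last few points together, the entry point I'm describing is
pretty much just this (sketch, made-up names, irqreturn_t replaced by a
stand-in):

struct pt_regs;

typedef int my_irqreturn_t;             /* stand-in for irqreturn_t */

struct my_irq_desc {
        my_irqreturn_t (*flow_handler)(unsigned int irq,
                                       struct my_irq_desc *desc,
                                       struct pt_regs *regs);
        /* lock, status, chip, action list, ... */
};

#define MY_NR_IRQS 256
static struct my_irq_desc my_irq_descs[MY_NR_IRQS];

static my_irqreturn_t my_default_flow_handler(unsigned int irq,
                                              struct my_irq_desc *desc,
                                              struct pt_regs *regs)
{
        /* the __do_IRQ()-style common flow; it takes desc->lock itself
         * (or no lock at all for percpu interrupts) */
        return 0;
}

my_irqreturn_t my_handle_irq(unsigned int irq, struct pt_regs *regs)
{
        struct my_irq_desc *desc = &my_irq_descs[irq];

        /* the arch (or a cascade handler) never touches the descriptor
         * or its lock directly */
        if (desc->flow_handler)
                return desc->flow_handler(irq, desc, regs);
        return my_default_flow_handler(irq, desc, regs);
}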
Ok, that's pretty much all, unless I missed a bit or two. There is a
lot of good stuff in your patches, don't get me wrong; I just think the
flow handler thing is a bit "too much" in its current incarnation, at
least until somebody proves to me that it's really useful :)
All of the cleanups look good, the new flags too (NOPROBE, NOREQUEST,
etc.).
Cheers,
Ben.
First, I'll say I haven't read Ingo's patches yet, sorry. I'm only
responding to _this_ message, not commenting on Ingo's work. So
this is an initial response only. I'm not going to be able to properly
review this until June.
On Wed, May 17, 2006 at 04:11:56PM +1000, Benjamin Herrenschmidt wrote:
> From your previous implementation, you removed the distinction between
> irq_type and irq_chip; they are no longer separate structures.
> But you still basically merged all the "new" fields together. Thus we
> end up with things like both enable/disable/ack/end "high level" and
> mask+ack/unmask "low level" callbacks in the irq chip. That makes
> things confusing.
First question I have for you is whether you've read through the
existing ARM IRQ handling code. If not, please do so because it
represents real requirements we have. Almost everything you see
there is not "by accident" but "by design because it was necessary"
to solve real world problems.
For instance, we do not actively mask interrupts when disable_irq()
is called because we have to record events on edge triggered
interrupts to replay them when a subsequent enable_irq() occurs.
(Some people disagree with this, which is fine from an academic
view point, but unfortunately we have to deal with real life
systems and implementations, where things have to work.)
We also have to deal with stupid combinations such as edge triggered
inputs connected to a secondary interrupt controller, which provides
a pulse trigger output. In turn, this is logically orred with some
other random non-maskable interrupt sources and fed into an edge
triggered input on the primary interrupt controller.
Unfortunately, saying "we don't support that" is not an option. We
do support that and we support it cleanly on ARM with the code we
have.
> If we go back to the initial hw_interrupt_type (which was a misnamed
> hw_interrupt_controller, or irq_chip, I'm not opposing the name
> change), we have the enable/disable/ack/end "API" to the main old flow
> handler (__do_IRQ) and other API functions. I am not convinced that it
> makes sense to add "lower level" functions to it at this level.
> Essentially, I think those new callbacks are either redundant or not
> necessary.
You are probably correct, but how do we get to that point without
rewriting something from scratch and probably breaking a lot
of machines that used to work in the process?
> First, as we discussed on IRC, I have yet to find a convincing example
> of an irq controller that cannot fit the current __do_IRQ() flow
> handler.
Well, I've not been too forthcoming about this whole "generic IRQ"
thing because (a) I remember all the pain that we had in 2.4 kernels
when we modelled our interrupt system on the x86 way, and (b) I re-
designed our model to something which works for all our requirements
and it does work well with the absolute minimum of overhead... on ARM.
So, I'm rather scared of letting go of something that I know fits our
requirements in favour of going back to something which might be able
to be bent to fit our requirements but might involve compromising on
some corner case.
That said, if someone can show that they can implement a generic IRQ
subsystem which works for x86, PPC, Alpha, ARM, etc, and get it tested
on enough ARM platforms that we're reasonably sure that it's going to
work, I'm not going to stand in the way of that.
> One example (among others) of why I find the split-handlers
> approach less robust is the logic of handling an IRQ that is already
> in progress. This logic is useful for edge interrupts in the normal
> case, and thus you implemented it in your edge handler. But why remove
> it from the level handler ? For "normal" level interrupts, it's not
> supposed to happen, but IRQ controllers have bugs, especially smarter
> ones, and that logic can't hurt.
Firstly, if you require the more "robust" handling, then you can use
the edge method - nothing stops that. But why impose the considerable
overhead of the edge method on everyone?
Secondly, there are fundamental differences in the way we handle "edge"
and "level" IRQs on ARM - "edge" interrupts are _always_ unmasked
prior to calling the handlers, whereas "level" interrupts must _never_
be unmasked until all handlers have completed.
The constraint on the "edge" case is that if we leave the interrupt
masked while the handlers are called, and, let's say your ethernet
chip receives a packet just as the driver's handler returns, that
edge transition will be lost, and you'll lose your network interface.
The constraint on the "level" case is that if you leave the interrupt
unmasked, as soon as the CPU unmasks its interrupt (e.g., when calling
a handler with SA_INTERRUPT) you immediately take an interrupt exception
and repeat the process, until your kernel stack has gobbled up all
system memory.
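Stripped down to the bone, the two flows look something like this (a
grossly simplified sketch, not the actual ARM code, which also has to deal
with pending/replay state):

struct my_chip {
        void (*ack)(unsigned int irq);
        void (*mask)(unsigned int irq);
        void (*unmask)(unsigned int irq);
};

struct my_irq_desc {
        struct my_chip *chip;
        /* status, action list, pending/replay state, ... */
};

static void my_run_handlers(struct my_irq_desc *desc, unsigned int irq)
{
        /* call the driver handlers */
}

static void my_handle_edge(struct my_irq_desc *desc, unsigned int irq)
{
        desc->chip->ack(irq);
        /* edge: the source stays unmasked while the handlers run, so a
         * new transition arriving meanwhile is latched, not lost */
        my_run_handlers(desc, irq);
}

static void my_handle_level(struct my_irq_desc *desc, unsigned int irq)
{
        /* level: the line stays asserted, so mask (and ack) before the
         * handlers run and unmask only once they have all completed,
         * otherwise we retake the exception as soon as the CPU unmasks */
        desc->chip->mask(irq);
        desc->chip->ack(irq);
        my_run_handlers(desc, irq);
        desc->chip->unmask(irq);
}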
> Also, the "split" handlers enforce the semantic that, for example, a
> level interrupt needs to be mask'ed and ack'ed, to be unmasked later
> while an edge interrupt should be left free to flow after ack. That
> sounds good on paper and matches probably the requirements of dumb
> controllers but doesn't quite agrees with smarter things like
> OpenPIC/MPIC, XICS, or even hypervisors.
As you see above, there are good reasons for that difference in behaviour,
and enforcing one common behaviour breaks real-life hardware on ARM.
I don't have much more of a response now - maybe once I've reviewed
the changes in full (see note at the top of this message) I might have
some more comments.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core
> First question I have for you is whether you've read through the
> existing ARM IRQ handling code. If not, please do so because it
> represents real requirements we have. Almost everything you see
> there is not "by accident" but "by design because it was necessary"
> to solve real world problems.
>
> For instance, we do not actively mask interrupts when disable_irq()
> is called because we have to record events on edge triggered
> interrupts to replay them when a subsequent enable_irq() occurs.
Hrm... that is lost with the Ingo/Thomas patch at the moment... mostly
because the irq_type structure is now gone and the only remnant of it
is a per-desc "handler" function, which allows custom flow handlers but
not custom disable_irq/enable_irq.
That might be one argument to keep the split between disable/enable and
mask/unmask in the irq_chip structure, but I'm not too keen on that,
since that means adding back flow information to irq_chip, which the
patch is trying to get rid of.
An option would be to re-introduce irq_type, but I really don't like it.
> (Some people disagree with this, which is fine from an academic
> view point, but unfortunately we have to deal with real life
> systems and implementations, where things have to work.)
What is the exact reason why you need to do that ? Your controller
stops latching edges when they are masked completely ? Or is it just not
emitting upstream irqs (but still has the bits set) ? The old Apple one
doesn't re-emit when re-enabled, but we can still read what happened from
the chip, and thus we re-emit them ourselves when re-enabling.
> We also have to deal with stupid combinations such as edge triggered
> inputs connected to a secondary interrupt controller, which provides
> a pulse trigger output. In turn, this is logically orred with some
> other random non-maskable interrupt sources and fed into an edge
> triggered input on the primary interrupt controller.
So you have a secondary controller that takes an edge input and outputs
an edge too, and that edge is also shared with another edge interrupt from
another device ? Damn ! Sharing of edge interrupts is fairly insane in
the first place :) Still, I have yet to see why the above is a problem
with the current flow handler ;)
> Unfortunately, saying "we don't support that" is not an option. We
> do support that and we support it cleanly on ARM with the code we
> have.
Oh I'm sure of that, but nobody has proven to me yet that the code we have
in __do_IRQ() can't support that too :)
At this point it was pretty much agreed to have custom flow handlers
(even if I'm still convinced that a generic one works just fine :),
though we don't have custom enable_irq and disable_irq in the flow
handler. Thus you'll still need an irq_chip per type with the current
approach if you want to do that kind of soft-disable of edge interrupts.
> You are probably correct, but how do we get to that point without
> rewriting something from scratch and probably breaking a lot
> of machines that used to work in the process?
Well, at least _document_ the old disable/enable callbacks as being
redundant with the new mask/unmask and on the way to obsolescence, to
make the situation clear :) I didn't understand why we kept 4 calls
until I finally figured out that they indeed have the same semantics;
it's just a renaming/compatibility issue.
> Well, I've not been too forthcoming about this whole "generic IRQ"
> thing because (a) I remember all the pain that we had in 2.4 kernels
> when we modelled our interrupt system on the x86 way, and (b) I re-
> designed our model to something which works for all our requirements
> and it does work well with the absolute minimum of overhead... on ARM.
>
> So, I'm rather scared of letting go of something that I know fits our
> requirements in favour of going back to something which might be able
> to be bent to fit our requirements but might involve compromising on
> some corner case.
>
> That said, if someone can show that they can implement a generic IRQ
> subsystem which works for x86, PPC, Alpha, ARM, etc, and get it tested
> on enough ARM platforms that we're reasonably sure that it's going to
> work, I'm not going to stand in the way of that.
Ok good :) I was afraid you would stay there saying "if the new generic
code isn't exactly like the ARM stuff I'll stay in my fork" :)
> Firstly, if you require the more "robust" handling, then you can use
> the edge method - nothing stops that. But why impose the considerable
> overhead of the edge method on everyone?
"considerable overhead" ? heh ! One if and a while loop... I wouldn't
call that considerable :)
> Secondly, there are fundamental differences in the way we handle "edge"
> and "level" IRQs on ARM - "edge" interrupts are _always_ unmasked
> prior to calling the handlers, whereas "level" interrupts must _never_
> be unmasked until all handlers have completed.
Yes, I have seen that. My main concern was that "smart" controllers that
handle the flow in HW are unhappy with that level of abstraction
(mask/unmask being called from the flow handler instead of just
ack/end). That is solved by having a separate "fastack" flow handler for
these though.
> The constraint on the "edge" case is that if we leave the interrupt
> masked while the handlers are called, and, let's say your ethernet
> chip receives a packet just as the driver's handler returns, that
> edge transition will be lost, and you'll lose your network interface.
Well, it's the same issue you are talking about for
enable_irq/disable_irq and edge interrupts, essentially that you don't
get edge interrupts that were masked. Thus my question above: are they
masked prior to being latched (thus totally lost) or just not re-emitted
when unmasking ? In the latter case, it's mostly a matter of reading back
and re-emitting.
However, I do like the whole concept of soft-disabling in the _generic_
case (it's usable for level interrupts as well; they just need to be
masked if they happen while disabled). The current patch from Thomas and
Ingo doesn't do soft-disable afaik. Thus you'll still get your
chip->mask called when disable_irq() is called (which you don't want).
I wonder if we can generalise soft-masking in a way that allows us to
nicely handle your case as well, without having to have per-chip
high-level disable/enable...
> The constraint on the "level" case is that if you leave the interrupt
> unmasked, as soon as the CPU unmasks its interrupt (e.g., when calling
> a handler with SA_INTERRUPT) you immediately take an interrupt exception
> and repeat the process, until your kernel stack has gobbled up all
> system memory.
Yes well, thanks for interrupts 101 :)
Ben.
On Thu, May 18, 2006 at 10:32:41AM +1000, Benjamin Herrenschmidt wrote:
> > (Some people disagree with this, which is fine from an academic
> > view point, but unfortunately we have to deal with real life
> > systems and implementations, where things have to work.)
>
> What is the exact reason why you need to do that ? Your controller
> stops latching edges when they are masked completely ?
Yes. The only way to disable interrupt detection on these inputs is
to disable the rising and falling edge detection enables.
Hence, if you disable the edge detection to mask the interrupt, and
an edge transition occurs (the one which you're interested in), when
you come to re-enable the edge detection, that transition will have
been missed.
> > We also have to deal with stupid combinations such as edge triggered
> > inputs connected to a secondary interrupt controller, which provides
> > a pulse trigger output. In turn, this is logically orred with some
> > other random non-maskable interrupt sources and fed into an edge
> > triggered input on the primary interrupt controller.
>
> So you have a secondary controller that takes an edge input and outputs
> an edge too, and that edge is also shared with another edge interrupt from
> another device ? Damn ! Sharing of edge interrupts is fairly insane in
> the first place :) Still, I have yet to see why the above is a problem
> with the current flow handler ;)
Not quite - the other non-maskable interrupt sources are level based
outputs. In this particular case, these sources are fed through an
FPGA and you have status bits for each, but no way to enable or disable
each individual source. The output interrupt is just a logical OR of
the sources.
> > Secondly, there are fundamental differences in the way we handle "edge"
> > and "level" IRQs on ARM - "edge" interrupts are _always_ unmasked
> > prior to calling the handlers, whereas "level" interrupts must _never_
> > be unmasked until all handlers have completed.
>
> Yes, I have seen that. My main concern was that "smart" controllers that
> handle the flow in HW are unhappy with that level of abstraction
> (mask/unmask being called from the flow handler instead of just
> ack/end). That is solved by having a separate "fastack" flow handler for
> these though.
It sounds like you've solved this problem already.
> > The constraint on the "edge" case is that if we leave the interrupt
> > masked while the handlers are called, and, let's say your ethernet
> chip receives a packet just as the driver's handler returns, that
> > edge transition will be lost, and you'll lose your network interface.
>
> Well, it's the same issue you are talking about for
> enable_irq/disable_irq and edge interrupts, essentially that you don't
> get edge interrupts that were masked. Thus my question above: are they
> masked prior to being latched (thus totally lost) or just not re-emitted
> when unmasking ? In the latter case, it's mostly a matter of reading back
> and re-emitting.
Totally lost - there is nothing to read back to tell you that the
active edge transition occurred.
> However, I do like the whole concept of soft-disabling in the _generic_
> case (it's usable for level interrupts as well; they just need to be
> masked if they happen while disabled).
That is incredibly wasteful for level interrupts - what you're suggesting
means that to service a level interrupt, you take an interrupt exception,
start processing it, take another interrupt exception, disable the source,
return from that interrupt and continue to service it. No thanks.
I thought you were the one concerned about interrupt handling overhead
(about the overhead introduced by function pointer calls), but that
idea _far_ outweighs function pointer overheads.
> > The constraint on the "level" case is that if you leave the interrupt
> unmasked, as soon as the CPU unmasks its interrupt (e.g., when calling
> > a handler with SA_INTERRUPT) you immediately take an interrupt exception
> > and repeat the process, until your kernel stack has gobbled up all
> > system memory.
>
> Yes well, thanks for interrupts 101 :)
Sigh, I'm not teaching you to suck eggs - I was trying to justify
clearly _why_ we have these different "flow" handlers on ARM and why
they are important by contrasting the differences between them.
Obviously, I should've just ignored your email since you know everything
already.
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core
Hi Russell !
> That is incredibly wasteful for level interrupts - what you're suggesting
> means that to service a level interrupt, you take an interrupt exception,
> start processing it, take another interrupt exception, disable the source,
> return from that interrupt and continue to service it. No thanks.
Oh no, that's not what I meant... I've come to agree with having several
flow handlers, and thus the level flow handler would mask & ack, then
handle, then unmask, as it should be for a level interrupt... What I
meant is that disable_irq could do a soft-disable in all cases, like it
seems to happen right now in the ARM code but not in Thomas' patch.
> I thought you were the one concerned about interrupt handling overhead
> (about the overhead introduced by function pointer calls), but that
> idea _far_ outweighs function pointer overheads.
I think you misunderstood what I meant by soft-disable :) Basically
bring in more of what ARM does in disable_irq/enable_irq.
> Sigh, I'm not teaching you to suck eggs - I was trying to justify
> clearly _why_ we have these different "flow" handlers on ARM and why
> they are important by contrasting the differences between them.
Yup, and I've finally been convinced, and Thomas' patch _does_ have
different flow handlers. However, it doesn't do soft-disable or
lazy-disable as you prefer for disable_irq, which means that you'll
still lose edge irqs on ARM. There are 2 ways out:
either make disable_irq/enable_irq go through a specific implementation
provided by the flow handler, or just generically have disable_irq mark the
interrupt as disabled in the descriptor, and only actually disable it if
it happens to occur while it was marked disabled (in which case it can
be marked "pending" and possibly re-triggered on enable_irq if the
controller doesn't latch). I even had an idea of how to easily avoid the
re-trigger on controllers that _do_ latch properly: by having
chip->unmask return whether it needs re-emitting or not.
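In pseudo-code, that second way out would be something like this
(hand-waving sketch, made-up names, locking and the flow handler details
omitted):

#define MY_IRQ_DISABLED 0x0001
#define MY_IRQ_PENDING  0x0002

struct my_chip {
        void (*mask)(unsigned int irq);
        /* returns non-zero if the source doesn't latch while masked and
         * therefore needs a software re-trigger on unmask */
        int (*unmask)(unsigned int irq);
};

struct my_irq_desc {
        struct my_chip *chip;
        unsigned int status;
};

static void my_retrigger(struct my_irq_desc *desc, unsigned int irq)
{
        /* re-emit the interrupt in software */
}

static void my_disable_irq(struct my_irq_desc *desc, unsigned int irq)
{
        /* purely soft: just mark it, don't touch the hardware */
        desc->status |= MY_IRQ_DISABLED;
}

/* called from the flow handler if the interrupt fires anyway */
static int my_irq_soft_disabled(struct my_irq_desc *desc, unsigned int irq)
{
        if (!(desc->status & MY_IRQ_DISABLED))
                return 0;
        /* now actually mask it and remember that it happened */
        desc->chip->mask(irq);
        desc->status |= MY_IRQ_PENDING;
        return 1;
}

static void my_enable_irq(struct my_irq_desc *desc, unsigned int irq)
{
        desc->status &= ~MY_IRQ_DISABLED;
        if (desc->status & MY_IRQ_PENDING) {
                desc->status &= ~MY_IRQ_PENDING;
                /* only re-emit if the controller didn't latch it for us */
                if (desc->chip->unmask(irq))
                        my_retrigger(desc, irq);
        }
}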
> Obviously, I should've just ignored your email since you know everything
> already.
Bah, don't take it that way please ! I was making a joke ...
Cheers,
Ben.