This patch fixes races between irq_enter() <-> wait_on_irq() and
irq_enter() <-> synchronize_irq().
1. Archs which use bitops or spinlock for global_irq_lock
- adds smp_mb() to [hard_]irq_enter()
- adds smp_mb() to synchronize_irq()
=> alpha, i386, ia64, mips, mips64, parisc, x86_64
=> ppc also has atomic_inc() in hard_irq_enter(), so
smp_mb__after_atomic_inc() is used instead.
2. Archs which use brlock for global_irq_lock
- adds smp_mb() to synchronize_irq()
=> ppc64, sparc, sparc64
3. Archs with no SMP support
- nothing needed
=> arm, cris, generic, m68k, sh, sh64
4. Archs which don't seem to need irq_enter() synchronization to
disable irqs on other CPUs.
- I don't know. Add smp_mb() to synchronize_irq()?
=> s390, s390x
Please comment on how it should be done on s390 and s390x and point
out if anything is wrong.
--
tejun
Oops, the patch is against 2.4.22-rc4. Sorry :-)
--
tejun