2007-02-08 20:34:03

by Heiko Carstens

Subject: [patch] i386/x86_64: smp_call_function locking inconsistency

On i386/x86_64 smp_call_function_single() takes call_lock with
spin_lock_bh(). To me this would imply that it is legal to call
smp_call_function_single() from softirq context.
It's not since smp_call_function() takes call_lock with just
spin_lock(). We can easily deadlock:

-> [process context]
-> smp_call_function()
-> spin_lock(&call_lock)
-> IRQ -> do_softirq -> tasklet
-> [softirq context]
-> smp_call_function_single()
-> spin_lock_bh(&call_lock)
-> dead

So either all spin_lock_bh's should be converted to spin_lock,
which would limit smp_call_function()/smp_call_function_single()
to process context & irqs enabled.
Or the spin_lock's could be converted to spin_lock_bh which would
make it possible to call these two functions even if in softirq
context. AFAICS this should be safe.

Just stumbled across this since we have the same inconsistency
on s390 and our new iucv driver makes use of smp_call_function
in softirq context.

The patch below converts the spin_lock's in i386/x86_64 to
spin_lock_bh, so it would be consistent with s390.

Patch is _not_ compile tested.

Cc: Andi Kleen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
---
arch/i386/kernel/smp.c | 8 ++++----
arch/x86_64/kernel/smp.c | 10 +++++-----
2 files changed, 9 insertions(+), 9 deletions(-)

Index: linux-2.6/arch/i386/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/i386/kernel/smp.c
+++ linux-2.6/arch/i386/kernel/smp.c
@@ -527,7 +527,7 @@ static struct call_data_struct *call_dat
* remote CPUs are nearly ready to execute <<func>> or are or have executed.
*
* You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
+ * hardware interrupt handler.
*/
int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
int wait)
@@ -536,10 +536,10 @@ int smp_call_function (void (*func) (voi
int cpus;

/* Holding any lock stops cpus from going down. */
- spin_lock(&call_lock);
+ spin_lock_bh(&call_lock);
cpus = num_online_cpus() - 1;
if (!cpus) {
- spin_unlock(&call_lock);
+ spin_unlock_bh(&call_lock);
return 0;
}

@@ -566,7 +566,7 @@ int smp_call_function (void (*func) (voi
if (wait)
while (atomic_read(&data.finished) != cpus)
cpu_relax();
- spin_unlock(&call_lock);
+ spin_unlock_bh(&call_lock);

return 0;
}
Index: linux-2.6/arch/x86_64/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/x86_64/kernel/smp.c
+++ linux-2.6/arch/x86_64/kernel/smp.c
@@ -439,15 +439,15 @@ static void __smp_call_function (void (*
* remote CPUs are nearly ready to execute func or are or have executed.
*
* You must not call this function with disabled interrupts or from a
- * hardware interrupt handler or from a bottom half handler.
+ * hardware interrupt handler.
* Actually there are a few legal cases, like panic.
*/
int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
int wait)
{
- spin_lock(&call_lock);
+ spin_lock_bh(&call_lock);
__smp_call_function(func,info,nonatomic,wait);
- spin_unlock(&call_lock);
+ spin_unlock_bh(&call_lock);
return 0;
}
EXPORT_SYMBOL(smp_call_function);
@@ -477,13 +477,13 @@ void smp_send_stop(void)
if (reboot_force)
return;
/* Don't deadlock on the call lock in panic */
- if (!spin_trylock(&call_lock)) {
+ if (!spin_trylock_bh(&call_lock)) {
/* ignore locking because we have panicked anyways */
nolock = 1;
}
__smp_call_function(smp_really_stop_cpu, NULL, 0, 0);
if (!nolock)
- spin_unlock(&call_lock);
+ spin_unlock_bh(&call_lock);

local_irq_disable();
disable_local_APIC();


2007-02-08 20:43:30

by David Miller

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

From: Heiko Carstens <[email protected]>
Date: Thu, 8 Feb 2007 21:32:10 +0100

> So either all spin_lock_bh's should be converted to spin_lock,
> which would limit smp_call_function()/smp_call_function_single()
> to process context & irqs enabled.
> Or the spin_lock's could be converted to spin_lock_bh which would
> make it possible to call these two functions even if in softirq
> context. AFAICS this should be safe.

Sparc64 does not allow smp_call_function() from software interrupts
either.

Same for powerpc, mips, alpha.

IA-64 has the inconsistency you've spotted between smp_call_function()
and smp_call_function_single().

In short, it's a mess :-)

I think it's logically simpler if we disallow smp_call_function*()
from any kind of asynchronous context. But I'm sure your driver
has a true need for this for some reason.

2007-02-09 07:40:27

by Andi Kleen

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

On Thursday 08 February 2007 21:32, Heiko Carstens wrote:
> On i386/x86_64 smp_call_function_single() takes call_lock with
> spin_lock_bh(). To me this would imply that it is legal to call
> smp_call_function_single() from softirq context.
> It's not since smp_call_function() takes call_lock with just
> spin_lock(). We can easily deadlock:
>
> -> [process context]
> -> smp_call_function()
> -> spin_lock(&call_lock)
> -> IRQ -> do_softirq -> tasklet
> -> [softirq context]
> -> smp_call_function_single()
> -> spin_lock_bh(&call_lock)
> -> dead
>
> So either all spin_lock_bh's should be converted to spin_lock,
> which would limit smp_call_function()/smp_call_function_single()
> to process context & irqs enabled.
> Or the spin_lock's could be converted to spin_lock_bh which would
> make it possible to call these two functions even if in softirq
> context. AFAICS this should be safe.

I'm not so sure. Perhaps drop _bh in both and stick a WARN_ON_ONCE in
to catch the cases?

-Andi
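
As an illustration only, here is a minimal sketch of what Andi's suggestion could look like on the i386 side, reusing the smp_call_function() body shown in the patch above; the exact WARN_ON_ONCE() condition and its placement are assumptions, not code from this thread:

int smp_call_function (void (*func) (void *info), void *info, int nonatomic,
			int wait)
{
	int cpus;

	/* sketch: callers must be in process context with interrupts enabled */
	WARN_ON_ONCE(irqs_disabled() || in_interrupt());

	/* plain spin_lock, no _bh; holding any lock stops cpus from going down */
	spin_lock(&call_lock);
	cpus = num_online_cpus() - 1;
	if (!cpus) {
		spin_unlock(&call_lock);
		return 0;
	}

	/* ... set up call_data and send the IPI, as in the existing code ... */

	spin_unlock(&call_lock);
	return 0;
}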

2007-02-09 08:43:33

by Heiko Carstens

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

On Thu, Feb 08, 2007 at 12:43:28PM -0800, David Miller wrote:
> From: Heiko Carstens <[email protected]>
> Date: Thu, 8 Feb 2007 21:32:10 +0100
>
> > So either all spin_lock_bh's should be converted to spin_lock,
> > which would limit smp_call_function()/smp_call_function_single()
> > to process context & irqs enabled.
> > Or the spin_lock's could be converted to spin_lock_bh which would
> > make it possible to call these two functions even if in softirq
> > context. AFAICS this should be safe.
> [...]
> In short, it's a mess :-)
>
> I think it's logically simpler if we disallow smp_call_function*()
> from any kind of asynchronous context. But I'm sure your driver
> has a true need for this for some reason.

I just want to avoid that s390 has different semantics for
smp_call_function*() than any other architecture. But then again it
will probably not hurt since we allow more.
Another thing that comes into my mind is smp_call_function together
with cpu hotplug. Who is responsible that preemption and with that
cpu hotplug is disabled?
Is it the caller or smp_call_function itself?
If it's smp_call_function then s390 would be broken, since
then we would have
int cpus = num_online_cpus()-1;
in preemptible context... I agree: what a mess :)

2007-02-09 12:58:09

by Jan Glauber

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

On Fri, 2007-02-09 at 09:42 +0100, Heiko Carstens wrote:
> I just want to avoid that s390 has different semantics for
> smp_call_function*() than any other architecture. But then again it
> will probably not hurt since we allow more.
> Another thing that comes into my mind is smp_call_function together
> with cpu hotplug. Who is responsible that preemption and with that
> cpu hotplug is disabled?
> Is it the caller or smp_call_function itself?

I think the caller must disable preemption, since smp_call_function()
means "do something on all but the current cpu". If the preempt_disable
happened only inside smp_call_function(), the caller could already have been
migrated to a different cpu by then, which is not what the caller wants.

If preemption must be disabled before smp_call_function() we should have
the same semantics for all smp_call_function_* variants.

Jan

2007-06-07 14:07:22

by Satyam Sharma

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

Hi,

I'm about six months late here(!), but I noticed this bug in
arch/x86_64/kernel/smp.c while preparing another related
patch today and then found this thread during Googling ...

On 2/9/07, Heiko Carstens <[email protected]> wrote:
> On i386/x86_64 smp_call_function_single() takes call_lock with
> spin_lock_bh(). To me this would imply that it is legal to call
> smp_call_function_single() from softirq context.
> It's not since smp_call_function() takes call_lock with just
> spin_lock(). We can easily deadlock:
>
> -> [process context]
> -> smp_call_function()
> -> spin_lock(&call_lock)
> -> IRQ -> do_softirq -> tasklet
> -> [softirq context]
> -> smp_call_function_single()
> -> spin_lock_bh(&call_lock)
> -> dead

You're absolutely right, and this bug still exists in the latest -git.

> So either all spin_lock_bh's should be converted to spin_lock,
> which would limit smp_call_function()/smp_call_function_single()
> to process context & irqs enabled.
> Or the spin_lock's could be converted to spin_lock_bh which would
> make it possible to call these two functions even if in softirq
> context. AFAICS this should be safe.

Actually, I agree with David and Andi here:

On 2/9/07, David Miller <[email protected]> wrote:
> I think it's logically simpler if we disallow smp_call_function*()
> from any kind of asynchronous context. But I'm sure your driver
> has a true need for this for some reason.

and

On 2/9/07, Andi Kleen <[email protected]> wrote:
> I'm not so sure. Perhaps drop _bh in both and stick a WARN_ON_ONCE in
> to catch the cases?

Replacing the _bh variants and making smp_call_function{_single}
illegal from all contexts but process is fine for x86_64, as we
don't really have any driver that needs to use this from softirq
context in the x86_64 tree. This means it becomes dissimilar to
s390, but similar to powerpc, mips, alpha, sparc64 semantics.
I'll prepare and submit a patch for the same, shortly.

As for:

On 2/9/07, Heiko Carstens <[email protected]> wrote:
> Another thing that comes into my mind is smp_call_function together
> with cpu hotplug. Who is responsible that preemption and with that
> cpu hotplug is disabled?
> Is it the caller or smp_call_function itself?
> If it's smp_call_function then s390 would be broken, since
> then we would have
> int cpus = num_online_cpus()-1;
> in preemptible context... I agree: what a mess :)

and

On 2/9/07, Jan Glauber <[email protected]> wrote:
> If preemption must be disabled before smp_call_function() we should have
> the same semantics for all smp_call_function_* variants.

I don't see any CPU hotplug / preemption disabling issues here.
Note that both smp_call_function() and smp_call_function_single()
on x86_64 acquire the call_lock spinlock before using cpu_online_map
via num_online_cpus(). And spin_lock() does preempt_disable() on both
SMP and !SMP, so we're safe. [ But we're not explicitly disabling
preemption and depending on spin_lock() instead, so a comment would
be in order? ]

Satyam
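
Purely as a sketch, the comment hinted at above might look something like this in smp_call_function(); the wording and placement are assumptions, not taken from any posted patch:

	/*
	 * call_lock also disables preemption, so cpu_online_map cannot
	 * change between reading num_online_cpus() and sending the IPIs.
	 */
	spin_lock(&call_lock);
	cpus = num_online_cpus() - 1;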

2007-06-07 16:28:19

by Heiko Carstens

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

> >So either all spin_lock_bh's should be converted to spin_lock,
> >which would limit smp_call_function()/smp_call_function_single()
> >to process context & irqs enabled.
> >Or the spin_lock's could be converted to spin_lock_bh which would
> >make it possible to call these two functions even if in softirq
> >context. AFAICS this should be safe.
>
> Actually, I agree with David and Andi here:
>
> On 2/9/07, David Miller <[email protected]> wrote:
> >I think it's logically simpler if we disallow smp_call_function*()
> >from any kind of asynchronous context. But I'm sure your driver
> >has a true need for this for some reason.
>
> and
>
> On 2/9/07, Andi Kleen <[email protected]> wrote:
> >I'm not so sure. Perhaps drop _bh in both and stick a WARN_ON_ONCE in
> >to catch the cases?
>
> Replacing the _bh variants and making smp_call_function{_single}
> illegal from all contexts but process is fine for x86_64, as we
> don't really have any driver that needs to use this from softirq
> context in the x86_64 tree. This means it becomes dissimilar to
> s390, but similar to powerpc, mips, alpha, sparc64 semantics.
> I'll prepare and submit a patch for the same, shortly.

Calling an smp_call_* function from any context but process context is
a bug. We didn't notice this initially when we used smp_call_function
from softirq context... until we deadlocked ;)
So s390 is the same as any other architecture wrt this.

> On 2/9/07, Heiko Carstens <[email protected]> wrote:
> >Another thing that comes into my mind is smp_call_function together
> >with cpu hotplug. Who is responsible that preemption and with that
> >cpu hotplug is disabled?
> >Is it the caller or smp_call_function itself?
> >If it's smp_call_function then s390 would be broken, since
> >then we would have
> >int cpus = num_online_cpus()-1;
> >in preemptible context... I agree: what a mess :)
>
> and
>
> On 2/9/07, Jan Glauber <[email protected]> wrote:
> >If preemption must be disabled before smp_call_function() we should have
> >the same semantics for all smp_call_function_* variants.
>
> I don't see any CPU hotplug / preemption disabling issues here.
> Note that both smp_call_function() and smp_call_function_single()
> on x86_64 acquire the call_lock spinlock before using cpu_online_map
> via num_online_cpus(). And spin_lock() does preempt_disable() on both
> SMP and !SMP, so we're safe. [ But we're not explicitly disabling
> preemption and depending on spin_lock() instead, so a comment would
> be in order? ]

Calling smp_call_function_single() with preemption enabled is pointless.
You might be scheduled on the cpu you want to send an IPI to and get
-EBUSY as return... If cpu hotplug is enabled the target cpu might even
be gone when smp_call_function_single() gets executed.

Avi Kivity has already a patch which introduces an on_cpu() function which
looks quite like on_each_cpu(). That way you don't have to open code this
stuff over and over again:

preempt_disable();
if (cpu == smp_processor_id())
func();
else
smp_call_function_single(...);
preempt_enable();

There are already quite a few of these around.
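
For reference, a rough sketch of what such an on_cpu()/on_one_cpu() wrapper might look like on the SMP side; the name and signature follow the patch fragment quoted later in the thread, and the body is an assumption based on the open-coded pattern above:

int on_one_cpu(int cpu, void (*func)(void *info), void *info,
		int retry, int wait)
{
	int ret = 0;

	preempt_disable();
	if (cpu == smp_processor_id()) {
		/* run locally with interrupts off, as an IPI handler would */
		local_irq_disable();
		func(info);
		local_irq_enable();
	} else
		ret = smp_call_function_single(cpu, func, info, retry, wait);
	preempt_enable();
	return ret;
}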

2007-06-07 17:05:23

by Satyam Sharma

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

Hi Heiko,

On 6/7/07, Heiko Carstens <[email protected]> wrote:
> > Replacing the _bh variants and making smp_call_function{_single}
> > illegal from all contexts but process is fine for x86_64, as we
> > don't really have any driver that needs to use this from softirq
> > context in the x86_64 tree. This means it becomes dissimilar to
> > s390, but similar to powerpc, mips, alpha, sparc64 semantics.
> > I'll prepare and submit a patch for the same, shortly.
>
> Calling an smp_call_* function from any context but process context is
> a bug. We didn't notice this initially when we used smp_call_function
> from softirq context... until we deadlocked ;)
> So s390 is the same as any other architecture wrt this.

I'll fix the necessary patch for x86_64, in that case.

> > I don't see any CPU hotplug / preemption disabling issues here.
> > Note that both smp_call_function() and smp_call_function_single()
> > on x86_64 acquire the call_lock spinlock before using cpu_online_map
> > via num_online_cpus(). And spin_lock() does preempt_disable() on both
> > SMP and !SMP, so we're safe. [ But we're not explicitly disabling
> > preemption and depending on spin_lock() instead, so a comment would
> > be in order? ]
>
> Calling smp_call_function_single() with preemption enabled is pointless.
> You might be scheduled on the cpu you want to send an IPI to and get
> -EBUSY as return... If cpu hotplug is enabled the target cpu might even
> be gone when smp_call_function_single() gets executed.

Exactly. I was only mentioning that the smp_call_function* variants
on x86{_64} were safe anyway (but the smp_processor_id()
that would've preceded them need not have been, of course).

> Avi Kivity has already a patch which introduces an on_cpu() function which
> looks quite like on_each_cpu(). That way you don't have to open code this
> stuff over and over again:
>
> preempt_disable();
> if (cpu == smp_processor_id())
> func();
> else
> smp_call_function_single(...);
> preempt_enable();
>
> There are already quite a few of these around.

Indeed -- this was doubly problematic because the un-safeness
was because of smp_processor_id() as well as the (eventual)
access of cpu_online_map (via smp_call_function() ->
num_online_cpus()) ... thanks for letting me know about this.

Satyam

2007-06-07 17:19:20

by Satyam Sharma

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

> On 6/7/07, Heiko Carstens <[email protected]> wrote:
> [...]
> > Avi Kivity has already a patch which introduces an on_cpu() function which
> > looks quite like on_each_cpu(). That way you don't have to open code this
> > stuff over and over again:
> >
> > preempt_disable();
> > if (cpu == smp_processor_id())
> > func();
> > else
> > smp_call_function_single(...);
> > preempt_enable();
> >
> > There are already quite a few of these around.
>
> Indeed -- this was doubly problematic because the un-safeness
> was because of smp_processor_id() as well as the (eventual)
> access of cpu_online_map (via smp_call_function() ->
> num_online_cpus()) ... thanks for letting me know about this.

Oh wait, the on_one_cpu() patch proposes on UP:

+static inline int on_one_cpu(int cpu, void (*func)(void *info), void *info,
+ int retry, int wait)
+{

/* this needs a if (cpu == 0) check here, IMO */

+ local_irq_disable();
+ func(info);
+ local_irq_enable();
+ return 0;

/* else WARN and return -EINVAL; */

+}

which is broken without the suggested additions, IMHO
(this is what got me into this in the first place). There
_is_ a difference between on_each_cpu() and the
smp_call_function* semantics (as discussed on the other
thread -- gargh! my mistake for opening this discussion up
on so many threads), and in its current form on_one_cpu()
has quite confused semantics, trying to mix the two. I guess
on_one_cpu() would be better off simply being just an
atomic wrapper over smp_processor_id() and
smp_call_function_single() (which is the *real* issue that
needs solving in the first place), and do it well.

Satyam
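
Put together, the UP stub with the suggested check might look roughly like this; the WARN_ON()/-EINVAL handling is the addition being proposed here, not code from the original on_one_cpu() patch:

static inline int on_one_cpu(int cpu, void (*func)(void *info), void *info,
			     int retry, int wait)
{
	if (cpu != 0) {
		WARN_ON(1);
		return -EINVAL;
	}
	local_irq_disable();
	func(info);
	local_irq_enable();
	return 0;
}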

2007-06-07 17:22:52

by Avi Kivity

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

Satyam Sharma wrote:
>
> Oh wait, the on_one_cpu() patch proposes on UP:
>
> +static inline int on_one_cpu(int cpu, void (*func)(void *info), void
> *info,
> + int retry, int wait)
> +{
>
> /* this needs a if (cpu == 0) check here, IMO */
>
> + local_irq_disable();
> + func(info);
> + local_irq_enable();
> + return 0;
>
> /* else WARN and return -EINVAL; */
>
> +}
>
> which is broken without the suggested additions, IMHO
> (this is what got me into this in the first place). There
> _is_ a difference between on_each_cpu() and the
> smp_call_function* semantics (as discussed on the other
> thread -- gargh! my mistake for opening this discussion up
> on so many threads), and in its current form on_one_cpu()
> has quite confused semantics, trying to mix the two. I guess
> on_one_cpu() would be better off simply being just an
> atomic wrapper over smp_processor_id() and
> smp_call_function_single() (which is the *real* issue that
> needs solving in the first place), and do it well.
>

This is on UP, so (cpu == 0) is trivially true.

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

2007-06-07 17:33:35

by Satyam Sharma

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

On 6/7/07, Avi Kivity <[email protected]> wrote:
> Satyam Sharma wrote:
> >
> > Oh wait, the on_one_cpu() patch proposes on UP:
> >
> > +static inline int on_one_cpu(int cpu, void (*func)(void *info), void
> > *info,
> > + int retry, int wait)
> > +{
> >
> > /* this needs a if (cpu == 0) check here, IMO */
> >
> > + local_irq_disable();
> > + func(info);
> > + local_irq_enable();
> > + return 0;
> >
> > /* else WARN and return -EINVAL; */
> >
> > +}
> >
> > which is broken without the suggested additions, IMHO
> > (this is what got me into this in the first place). There
> > _is_ a difference between on_each_cpu() and the
> > smp_call_function* semantics (as discussed on the other
> > thread -- gargh! my mistake for opening this discussion up
> > on so many threads), and in its current form on_one_cpu()
> > has quite confused semantics, trying to mix the two. I guess
> > on_one_cpu() would be better off simply being just an
> > atomic wrapper over smp_processor_id() and
> > smp_call_function_single() (which is the *real* issue that
> > needs solving in the first place), and do it well.
> >
>
> This is on UP, so (cpu == 0) is trivially true.

Yes, the caller code might derive the value for the cpu arg in
such a manner as to only ever yield 0 on UP. OTOH,
WARN_ON(!...)'s are often added for such assumptions that are
understood to be trivially true. Note that a warning for cpu != 0
would be perfectly justified, we'd clearly want to flag such
(errant) users.

Anyway, I guess another problem being tackled here is avoiding
#ifdef CONFIG_SMP guards around calls to smp_call_function* (and
thus on_cpu()) in kernel code(?) Not putting a WARN_ON() in
the UP cases could be useful to minimize noise, in that case. It
(and smp_call_function*) could still return -EINVAL for the invalid
cases, though.

2007-06-08 19:44:20

by Andi Kleen

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency


> Indeed -- this was doubly problematic because the un-safeness
> was because of smp_processor_id() as well as the (eventual)
> access of cpu_online_map (via smp_call_function() ->
> num_online_cpus()) ... thanks for letting me know about this.

CPU hotplug is being changed to use the freezer anyway. It is known
that the current handling of these cpu maps with respect to preemption has
various holes, and this little case here won't make it much worse overall.

-Andi

2007-06-08 19:44:40

by Andi Kleen

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

On Thursday 07 June 2007 16:07:04 Satyam Sharma wrote:
> Hi,
>
> I'm about six months late here(!), but I noticed this bug in
> arch/x86_64/kernel/smp.c while preparing another related
> patch today and then found this thread during Googling ...
>
> On 2/9/07, Heiko Carstens <[email protected]> wrote:
> > On i386/x86_64 smp_call_function_single() takes call_lock with
> > spin_lock_bh(). To me this would imply that it is legal to call
> > smp_call_function_single() from softirq context.
> > It's not since smp_call_function() takes call_lock with just
> > spin_lock(). We can easily deadlock:
> >
> > -> [process context]
> > -> smp_call_function()
> > -> spin_lock(&call_lock)
> > -> IRQ -> do_softirq -> tasklet
> > -> [softirq context]
> > -> smp_call_function_single()
> > -> spin_lock_bh(&call_lock)
> > -> dead
>
> You're absolutely right, and this bug still exists in the latest -git.

"Bug" is definitely too strong a word. It might be unnecessary to disable bhs,
but I don't see any bug in here as long as you can't show a case where
smp_call_function() is called from BHs.

There was a patch floating around to use it from sysrq to display state
of all CPUs (and sysrq is softirq), but I don't think that ever
made it mainline.

And smp_call_function() can be called from panic which can violate
quite some assumptions, but some deadlock possibility there is ok.

I also don't like making it soft/hard irq safe because that would
make it much more intrusive on the machine for no good reason
(e.g. slab can call it quite often in some cases).

The _bh should probably just be removed and possibly a WARN_ON added.

-Andi

2007-06-10 07:38:57

by Avi Kivity

Subject: Re: [patch] i386/x86_64: smp_call_function locking inconsistency

Satyam Sharma wrote:
> On 6/7/07, Avi Kivity <[email protected]> wrote:
>> Satyam Sharma wrote:
>> >
>> > Oh wait, the on_one_cpu() patch proposes on UP:
>> >
>> > +static inline int on_one_cpu(int cpu, void (*func)(void *info), void
>> > *info,
>> > + int retry, int wait)
>> > +{
>> >
>> > /* this needs a if (cpu == 0) check here, IMO */
>> >
>> > + local_irq_disable();
>> > + func(info);
>> > + local_irq_enable();
>> > + return 0;
>> >
>> > /* else WARN and return -EINVAL; */
>> >
>> > +}
>> >
>> > which is broken without the suggested additions, IMHO
>> > (this is what got me into this in the first place). There
>> > _is_ a difference between on_each_cpu() and the
>> > smp_call_function* semantics (as discussed on the other
>> > thread -- gargh! my mistake for opening this discussion up
>> > on so many threads), and in its current form on_one_cpu()
>> > has quite confused semantics, trying to mix the two. I guess
>> > on_one_cpu() would be better off simply being just an
>> > atomic wrapper over smp_processor_id() and
>> > smp_call_function_single() (which is the *real* issue that
>> > needs solving in the first place), and do it well.
>> >
>>
>> This is on UP, so (cpu == 0) is trivially true.
>
> Yes, the caller code might derive the value for the cpu arg in
> such a manner as to only ever yield 0 on UP. OTOH,
> WARN_ON(!...)'s are often added for such assumptions that are
> understood to be trivially true. Note that a warning for cpu != 0
> would be perfectly justified, we'd clearly want to flag such
> (errant) users.

We don't catch incorrect cpu values in smp (because we can't); there's
no reason to do so in UP IMO.

--
error compiling committee.c: too many arguments to function