As discussed with Srivatsa [1], it seems there is no need to keep
ipi_call_[un]lock_irq() during cpu bring-up/down, because:
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
In short, keeping ipi_call_[un]lock() is most likely a leftover from
before the generic smp helpers were introduced. This patchset just
cleans things up.
[1] http://marc.info/?l=linux-kernel&m=133766786814484&w=2
Yong Zhang (10):
hexagon: smp: remove call to ipi_call_lock()/ipi_call_unlock()
mn10300: smp: remove call to ipi_call_lock()/ipi_call_unlock()
parisc: smp: remove call to ipi_call_lock()/ipi_call_unlock()
S390: smp: remove call to ipi_call_lock()/ipi_call_unlock()
tile: smp: remove call to ipi_call_lock()/ipi_call_unlock()
x86: smp: remove call to ipi_call_lock()/ipi_call_unlock()
ia64: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
SPARC: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
POWERPC: smp: remove call to ipi_call_lock()/ipi_call_unlock()
smp: remove ipi_call_lock[_irq]()/ipi_call_unlock[_irq]()
arch/hexagon/kernel/smp.c | 2 --
arch/ia64/kernel/smpboot.c | 2 --
arch/mn10300/kernel/smp.c | 2 --
arch/parisc/kernel/smp.c | 2 --
arch/powerpc/kernel/smp.c | 2 --
arch/s390/kernel/smp.c | 2 --
arch/sparc/kernel/smp_64.c | 6 ++----
arch/tile/kernel/smpboot.c | 10 ----------
arch/x86/kernel/smpboot.c | 9 ---------
arch/x86/xen/smp.c | 2 --
include/linux/smp.h | 4 ----
kernel/smp.c | 20 --------------------
12 files changed, 2 insertions(+), 61 deletions(-)
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Richard Kuo <[email protected]>
Cc: [email protected]
---
arch/hexagon/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index f726462..149fbef 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -180,9 +180,7 @@ void __cpuinit start_secondary(void)
notify_cpu_starting(cpu);
- ipi_call_lock();
set_cpu_online(cpu, true);
- ipi_call_unlock();
local_irq_enable();
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: David Howells <[email protected]>
Cc: Koichi Yasutake <[email protected]>
Cc: [email protected]
---
arch/mn10300/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
index 090d35d..e62c223 100644
--- a/arch/mn10300/kernel/smp.c
+++ b/arch/mn10300/kernel/smp.c
@@ -876,9 +876,7 @@ static void __init smp_online(void)
notify_cpu_starting(cpu);
- ipi_call_lock();
set_cpu_online(cpu, true);
- ipi_call_unlock();
local_irq_enable();
}
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: [email protected]
---
arch/parisc/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
index a47828d..6266730 100644
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -300,9 +300,7 @@ smp_cpu_init(int cpunum)
notify_cpu_starting(cpunum);
- ipi_call_lock();
set_cpu_online(cpunum, true);
- ipi_call_unlock();
/* Initialise the idle task for this CPU */
atomic_inc(&init_mm.mm_count);
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/s390/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 647ba94..14c8a08 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -716,9 +716,7 @@ static void __cpuinit smp_start_secondary(void *cpuvoid)
init_cpu_vtimer();
pfault_init();
notify_cpu_starting(smp_processor_id());
- ipi_call_lock();
set_cpu_online(smp_processor_id(), true);
- ipi_call_unlock();
local_irq_enable();
/* cpu_idle will call schedule for us */
cpu_idle();
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Chris Metcalf <[email protected]>
---
arch/tile/kernel/smpboot.c | 10 ----------
1 files changed, 0 insertions(+), 10 deletions(-)
diff --git a/arch/tile/kernel/smpboot.c b/arch/tile/kernel/smpboot.c
index 84873fb..e686c5a 100644
--- a/arch/tile/kernel/smpboot.c
+++ b/arch/tile/kernel/smpboot.c
@@ -198,17 +198,7 @@ void __cpuinit online_secondary(void)
notify_cpu_starting(smp_processor_id());
- /*
- * We need to hold call_lock, so there is no inconsistency
- * between the time smp_call_function() determines number of
- * IPI recipients, and the time when the determination is made
- * for which cpus receive the IPI. Holding this
- * lock helps us to not include this cpu in a currently in progress
- * smp_call_function().
- */
- ipi_call_lock();
set_cpu_online(smp_processor_id(), 1);
- ipi_call_unlock();
__get_cpu_var(cpu_state) = CPU_ONLINE;
/* Set up tile-specific state for this cpu. */
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
---
arch/x86/kernel/smpboot.c | 9 ---------
arch/x86/xen/smp.c | 2 --
2 files changed, 0 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 433529e..bfbe30e 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -253,22 +253,13 @@ notrace static void __cpuinit start_secondary(void *unused)
check_tsc_sync_target();
/*
- * We need to hold call_lock, so there is no inconsistency
- * between the time smp_call_function() determines number of
- * IPI recipients, and the time when the determination is made
- * for which cpus receive the IPI. Holding this
- * lock helps us to not include this cpu in a currently in progress
- * smp_call_function().
- *
* We need to hold vector_lock so there the set of online cpus
* does not change while we are assigning vectors to cpus. Holding
* this lock ensures we don't half assign or remove an irq from a cpu.
*/
- ipi_call_lock();
lock_vector_lock();
set_cpu_online(smp_processor_id(), true);
unlock_vector_lock();
- ipi_call_unlock();
per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
x86_platform.nmi_init();
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index afb250d..f58dca7 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -80,9 +80,7 @@ static void __cpuinit cpu_bringup(void)
notify_cpu_starting(cpu);
- ipi_call_lock();
set_cpu_online(cpu, true);
- ipi_call_unlock();
this_cpu_write(cpu_state, CPU_ONLINE);
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: [email protected]
---
arch/ia64/kernel/smpboot.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 1113b8a..963d2db 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -382,7 +382,6 @@ smp_callin (void)
set_numa_node(cpu_to_node_map[cpuid]);
set_numa_mem(local_memory_node(cpu_to_node_map[cpuid]));
- ipi_call_lock_irq();
spin_lock(&vector_lock);
/* Setup the per cpu irq handling data structures */
__setup_vector_irq(cpuid);
@@ -390,7 +389,6 @@ smp_callin (void)
set_cpu_online(cpuid, true);
per_cpu(cpu_state, cpuid) = CPU_ONLINE;
spin_unlock(&vector_lock);
- ipi_call_unlock_irq();
smp_setup_percpu_timer();
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
---
arch/sparc/kernel/smp_64.c | 6 ++----
1 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index f591598..60e745c 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -124,9 +124,9 @@ void __cpuinit smp_callin(void)
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
rmb();
- ipi_call_lock_irq();
+ local_irq_disable();
set_cpu_online(cpuid, true);
- ipi_call_unlock_irq();
+ local_irq_enable();
/* idle thread is expected to have preempt disabled */
preempt_disable();
@@ -1308,9 +1308,7 @@ int __cpu_disable(void)
mdelay(1);
local_irq_disable();
- ipi_call_lock();
set_cpu_online(cpu, false);
- ipi_call_unlock();
cpu_map_rebuild();
--
1.7.5.4
From: Yong Zhang <[email protected]>
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: [email protected]
---
arch/powerpc/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index e4cb343..e1417c4 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -571,7 +571,6 @@ void __devinit start_secondary(void *unused)
if (system_state == SYSTEM_RUNNING)
vdso_data->processorCount++;
#endif
- ipi_call_lock();
notify_cpu_starting(cpu);
set_cpu_online(cpu, true);
/* Update sibling maps */
@@ -601,7 +600,6 @@ void __devinit start_secondary(void *unused)
of_node_put(np);
}
of_node_put(l2_cache);
- ipi_call_unlock();
local_irq_enable();
--
1.7.5.4
From: Yong Zhang <[email protected]>
There is no user of those APIs anymore, just remove it.
Signed-off-by: Yong Zhang <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Thomas Gleixner <[email protected]>
---
include/linux/smp.h | 4 ----
kernel/smp.c | 20 --------------------
2 files changed, 0 insertions(+), 24 deletions(-)
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 717fb74..a34d4f1 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -90,10 +90,6 @@ void kick_all_cpus_sync(void);
void __init call_function_init(void);
void generic_smp_call_function_single_interrupt(void);
void generic_smp_call_function_interrupt(void);
-void ipi_call_lock(void);
-void ipi_call_unlock(void);
-void ipi_call_lock_irq(void);
-void ipi_call_unlock_irq(void);
#else
static inline void call_function_init(void) { }
#endif
diff --git a/kernel/smp.c b/kernel/smp.c
index d0ae5b2..29dd40a 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -581,26 +581,6 @@ int smp_call_function(smp_call_func_t func, void *info, int wait)
return 0;
}
EXPORT_SYMBOL(smp_call_function);
-
-void ipi_call_lock(void)
-{
- raw_spin_lock(&call_function.lock);
-}
-
-void ipi_call_unlock(void)
-{
- raw_spin_unlock(&call_function.lock);
-}
-
-void ipi_call_lock_irq(void)
-{
- raw_spin_lock_irq(&call_function.lock);
-}
-
-void ipi_call_unlock_irq(void)
-{
- raw_spin_unlock_irq(&call_function.lock);
-}
#endif /* USE_GENERIC_SMP_HELPERS */
/* Setup configured maximum number of CPUs to activate */
--
1.7.5.4
On 05/29/2012 12:45 PM, Yong Zhang wrote:
> As discussed with Srivatsa [1], it seems there is no need to keep
> ipi_call_[un]lock_irq() during cpu bring-up/down, because:
>
> 1) call_function.lock, as used in smp_call_function_many(), only protects
> call_function.queue and &data->refs; cpu_online_mask is accessed outside
> the lock. And it's not necessary to protect cpu_online_mask, because
> data->cpumask is pre-calculated, and even if a cpu is brought up while
> arch_send_call_function_ipi_mask() is being called, that is harmless: the
> validation test in generic_smp_call_function_interrupt() will take care
> of it.
>
> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
> smp_call_function() is in flight.
>
> In short, keeping ipi_call_[un]lock() is most likely a leftover from
> before the generic smp helpers were introduced. This patchset just
> cleans things up.
>
> [1] http://marc.info/?l=linux-kernel&m=133766786814484&w=2
>
> Yong Zhang (10):
> hexagon: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> mn10300: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> parisc: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> S390: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> tile: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> x86: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> ia64: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
> SPARC: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
> POWERPC: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> smp: remove ipi_call_lock[_irq]()/ipi_call_unlock[_irq]()
>
All patches except the sparc one look good.
Acked-by: Srivatsa S. Bhat <[email protected]>
Regards,
Srivatsa S. Bhat
On 05/29/2012 12:46 PM, Yong Zhang wrote:
> From: Yong Zhang <[email protected]>
>
> 1) call_function.lock, as used in smp_call_function_many(), only protects
> call_function.queue and &data->refs; cpu_online_mask is accessed outside
> the lock. And it's not necessary to protect cpu_online_mask, because
> data->cpumask is pre-calculated, and even if a cpu is brought up while
> arch_send_call_function_ipi_mask() is being called, that is harmless: the
> validation test in generic_smp_call_function_interrupt() will take care
> of it.
>
> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
> smp_call_function() is in flight.
>
> Signed-off-by: Yong Zhang <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: [email protected]
> ---
> arch/sparc/kernel/smp_64.c | 6 ++----
> 1 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
> index f591598..60e745c 100644
> --- a/arch/sparc/kernel/smp_64.c
> +++ b/arch/sparc/kernel/smp_64.c
> @@ -124,9 +124,9 @@ void __cpuinit smp_callin(void)
> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
> rmb();
>
> - ipi_call_lock_irq();
> + local_irq_disable();
This looks odd. IRQs must not have been enabled at this point.
Just remove the call to local_irq_enable() that is found a few lines above
this line and then you won't have to add this call to local_irq_disable().
> set_cpu_online(cpuid, true);
> - ipi_call_unlock_irq();
> + local_irq_enable();
>
> /* idle thread is expected to have preempt disabled */
> preempt_disable();
> @@ -1308,9 +1308,7 @@ int __cpu_disable(void)
> mdelay(1);
> local_irq_disable();
>
> - ipi_call_lock();
> set_cpu_online(cpu, false);
> - ipi_call_unlock();
>
> cpu_map_rebuild();
>
Regards,
Srivatsa S. Bhat
From: "Srivatsa S. Bhat" <[email protected]>
Date: Tue, 29 May 2012 13:31:54 +0530
> On 05/29/2012 12:46 PM, Yong Zhang wrote:
>
>> From: Yong Zhang <[email protected]>
>>
>> 1) call_function.lock, as used in smp_call_function_many(), only protects
>> call_function.queue and &data->refs; cpu_online_mask is accessed outside
>> the lock. And it's not necessary to protect cpu_online_mask, because
>> data->cpumask is pre-calculated, and even if a cpu is brought up while
>> arch_send_call_function_ipi_mask() is being called, that is harmless: the
>> validation test in generic_smp_call_function_interrupt() will take care
>> of it.
>>
>> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
>> smp_call_function() is in flight.
>>
>> Signed-off-by: Yong Zhang <[email protected]>
>> Cc: "David S. Miller" <[email protected]>
>> Cc: [email protected]
>> ---
>> arch/sparc/kernel/smp_64.c | 6 ++----
>> 1 files changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
>> index f591598..60e745c 100644
>> --- a/arch/sparc/kernel/smp_64.c
>> +++ b/arch/sparc/kernel/smp_64.c
>> @@ -124,9 +124,9 @@ void __cpuinit smp_callin(void)
>> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
>> rmb();
>>
>> - ipi_call_lock_irq();
>> + local_irq_disable();
>
>
> This looks odd. IRQs must not have been enabled at this point.
> Just remove the call to local_irq_enable() that is found a few lines above
> this line and then you won't have to add this call to local_irq_disable().
Agreed.
On Tue, May 29, 2012 at 01:31:54PM +0530, Srivatsa S. Bhat wrote:
> This looks odd. IRQs must not have been enabled at this point.
> Just remove the call to local_irq_enable() that is found a few lines above
> this line and then you won't have to add this call to local_irq_disable().
Yeah, I have thought about it. But because I was not sure whether there is
a special need to enable irqs that early (I don't know much about sparc),
I decided to make a minimal change :)
Since we have gotten confirmation from David, I'm sending out the
new version. Please check it.
Thanks,
Yong
---
From: Yong Zhang <[email protected]>
Date: Tue, 29 May 2012 12:56:08 +0800
Subject: [UPDATED] [RFC PATCH 8/10] SPARC: smp: remove call to
ipi_call_lock_irq()/ipi_call_unlock_irq()
1) call_function.lock, as used in smp_call_function_many(), only protects
call_function.queue and &data->refs; cpu_online_mask is accessed outside
the lock. And it's not necessary to protect cpu_online_mask, because
data->cpumask is pre-calculated, and even if a cpu is brought up while
arch_send_call_function_ipi_mask() is being called, that is harmless: the
validation test in generic_smp_call_function_interrupt() will take care
of it.
2) For the cpu-down case, stop_machine() guarantees that no concurrent
smp_call_function() is in flight.
Also delay enabling irqs until after set_cpu_online().
Signed-off-by: Yong Zhang <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
---
arch/sparc/kernel/smp_64.c | 7 +------
1 files changed, 1 insertions(+), 6 deletions(-)
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index f591598..781bcb1 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -103,8 +103,6 @@ void __cpuinit smp_callin(void)
if (cheetah_pcache_forced_on)
cheetah_enable_pcache();
- local_irq_enable();
-
callin_flag = 1;
__asm__ __volatile__("membar #Sync\n\t"
"flush %%g6" : : : "memory");
@@ -124,9 +122,8 @@ void __cpuinit smp_callin(void)
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
rmb();
- ipi_call_lock_irq();
set_cpu_online(cpuid, true);
- ipi_call_unlock_irq();
+ local_irq_enable();
/* idle thread is expected to have preempt disabled */
preempt_disable();
@@ -1308,9 +1305,7 @@ int __cpu_disable(void)
mdelay(1);
local_irq_disable();
- ipi_call_lock();
set_cpu_online(cpu, false);
- ipi_call_unlock();
cpu_map_rebuild();
--
1.7.5.4
On Tue, 2012-05-29 at 15:15 +0800, Yong Zhang wrote:
> As discussed with Srivatsa [1], it seems there is no need to keep
> ipi_call_[un]lock_irq() during cpu bring-up/down, because:
>
> > 1) call_function.lock, as used in smp_call_function_many(), only protects
> > call_function.queue and &data->refs; cpu_online_mask is accessed outside
> > the lock. And it's not necessary to protect cpu_online_mask, because
> > data->cpumask is pre-calculated, and even if a cpu is brought up while
> > arch_send_call_function_ipi_mask() is being called, that is harmless: the
> > validation test in generic_smp_call_function_interrupt() will take care
> > of it.
> >
> > 2) For the cpu-down case, stop_machine() guarantees that no concurrent
> > smp_call_function() is in flight.
But that lock was only taken around setting a cpu online, so the offline
case is pretty much irrelevant for these patches, right?
That said, is there an alternative to stop_machine on the down side?
I guess flipping the cpu offline and then doing synchronize_sched()
should suffice.
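Roughly, the stop_machine-free down path hinted at here could look like the
following pseudocode sketch. This is hypothetical; nothing in this thread
implements it, and the function names only gesture at the existing hotplug
code:

```
/* hypothetical sketch of a stop_machine-free cpu-down path */
take_cpu_down():
    local_irq_disable()
    set_cpu_online(cpu, false)   /* new smp_call_function_many() callers
                                    now compute masks without this cpu */

_cpu_down(), caller context:
    synchronize_sched()          /* waits out every in-flight
                                    smp_call_function_many() that sampled
                                    the old cpu_online_mask, since such
                                    callers run with preemption disabled */
    /* after this, no cross-call can still target the dead cpu */
```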
On 05/29/2012 01:57 PM, Yong Zhang wrote:
> On Tue, May 29, 2012 at 01:31:54PM +0530, Srivatsa S. Bhat wrote:
>> This looks odd. IRQs must not have been enabled at this point.
>> Just remove the call to local_irq_enable() that is found a few lines above
>> this line and then you won't have to add this call to local_irq_disable().
>
> Yeah, I have thought about it. But because I was not sure whether there is
> a special need to enable irqs that early (I don't know much about sparc),
> I decided to make a minimal change :)
>
> Since we have gotten confirmation from David, I'm sending out the
> new version. Please check it.
>
> Thanks,
> Yong
>
> ---
> From: Yong Zhang <[email protected]>
> Date: Tue, 29 May 2012 12:56:08 +0800
> Subject: [UPDATED] [RFC PATCH 8/10] SPARC: smp: remove call to
> ipi_call_lock_irq()/ipi_call_unlock_irq()
>
> 1) call_function.lock, as used in smp_call_function_many(), only protects
> call_function.queue and &data->refs; cpu_online_mask is accessed outside
> the lock. And it's not necessary to protect cpu_online_mask, because
> data->cpumask is pre-calculated, and even if a cpu is brought up while
> arch_send_call_function_ipi_mask() is being called, that is harmless: the
> validation test in generic_smp_call_function_interrupt() will take care
> of it.
>
> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
> smp_call_function() is in flight.
>
> Also delay enabling irqs until after set_cpu_online().
>
> Signed-off-by: Yong Zhang <[email protected]>
> Cc: "David S. Miller" <[email protected]>
> Cc: [email protected]
> ---
Looks good.
Acked-by: Srivatsa S. Bhat <[email protected]>
> arch/sparc/kernel/smp_64.c | 7 +------
> 1 files changed, 1 insertions(+), 6 deletions(-)
>
> diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
> index f591598..781bcb1 100644
> --- a/arch/sparc/kernel/smp_64.c
> +++ b/arch/sparc/kernel/smp_64.c
> @@ -103,8 +103,6 @@ void __cpuinit smp_callin(void)
> if (cheetah_pcache_forced_on)
> cheetah_enable_pcache();
>
> - local_irq_enable();
> -
> callin_flag = 1;
> __asm__ __volatile__("membar #Sync\n\t"
> "flush %%g6" : : : "memory");
> @@ -124,9 +122,8 @@ void __cpuinit smp_callin(void)
> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
> rmb();
>
> - ipi_call_lock_irq();
> set_cpu_online(cpuid, true);
> - ipi_call_unlock_irq();
> + local_irq_enable();
>
> /* idle thread is expected to have preempt disabled */
> preempt_disable();
> @@ -1308,9 +1305,7 @@ int __cpu_disable(void)
> mdelay(1);
> local_irq_disable();
>
> - ipi_call_lock();
> set_cpu_online(cpu, false);
> - ipi_call_unlock();
>
> cpu_map_rebuild();
>
On Tue, 2012-05-29 at 10:28 +0200, Peter Zijlstra wrote:
> On Tue, 2012-05-29 at 15:15 +0800, Yong Zhang wrote:
> > As discussed with Srivatsa [1], it seems there is no need to keep
> > ipi_call_[un]lock_irq() during cpu bring-up/down, because:
> >
> > 1) call_function.lock, as used in smp_call_function_many(), only protects
> > call_function.queue and &data->refs; cpu_online_mask is accessed outside
> > the lock. And it's not necessary to protect cpu_online_mask, because
> > data->cpumask is pre-calculated, and even if a cpu is brought up while
> > arch_send_call_function_ipi_mask() is being called, that is harmless: the
> > validation test in generic_smp_call_function_interrupt() will take care
> > of it.
> >
> > 2) For the cpu-down case, stop_machine() guarantees that no concurrent
> > smp_call_function() is in flight.
>
> But that lock was only taken around setting a cpu online, so the offline
> case is pretty much irrelevant for these patches, right?
Ah, I see, some archs also did it on offline.
> That said, is there an alternative to stop_machine on the down side?
>
> I guess flipping the cpu offline and then doing synchronize_sched()
> should suffice.
On 05/29/2012 01:58 PM, Peter Zijlstra wrote:
> That said, is there an alternative to stop_machine on the down side?
>
> I guess flipping the cpu offline and then doing synchronize_sched()
> should suffice.
>
Yes I think so. Even Paul McKenney had suggested this.
Regards,
Srivatsa S. Bhat
From: Yong Zhang <[email protected]>
Date: Tue, 29 May 2012 16:27:33 +0800
> From: Yong Zhang <[email protected]>
> Date: Tue, 29 May 2012 12:56:08 +0800
> Subject: [UPDATED] [RFC PATCH 8/10] SPARC: smp: remove call to
> ipi_call_lock_irq()/ipi_call_unlock_irq()
>
> 1) call_function.lock, as used in smp_call_function_many(), only protects
> call_function.queue and &data->refs; cpu_online_mask is accessed outside
> the lock. And it's not necessary to protect cpu_online_mask, because
> data->cpumask is pre-calculated, and even if a cpu is brought up while
> arch_send_call_function_ipi_mask() is being called, that is harmless: the
> validation test in generic_smp_call_function_interrupt() will take care
> of it.
>
> 2) For the cpu-down case, stop_machine() guarantees that no concurrent
> smp_call_function() is in flight.
>
> Also delay enabling irqs until after set_cpu_online().
>
> Signed-off-by: Yong Zhang <[email protected]>
Acked-by: David S. Miller <[email protected]>
Commit-ID: 4ee0444bebf3d0399c93841d98826ade193e6330
Gitweb: http://git.kernel.org/tip/4ee0444bebf3d0399c93841d98826ade193e6330
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:15:56 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:11 +0200
hexagon: SMP: Remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively.
This lock protects only the call_function data structure itself and is
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Richard Kuo <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/hexagon/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index f726462..149fbef 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -180,9 +180,7 @@ void __cpuinit start_secondary(void)
notify_cpu_starting(cpu);
- ipi_call_lock();
set_cpu_online(cpu, true);
- ipi_call_unlock();
local_irq_enable();
Commit-ID: 4cbd19fcf9d49f28766415b96eb6201bc263d817
Gitweb: http://git.kernel.org/tip/4cbd19fcf9d49f28766415b96eb6201bc263d817
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:15:57 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:12 +0200
mn10300: SMP: Remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively.
This lock protects only the call_function data structure itself and is
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: David Howells <[email protected]>
Cc: Koichi Yasutake <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/mn10300/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
index 090d35d..e62c223 100644
--- a/arch/mn10300/kernel/smp.c
+++ b/arch/mn10300/kernel/smp.c
@@ -876,9 +876,7 @@ static void __init smp_online(void)
notify_cpu_starting(cpu);
- ipi_call_lock();
set_cpu_online(cpu, true);
- ipi_call_unlock();
local_irq_enable();
}
Commit-ID: ac4216d6922f90c459d531e051c832dadde3a8f7
Gitweb: http://git.kernel.org/tip/ac4216d6922f90c459d531e051c832dadde3a8f7
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:15:58 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:12 +0200
parisc: Smp: remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: James E.J. Bottomley <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/parisc/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
index a47828d..6266730 100644
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -300,9 +300,7 @@ smp_cpu_init(int cpunum)
notify_cpu_starting(cpunum);
- ipi_call_lock();
set_cpu_online(cpunum, true);
- ipi_call_unlock();
/* Initialise the idle task for this CPU */
atomic_inc(&init_mm.mm_count);
Commit-ID: 8efdfc3a4ed009c978dab6609d15fb958e7cff12
Gitweb: http://git.kernel.org/tip/8efdfc3a4ed009c978dab6609d15fb958e7cff12
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:16:00 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:12 +0200
tile: SMP: Remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Chris Metcalf <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/tile/kernel/smpboot.c | 10 ----------
1 files changed, 0 insertions(+), 10 deletions(-)
diff --git a/arch/tile/kernel/smpboot.c b/arch/tile/kernel/smpboot.c
index 84873fb..e686c5a 100644
--- a/arch/tile/kernel/smpboot.c
+++ b/arch/tile/kernel/smpboot.c
@@ -198,17 +198,7 @@ void __cpuinit online_secondary(void)
notify_cpu_starting(smp_processor_id());
- /*
- * We need to hold call_lock, so there is no inconsistency
- * between the time smp_call_function() determines number of
- * IPI recipients, and the time when the determination is made
- * for which cpus receive the IPI. Holding this
- * lock helps us to not include this cpu in a currently in progress
- * smp_call_function().
- */
- ipi_call_lock();
set_cpu_online(smp_processor_id(), 1);
- ipi_call_unlock();
__get_cpu_var(cpu_state) = CPU_ONLINE;
/* Set up tile-specific state for this cpu. */
Commit-ID: 459165e25030c0023cb54f73c14261a3d2f4a244
Gitweb: http://git.kernel.org/tip/459165e25030c0023cb54f73c14261a3d2f4a244
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:16:02 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:13 +0200
ia64: SMP: Remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/ia64/kernel/smpboot.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 1113b8a..963d2db 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -382,7 +382,6 @@ smp_callin (void)
set_numa_node(cpu_to_node_map[cpuid]);
set_numa_mem(local_memory_node(cpu_to_node_map[cpuid]));
- ipi_call_lock_irq();
spin_lock(&vector_lock);
/* Setup the per cpu irq handling data structures */
__setup_vector_irq(cpuid);
@@ -390,7 +389,6 @@ smp_callin (void)
set_cpu_online(cpuid, true);
per_cpu(cpu_state, cpuid) = CPU_ONLINE;
spin_unlock(&vector_lock);
- ipi_call_unlock_irq();
smp_setup_percpu_timer();
Commit-ID: bc6833009583bd5b096ef7aa2bb006854a5a2dce
Gitweb: http://git.kernel.org/tip/bc6833009583bd5b096ef7aa2bb006854a5a2dce
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 16:27:33 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:13 +0200
SPARC: SMP: Remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
Delay irq enabling until after set_cpu_online().
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/20120529082732.GA4250@zhy
Acked-by: "David S. Miller" <[email protected]>
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/sparc/kernel/smp_64.c | 7 +------
1 files changed, 1 insertions(+), 6 deletions(-)
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index f591598..781bcb1 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -103,8 +103,6 @@ void __cpuinit smp_callin(void)
if (cheetah_pcache_forced_on)
cheetah_enable_pcache();
- local_irq_enable();
-
callin_flag = 1;
__asm__ __volatile__("membar #Sync\n\t"
"flush %%g6" : : : "memory");
@@ -124,9 +122,8 @@ void __cpuinit smp_callin(void)
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
rmb();
- ipi_call_lock_irq();
set_cpu_online(cpuid, true);
- ipi_call_unlock_irq();
+ local_irq_enable();
/* idle thread is expected to have preempt disabled */
preempt_disable();
@@ -1308,9 +1305,7 @@ int __cpu_disable(void)
mdelay(1);
local_irq_disable();
- ipi_call_lock();
set_cpu_online(cpu, false);
- ipi_call_unlock();
cpu_map_rebuild();
Commit-ID: 3b6f70fd7dd4e19fc674ec99e389bf0da5589525
Gitweb: http://git.kernel.org/tip/3b6f70fd7dd4e19fc674ec99e389bf0da5589525
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:16:01 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:12 +0200
x86: SMP: Remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/x86/kernel/smpboot.c | 9 ---------
arch/x86/xen/smp.c | 2 --
2 files changed, 0 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f56f96d..b2fd28f 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -255,22 +255,13 @@ notrace static void __cpuinit start_secondary(void *unused)
check_tsc_sync_target();
/*
- * We need to hold call_lock, so there is no inconsistency
- * between the time smp_call_function() determines number of
- * IPI recipients, and the time when the determination is made
- * for which cpus receive the IPI. Holding this
- * lock helps us to not include this cpu in a currently in progress
- * smp_call_function().
- *
* We need to hold vector_lock so there the set of online cpus
* does not change while we are assigning vectors to cpus. Holding
* this lock ensures we don't half assign or remove an irq from a cpu.
*/
- ipi_call_lock();
lock_vector_lock();
set_cpu_online(smp_processor_id(), true);
unlock_vector_lock();
- ipi_call_unlock();
per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
x86_platform.nmi_init();
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index afb250d..f58dca7 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -80,9 +80,7 @@ static void __cpuinit cpu_bringup(void)
notify_cpu_starting(cpu);
- ipi_call_lock();
set_cpu_online(cpu, true);
- ipi_call_unlock();
this_cpu_write(cpu_state, CPU_ONLINE);
Commit-ID: 46ce7fbfdbe0be6aaf0e816331aac08a74a00e5a
Gitweb: http://git.kernel.org/tip/46ce7fbfdbe0be6aaf0e816331aac08a74a00e5a
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:15:59 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:12 +0200
S390: Smp: remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/s390/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 15cca26..8dca9c2 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -717,9 +717,7 @@ static void __cpuinit smp_start_secondary(void *cpuvoid)
init_cpu_vtimer();
pfault_init();
notify_cpu_starting(smp_processor_id());
- ipi_call_lock();
set_cpu_online(smp_processor_id(), true);
- ipi_call_unlock();
local_irq_enable();
/* cpu_idle will call schedule for us */
cpu_idle();
Commit-ID: 8feb8e896d77439146d2e2ab3d0ab55bb5baf5fc
Gitweb: http://git.kernel.org/tip/8feb8e896d77439146d2e2ab3d0ab55bb5baf5fc
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:16:05 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:14 +0200
smp: Remove ipi_call_lock[_irq]()/ipi_call_unlock[_irq]()
There are no users of those APIs anymore, so remove them.
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
include/linux/smp.h | 4 ----
kernel/smp.c | 20 --------------------
2 files changed, 0 insertions(+), 24 deletions(-)
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 717fb74..a34d4f1 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -90,10 +90,6 @@ void kick_all_cpus_sync(void);
void __init call_function_init(void);
void generic_smp_call_function_single_interrupt(void);
void generic_smp_call_function_interrupt(void);
-void ipi_call_lock(void);
-void ipi_call_unlock(void);
-void ipi_call_lock_irq(void);
-void ipi_call_unlock_irq(void);
#else
static inline void call_function_init(void) { }
#endif
diff --git a/kernel/smp.c b/kernel/smp.c
index d0ae5b2..29dd40a 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -581,26 +581,6 @@ int smp_call_function(smp_call_func_t func, void *info, int wait)
return 0;
}
EXPORT_SYMBOL(smp_call_function);
-
-void ipi_call_lock(void)
-{
- raw_spin_lock(&call_function.lock);
-}
-
-void ipi_call_unlock(void)
-{
- raw_spin_unlock(&call_function.lock);
-}
-
-void ipi_call_lock_irq(void)
-{
- raw_spin_lock_irq(&call_function.lock);
-}
-
-void ipi_call_unlock_irq(void)
-{
- raw_spin_unlock_irq(&call_function.lock);
-}
#endif /* USE_GENERIC_SMP_HELPERS */
/* Setup configured maximum number of CPUs to activate */
Commit-ID: 72ec61a9e8347375e37c05e23511173d8451c3d4
Gitweb: http://git.kernel.org/tip/72ec61a9e8347375e37c05e23511173d8451c3d4
Author: Yong Zhang <[email protected]>
AuthorDate: Tue, 29 May 2012 15:16:04 +0800
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 5 Jun 2012 17:27:13 +0200
POWERPC: Smp: remove call to ipi_call_lock()/ipi_call_unlock()
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
Signed-off-by: Yong Zhang <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Srivatsa S. Bhat <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
---
arch/powerpc/kernel/smp.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index e4cb343..e1417c4 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -571,7 +571,6 @@ void __devinit start_secondary(void *unused)
if (system_state == SYSTEM_RUNNING)
vdso_data->processorCount++;
#endif
- ipi_call_lock();
notify_cpu_starting(cpu);
set_cpu_online(cpu, true);
/* Update sibling maps */
@@ -601,7 +600,6 @@ void __devinit start_secondary(void *unused)
of_node_put(np);
}
of_node_put(l2_cache);
- ipi_call_unlock();
local_irq_enable();
On Tue, May 29, 2012 at 03:15:55PM +0800, Yong Zhang wrote:
> As discussed with Srivatsa [1], it seems there is no need to keep
> ipi_call_[un]lock_irq() when cpu bring-up/down. Because:
>
> 1) call_function.lock used in smp_call_function_many() is just to protect
> call_function.queue and &data->refs, cpu_online_mask is outside of the
> lock. And it's not necessary to protect cpu_online_mask,
> because data->cpumask is pre-calculated and even if a cpu is brought up
> when calling arch_send_call_function_ipi_mask(), it's harmless because
> validation test in generic_smp_call_function_interrupt() will take care
> of it.
>
> 2) For cpu down issue, stop_machine() will guarantee that no concurrent
> smp_call_function() is in progress.
>
> In short, keeping ipi_call_[un]lock() is most likely a leftover from
> before the generic smp helpers were introduced. This patchset just cleans
> things up.
This series gets rid of the call_function lockdep splat I was seeing,
very good!
Tested-by: Paul E. McKenney <[email protected]>
> [1] http://marc.info/?l=linux-kernel&m=133766786814484&w=2
>
> Yong Zhang (10):
> hexagon: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> mn10300: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> parisc: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> S390: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> tile: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> x86: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> ia64: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
> SPARC: smp: remove call to ipi_call_lock_irq()/ipi_call_unlock_irq()
> POWERPC: smp: remove call to ipi_call_lock()/ipi_call_unlock()
> smp: remove ipi_call_lock[_irq]()/ipi_call_unlock[_irq]()
>
> arch/hexagon/kernel/smp.c | 2 --
> arch/ia64/kernel/smpboot.c | 2 --
> arch/mn10300/kernel/smp.c | 2 --
> arch/parisc/kernel/smp.c | 2 --
> arch/powerpc/kernel/smp.c | 2 --
> arch/s390/kernel/smp.c | 2 --
> arch/sparc/kernel/smp_64.c | 6 ++----
> arch/tile/kernel/smpboot.c | 10 ----------
> arch/x86/kernel/smpboot.c | 9 ---------
> arch/x86/xen/smp.c | 2 --
> include/linux/smp.h | 4 ----
> kernel/smp.c | 20 --------------------
> 12 files changed, 2 insertions(+), 61 deletions(-)
>
> --
> 1.7.5.4
>
On Tue, May 29, 2012 at 03:16:04PM +0800, Yong Zhang wrote:
> From: Yong Zhang <[email protected]>
>
> 1) call_function.lock used in smp_call_function_many() is just to protect
> call_function.queue and &data->refs, cpu_online_mask is outside of the
> lock. And it's not necessary to protect cpu_online_mask,
> because data->cpumask is pre-calculated and even if a cpu is brought up
> when calling arch_send_call_function_ipi_mask(), it's harmless because
> validation test in generic_smp_call_function_interrupt() will take care
> of it.
>
> 2) For cpu down issue, stop_machine() will guarantee that no concurrent
> smp_call_function() is in progress.
However, there is an effort to get rid of stop_machine() from the
CPU-down path... So something else will be needed.
Thanx, Paul
> Signed-off-by: Yong Zhang <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: [email protected]
> ---
> arch/powerpc/kernel/smp.c | 2 --
> 1 files changed, 0 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> index e4cb343..e1417c4 100644
> --- a/arch/powerpc/kernel/smp.c
> +++ b/arch/powerpc/kernel/smp.c
> @@ -571,7 +571,6 @@ void __devinit start_secondary(void *unused)
> if (system_state == SYSTEM_RUNNING)
> vdso_data->processorCount++;
> #endif
> - ipi_call_lock();
> notify_cpu_starting(cpu);
> set_cpu_online(cpu, true);
> /* Update sibling maps */
> @@ -601,7 +600,6 @@ void __devinit start_secondary(void *unused)
> of_node_put(np);
> }
> of_node_put(l2_cache);
> - ipi_call_unlock();
>
> local_irq_enable();
>
> --
> 1.7.5.4
>
On Sat, 2012-06-16 at 09:32 -0700, Paul E. McKenney wrote:
> However, there is an effort to get rid of stop_machine() from the
> CPU-down path... So something else will be needed.
Elsewhere in this thread I mentioned we could do a synchronize_sched().
I think that covers most of what stop-machine is doing these days.
On Sat, Jun 16, 2012 at 07:30:58PM +0200, Peter Zijlstra wrote:
> On Sat, 2012-06-16 at 09:32 -0700, Paul E. McKenney wrote:
> > However, there is an effort to get rid of stop_machine() from the
> > CPU-down path... So something else will be needed.
>
> Elsewhere in this thread I mentioned we could do a synchronize_sched().
> I think that covers most of what stop-machine is doing these days.
Ah, apologies for the noise!
Thanx, Paul
On Sat, Jun 16, 2012 at 09:32:19AM -0700, Paul E. McKenney wrote:
> On Tue, May 29, 2012 at 03:16:04PM +0800, Yong Zhang wrote:
> > From: Yong Zhang <[email protected]>
> >
> > 1) call_function.lock used in smp_call_function_many() is just to protect
> > call_function.queue and &data->refs, cpu_online_mask is outside of the
> > lock. And it's not necessary to protect cpu_online_mask,
> > because data->cpumask is pre-calculated and even if a cpu is brought up
> > when calling arch_send_call_function_ipi_mask(), it's harmless because
> > validation test in generic_smp_call_function_interrupt() will take care
> > of it.
> >
> > 2) For cpu down issue, stop_machine() will guarantee that no concurrent
> > smp_call_function() is in progress.
>
> However, there is an effort to get rid of stop_machine() from the
> CPU-down path... So something else will be needed.
Yeah. So Thomas changed the commit log like below:
[
ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
protects only the call_function data structure itself, but it's
completely unrelated to cpu_online_mask. The mask to which the IPIs
are sent is calculated before call_function.lock is taken in
smp_call_function_many(), so the locking around set_cpu_online() is
pointless and can be removed.
[ tglx: Massaged changelog ]
]
in tip/smp/hotplug.
Thanks,
Yong
>
> Thanx, Paul
>
> > Signed-off-by: Yong Zhang <[email protected]>
> > Cc: Benjamin Herrenschmidt <[email protected]>
> > Cc: Paul Mackerras <[email protected]>
> > Cc: [email protected]
> > ---
> > arch/powerpc/kernel/smp.c | 2 --
> > 1 files changed, 0 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> > index e4cb343..e1417c4 100644
> > --- a/arch/powerpc/kernel/smp.c
> > +++ b/arch/powerpc/kernel/smp.c
> > @@ -571,7 +571,6 @@ void __devinit start_secondary(void *unused)
> > if (system_state == SYSTEM_RUNNING)
> > vdso_data->processorCount++;
> > #endif
> > - ipi_call_lock();
> > notify_cpu_starting(cpu);
> > set_cpu_online(cpu, true);
> > /* Update sibling maps */
> > @@ -601,7 +600,6 @@ void __devinit start_secondary(void *unused)
> > of_node_put(np);
> > }
> > of_node_put(l2_cache);
> > - ipi_call_unlock();
> >
> > local_irq_enable();
> >
> > --
> > 1.7.5.4
> >
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
--
Only stand for myself
On Mon, Jun 18, 2012 at 10:51:59AM +0800, Yong Zhang wrote:
> On Sat, Jun 16, 2012 at 09:32:19AM -0700, Paul E. McKenney wrote:
> > On Tue, May 29, 2012 at 03:16:04PM +0800, Yong Zhang wrote:
> > > From: Yong Zhang <[email protected]>
> > >
> > > 1) call_function.lock used in smp_call_function_many() is just to protect
> > > call_function.queue and &data->refs, cpu_online_mask is outside of the
> > > lock. And it's not necessary to protect cpu_online_mask,
> > > because data->cpumask is pre-calculated and even if a cpu is brought up
> > > when calling arch_send_call_function_ipi_mask(), it's harmless because
> > > validation test in generic_smp_call_function_interrupt() will take care
> > > of it.
> > >
> > > 2) For cpu down issue, stop_machine() will guarantee that no concurrent
> > > smp_call_function() is in progress.
> >
> > However, there is an effort to get rid of stop_machine() from the
> > CPU-down path... So something else will be needed.
>
> Yeah. So Thomas changed the commit log like below:
> [
> ipi_call_lock/unlock() lock and unlock call_function.lock, respectively. This lock
> protects only the call_function data structure itself, but it's
> completely unrelated to cpu_online_mask. The mask to which the IPIs
> are sent is calculated before call_function.lock is taken in
> smp_call_function_many(), so the locking around set_cpu_online() is
> pointless and can be removed.
>
> [ tglx: Massaged changelog ]
> ]
>
> in tip/smp/hotplug.
Got it, thank you!
Thanx, Paul
> Thanks,
> Yong
>
> >
> > Thanx, Paul
> >
> > > Signed-off-by: Yong Zhang <[email protected]>
> > > Cc: Benjamin Herrenschmidt <[email protected]>
> > > Cc: Paul Mackerras <[email protected]>
> > > Cc: [email protected]
> > > ---
> > > arch/powerpc/kernel/smp.c | 2 --
> > > 1 files changed, 0 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> > > index e4cb343..e1417c4 100644
> > > --- a/arch/powerpc/kernel/smp.c
> > > +++ b/arch/powerpc/kernel/smp.c
> > > @@ -571,7 +571,6 @@ void __devinit start_secondary(void *unused)
> > > if (system_state == SYSTEM_RUNNING)
> > > vdso_data->processorCount++;
> > > #endif
> > > - ipi_call_lock();
> > > notify_cpu_starting(cpu);
> > > set_cpu_online(cpu, true);
> > > /* Update sibling maps */
> > > @@ -601,7 +600,6 @@ void __devinit start_secondary(void *unused)
> > > of_node_put(np);
> > > }
> > > of_node_put(l2_cache);
> > > - ipi_call_unlock();
> > >
> > > local_irq_enable();
> > >
> > > --
> > > 1.7.5.4
> > >
> >
>
> --
> Only stand for myself
>