2012-06-01 09:10:55

by Srivatsa S. Bhat

Subject: [PATCH 00/27] Generic framework for SMP booting/CPU hotplug related code

This patchset is an effort to reduce the SMP booting/CPU hotplug related code
duplication across architectures by pulling the common code out into core code.
Consolidating the code in one place makes it more maintainable and less
error-prone.

There is still plenty of opportunity to pull more code out of the arch/
directories than this patchset does, but it lays the ground-work on top of
which we can make more and more code generic.

This applies on top of tip tree's smp/hotplug branch + Yong's ipi_call_lock
cleanup patches[1].

The descriptions of some of the arch/ patches contain a section called
"Notes" which documents the non-trivial changes implemented in that
patch. This section is meant to draw maintainer attention to those pieces
of code, so that if something is wrong for that arch, they can point it out
and I'll work on fixing it.


Acknowledgements:
----------------

Thanks a lot to Paul McKenney for his guidance and for consolidating a
comprehensive list of what needs to be done to make CPU Hotplug better [2]
and [3]. Many thanks to Peter Zijlstra for complaining about the code
duplication in various architectures and pointing out what is missing and
what needs to be worked on.

Also, many thanks to Thomas Gleixner for his initiative to make smp booting
generic and sane, and for his whole bunch of cleanups in that direction.
I hope this patchset can contribute to that effort :-)

References:
[1]. http://marc.info/?l=linux-kernel&m=133827595625509&w=2
[2]. https://lkml.org/lkml/2012/4/9/198
[3]. https://lkml.org/lkml/2012/4/5/379
--
Nikunj A. Dadhania (9):
um, smpboot: Use generic SMP booting infrastructure
sparc64, smpboot: Use generic SMP booting infrastructure
blackfin, smpboot: Use generic SMP booting infrastructure
powerpc, smpboot: Use generic SMP booting infrastructure
mn10300, smpboot: Use generic SMP booting infrastructure
ia64, smpboot: Use generic SMP booting infrastructure
hexagon, smpboot: Use generic SMP booting infrastructure
x86, smpboot: Use generic SMP booting infrastructure
smpboot: Provide a generic method to boot secondary processors

Srivatsa S. Bhat (18):
alpha, smpboot: Use generic SMP booting infrastructure
arm, smpboot: Use generic SMP booting infrastructure
s390, smpboot: Use generic SMP booting infrastructure
parisc, smpboot: Use generic SMP booting infrastructure
cris, smpboot: Use generic SMP booting infrastructure
sparc32, smpboot: Use generic SMP booting infrastructure
mn10300: Fix horrible logic in smp_prepare_cpus()
ia64: Move holding of vector_lock to __setup_vector_irq()
tile, smpboot: Use generic SMP booting infrastructure
sh, smpboot: Use generic SMP booting infrastructure
mips, smpboot: Use generic SMP booting infrastructure
m32r, smpboot: Use generic SMP booting infrastructure
m32r: Fix horrible logic in smp_prepare_cpus()
xen, smpboot: Use generic SMP booting infrastructure
xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()
smpboot, x86, xen: Determine smp booting implementations at run-time
smpboot: Define and use cpu_state per-cpu variable in generic code
smpboot: Add provisions for arch-specific locking around cpu_online_mask


arch/alpha/kernel/smp.c | 27 ++++++-----
arch/arm/kernel/smp.c | 26 +++++------
arch/blackfin/mach-bf561/smp.c | 1
arch/blackfin/mach-common/smp.c | 16 ++++---
arch/cris/arch-v32/kernel/smp.c | 14 ++----
arch/hexagon/kernel/smp.c | 18 +++----
arch/ia64/include/asm/cpu.h | 2 -
arch/ia64/kernel/irq_ia64.c | 21 ++++++++-
arch/ia64/kernel/process.c | 1
arch/ia64/kernel/smpboot.c | 46 +++++++++----------
arch/m32r/kernel/smpboot.c | 50 ++++++++++----------
arch/mips/cavium-octeon/smp.c | 4 --
arch/mips/kernel/smp-cmp.c | 8 ++-
arch/mips/kernel/smp-mt.c | 2 -
arch/mips/kernel/smp.c | 23 ++++++---
arch/mips/kernel/sync-r4k.c | 3 +
arch/mn10300/kernel/smp.c | 36 +++++----------
arch/parisc/kernel/smp.c | 30 ++++++------
arch/powerpc/kernel/smp.c | 32 +++++++------
arch/s390/kernel/smp.c | 12 ++---
arch/sh/include/asm/smp.h | 2 -
arch/sh/kernel/smp.c | 23 ++++-----
arch/sparc/kernel/hvtramp.S | 1
arch/sparc/kernel/leon_smp.c | 25 ++++++----
arch/sparc/kernel/smp_64.c | 18 ++++---
arch/sparc/kernel/sun4d_smp.c | 26 +++++------
arch/sparc/kernel/sun4m_smp.c | 26 ++++++-----
arch/sparc/kernel/trampoline_32.S | 1
arch/sparc/kernel/trampoline_64.S | 1
arch/tile/kernel/smpboot.c | 26 ++++-------
arch/um/kernel/smp.c | 12 +++--
arch/x86/include/asm/cpu.h | 2 -
arch/x86/include/asm/smp.h | 25 ++++++++++
arch/x86/kernel/apic/io_apic.c | 15 ++++++
arch/x86/kernel/smp.c | 4 ++
arch/x86/kernel/smpboot.c | 43 ++++++-----------
arch/x86/xen/smp.c | 30 ++----------
include/linux/smpboot.h | 11 ++++
kernel/smpboot.c | 91 +++++++++++++++++++++++++++++++++++++
kernel/smpboot.h | 4 +-
40 files changed, 437 insertions(+), 321 deletions(-)
create mode 100644 include/linux/smpboot.h



Thanks,
Srivatsa S. Bhat
IBM Linux Technology Center


2012-06-01 09:12:10

by Srivatsa S. Bhat

Subject: [PATCH 04/27] smpboot, x86, xen: Determine smp booting implementations at run-time

x86 and xen use the smp_ops structure to select their respective
implementations of common functions at run-time, by registering the
appropriate function pointers at early boot. Hook the generic SMP booting
implementation into this mechanism as well.

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Andrew Morton <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/x86/include/asm/smp.h | 22 ++++++++++++++++++++++
1 files changed, 22 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index f483945..ac1f3eb 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -72,6 +72,10 @@ struct smp_ops {
void (*stop_other_cpus)(int wait);
void (*smp_send_reschedule)(int cpu);

+ void (*cpu_pre_starting)(void *arg);
+ void (*cpu_pre_online)(void *arg);
+ void (*cpu_post_online)(void *arg);
+
int (*cpu_up)(unsigned cpu, struct task_struct *tidle);
int (*cpu_disable)(void);
void (*cpu_die)(unsigned int cpu);
@@ -115,6 +119,24 @@ static inline void smp_cpus_done(unsigned int max_cpus)
smp_ops.smp_cpus_done(max_cpus);
}

+static inline void __cpu_pre_starting(void *arg)
+{
+ smp_ops.cpu_pre_starting(arg);
+}
+#define __cpu_pre_starting __cpu_pre_starting
+
+static inline void __cpu_pre_online(void *arg)
+{
+ smp_ops.cpu_pre_online(arg);
+}
+#define __cpu_pre_online __cpu_pre_online
+
+static inline void __cpu_post_online(void *arg)
+{
+ smp_ops.cpu_post_online(arg);
+}
+#define __cpu_post_online __cpu_post_online
+
static inline int __cpu_up(unsigned int cpu, struct task_struct *tidle)
{
return smp_ops.cpu_up(cpu, tidle);

2012-06-01 09:12:51

by Srivatsa S. Bhat

Subject: [PATCH 07/27] x86, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert x86 to use the generic framework to boot secondary CPUs.

Notes:
1. x86 manipulates the cpu_online_mask under vector_lock. So, while
converting over to the generic smp booting code, override arch_vector_lock()
and arch_vector_unlock() to lock_vector_lock() and unlock_vector_lock()
respectively.

2. In smp_callin(), we allow the master to continue as soon as the physical
booting of the secondary processor is done. That is, we don't wait till the
CPU_STARTING notifications are sent.

Implications:
- This does not alter the order in which the notifications are sent (i.e.,
still CPU_STARTING is followed by CPU_ONLINE) because the master waits till
the new cpu is set in the cpu_online_mask before returning to generic code.

- This approach is better for two reasons:
a. It makes more sense: the master has a timeout for waiting on the
cpu_callin_map - which means we should report back as soon as possible.
The whole idea of having a timeout is to estimate the maximum time that
physical booting could take. This approach separates physical booting
from running the CPU hotplug callbacks, and reports back to the master
as soon as physical booting is done.

b. Because we send out the CPU_STARTING notifications *after* reporting to
the master, we don't risk the master wrongly concluding a boot failure
if more callbacks are later added to the CPU_STARTING notification.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Suresh Siddha <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Naga Chumbalkar <[email protected]>
Cc: Don Zickus <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/x86/include/asm/smp.h | 3 +++
arch/x86/kernel/apic/io_apic.c | 15 +++++++++++++++
arch/x86/kernel/smp.c | 4 ++++
arch/x86/kernel/smpboot.c | 39 ++++++++++++++-------------------------
4 files changed, 36 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index ac1f3eb..b081b90 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -176,6 +176,9 @@ void cpu_disable_common(void);
void native_smp_prepare_boot_cpu(void);
void native_smp_prepare_cpus(unsigned int max_cpus);
void native_smp_cpus_done(unsigned int max_cpus);
+void native_cpu_pre_starting(void *arg);
+void native_cpu_pre_online(void *arg);
+void native_cpu_post_online(void *arg);
int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
int native_cpu_disable(void);
void native_cpu_die(unsigned int cpu);
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index ac96561..a7d0037 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1084,6 +1084,21 @@ int IO_APIC_get_PCI_irq_vector(int bus, int slot, int pin,
}
EXPORT_SYMBOL(IO_APIC_get_PCI_irq_vector);

+/*
+ * We need to hold vector_lock while manipulating cpu_online_mask so that the
+ * set of online cpus does not change while we are assigning vectors to cpus.
+ * Holding this lock ensures we don't half assign or remove an irq from a cpu.
+ */
+void arch_vector_lock(void)
+{
+ lock_vector_lock();
+}
+
+void arch_vector_unlock(void)
+{
+ unlock_vector_lock();
+}
+
void lock_vector_lock(void)
{
/* Used to the online set of cpus does not change
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 48d2b7d..4a9748e 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -293,6 +293,10 @@ struct smp_ops smp_ops = {
.stop_other_cpus = native_stop_other_cpus,
.smp_send_reschedule = native_smp_send_reschedule,

+ .cpu_pre_starting = native_cpu_pre_starting,
+ .cpu_pre_online = native_cpu_pre_online,
+ .cpu_post_online = native_cpu_post_online,
+
.cpu_up = native_cpu_up,
.cpu_die = native_cpu_die,
.cpu_disable = native_cpu_disable,
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 269bc1f..202be43 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -189,7 +189,7 @@ static void __cpuinit smp_callin(void)
/*
* Need to setup vector mappings before we enable interrupts.
*/
- setup_vector_irq(smp_processor_id());
+ setup_vector_irq(cpuid);

/*
* Save our processor parameters. Note: this information
@@ -211,14 +211,10 @@ static void __cpuinit smp_callin(void)
* This must be done before setting cpu_online_mask
* or calling notify_cpu_starting.
*/
- set_cpu_sibling_map(raw_smp_processor_id());
+ set_cpu_sibling_map(cpuid);
wmb();

- notify_cpu_starting(cpuid);
-
- /*
- * Allow the master to continue.
- */
+ /* Allow the master to continue. */
cpumask_set_cpu(cpuid, cpu_callin_mask);
}

@@ -227,6 +223,11 @@ static void __cpuinit smp_callin(void)
*/
notrace static void __cpuinit start_secondary(void *unused)
{
+ smpboot_start_secondary(unused);
+}
+
+void __cpuinit native_cpu_pre_starting(void *unused)
+{
/*
* Don't put *anything* before cpu_init(), SMP booting is too
* fragile that we want to limit the things done here to the
@@ -234,43 +235,31 @@ notrace static void __cpuinit start_secondary(void *unused)
*/
cpu_init();
x86_cpuinit.early_percpu_clock_init();
- preempt_disable();
smp_callin();
+}

+void __cpuinit native_cpu_pre_online(void *unused)
+{
#ifdef CONFIG_X86_32
/* switch away from the initial page table */
load_cr3(swapper_pg_dir);
__flush_tlb_all();
#endif

- /* otherwise gcc will move up smp_processor_id before the cpu_init */
- barrier();
/*
* Check TSC synchronization with the BP:
*/
check_tsc_sync_target();
+}

- /*
- * We need to hold vector_lock so there the set of online cpus
- * does not change while we are assigning vectors to cpus. Holding
- * this lock ensures we don't half assign or remove an irq from a cpu.
- */
- lock_vector_lock();
- set_cpu_online(smp_processor_id(), true);
- unlock_vector_lock();
- per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
+void __cpuinit native_cpu_post_online(void *unused)
+{
x86_platform.nmi_init();

- /* enable local interrupts */
- local_irq_enable();
-
/* to prevent fake stack check failure in clock setup */
boot_init_stack_canary();

x86_cpuinit.setup_percpu_clockev();
-
- wmb();
- cpu_idle();
}

/*

2012-06-01 09:12:36

by Srivatsa S. Bhat

Subject: [PATCH 06/27] xen, smpboot: Use generic SMP booting infrastructure

Convert xen to use the generic framework to boot secondary CPUs.

Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/x86/xen/smp.c | 21 ++++-----------------
1 files changed, 4 insertions(+), 17 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 602d6b7..46c96f9 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -58,13 +58,12 @@ static irqreturn_t xen_reschedule_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}

-static void __cpuinit cpu_bringup(void)
+void __cpuinit xen_cpu_pre_starting(void *unused)
{
int cpu;

cpu_init();
touch_softlockup_watchdog();
- preempt_disable();

xen_enable_sysenter();
xen_enable_syscall();
@@ -75,25 +74,11 @@ static void __cpuinit cpu_bringup(void)
set_cpu_sibling_map(cpu);

xen_setup_cpu_clockevents();
-
- notify_cpu_starting(cpu);
-
- set_cpu_online(cpu, true);
-
- this_cpu_write(cpu_state, CPU_ONLINE);
-
- wmb();
-
- /* We can take interrupts now: we're officially "up". */
- local_irq_enable();
-
- wmb(); /* make sure everything is out */
}

static void __cpuinit cpu_bringup_and_idle(void)
{
- cpu_bringup();
- cpu_idle();
+ smpboot_start_secondary(NULL);
}

static int xen_smp_intr_init(unsigned int cpu)
@@ -515,6 +500,8 @@ static const struct smp_ops xen_smp_ops __initconst = {
.smp_prepare_cpus = xen_smp_prepare_cpus,
.smp_cpus_done = xen_smp_cpus_done,

+ .cpu_pre_starting = xen_cpu_pre_starting,
+
.cpu_up = xen_cpu_up,
.cpu_die = xen_cpu_die,
.cpu_disable = xen_cpu_disable,

2012-06-01 09:12:24

by Srivatsa S. Bhat

Subject: [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

xen_play_dead() calls cpu_bringup(), which is odd: xen_play_dead() is invoked
on the cpu-down path, whereas cpu_bringup() (as the name suggests) belongs to
the cpu bring-up path.

Getting rid of xen_play_dead()'s dependency on cpu_bringup() makes it easier
to hook into the generic SMP booting framework.

Also remove the extra call to preempt_enable() added by commit 41bd956
(xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while
atomic) because it becomes unnecessary after this change.

Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/x86/xen/smp.c | 8 --------
1 files changed, 0 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 09a7199..602d6b7 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -417,14 +417,6 @@ static void __cpuinit xen_play_dead(void) /* used only with HOTPLUG_CPU */
{
play_dead_common();
HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
- cpu_bringup();
- /*
- * Balance out the preempt calls - as we are running in cpu_idle
- * loop which has been called at bootup from cpu_bringup_and_idle.
- * The cpucpu_bringup_and_idle called cpu_bringup which made a
- * preempt_disable() So this preempt_enable will balance it out.
- */
- preempt_enable();
}

#else /* !CONFIG_HOTPLUG_CPU */

2012-06-01 09:11:11

by Srivatsa S. Bhat

Subject: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

From: Nikunj A. Dadhania <[email protected]>

The smp booting and cpu hotplug related code is heavily duplicated in various
architectures. To solve that problem, provide a generic framework to boot
secondary CPUs which can then be used by the architecture code.

For now, there is no functional change in the smp boot or hotplug sequence.
Just plain consolidation of common code from various architectures.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

include/linux/smpboot.h | 10 ++++++
kernel/smpboot.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++
kernel/smpboot.h | 4 +-
3 files changed, 90 insertions(+), 2 deletions(-)
create mode 100644 include/linux/smpboot.h

diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
new file mode 100644
index 0000000..63bbedd
--- /dev/null
+++ b/include/linux/smpboot.h
@@ -0,0 +1,10 @@
+/*
+ * Generic SMP CPU booting framework
+ */
+
+#ifndef SMPBOOT_H
+#define SMPBOOT_H
+
+extern void smpboot_start_secondary(void *arg);
+
+#endif
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 98f60c5..6c26133 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -6,6 +6,7 @@
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/percpu.h>
+#include <linux/cpu.h>

#include "smpboot.h"

@@ -65,3 +66,80 @@ void __init idle_threads_init(void)
}
}
#endif
+
+
+/* Implement the following functions in your architecture, as appropriate. */
+
+/**
+ * __cpu_pre_starting()
+ *
+ * Implement whatever you need to do before the CPU_STARTING notifiers are
+ * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
+ * coming up. So that cpu better be prepared! IOW, implement all the early
+ * boot/init code for the cpu here. And do NOT enable interrupts.
+ */
+#ifndef __cpu_pre_starting
+void __weak __cpu_pre_starting(void *arg) {}
+#endif
+
+/**
+ * __cpu_pre_online()
+ *
+ * Implement whatever you need to do before the upcoming CPU is set in the
+ * cpu_online_mask. (Setting the cpu in the cpu_online_mask is like an
+ * announcement that the cpu has come up, because it would be publicly
+ * visible now). Again, don't enable interrupts.
+ */
+#ifndef __cpu_pre_online
+void __weak __cpu_pre_online(void *arg) {}
+#endif
+
+/**
+ * __cpu_post_online()
+ *
+ * Implement whatever you need to do after the CPU has been set in the
+ * cpu_online_mask, and before enabling interrupts and calling cpu_idle().
+ * Ideally, it is preferable if you don't have anything to do here.
+ * We want to move to a model where setting cpu_online_mask is pretty
+ * much the final step. Again, don't enable interrupts.
+ */
+#ifndef __cpu_post_online
+void __weak __cpu_post_online(void *arg) {}
+#endif
+
+
+/**
+ * smpboot_start_secondary - Generic way to boot secondary processors
+ */
+void __cpuinit smpboot_start_secondary(void *arg)
+{
+ unsigned int cpu;
+
+ /*
+ * SMP booting is extremely fragile in some architectures. So run
+ * the cpu initialization code first before anything else.
+ */
+ __cpu_pre_starting(arg);
+
+ preempt_disable();
+ cpu = smp_processor_id();
+
+ /* Invoke the CPU_STARTING notifier callbacks */
+ notify_cpu_starting(cpu);
+
+ __cpu_pre_online(arg);
+
+ /* Set the CPU in the cpu_online_mask */
+ set_cpu_online(cpu, true);
+
+ __cpu_post_online(arg);
+
+ /* Enable local interrupts now */
+ local_irq_enable();
+
+ wmb();
+ cpu_idle();
+
+ /* We should never reach here! */
+ BUG();
+}
diff --git a/kernel/smpboot.h b/kernel/smpboot.h
index 80c0acf..9753cc0 100644
--- a/kernel/smpboot.h
+++ b/kernel/smpboot.h
@@ -1,5 +1,5 @@
-#ifndef SMPBOOT_H
-#define SMPBOOT_H
+#ifndef __SMPBOOT_H
+#define __SMPBOOT_H

struct task_struct;

2012-06-01 09:13:29

by Srivatsa S. Bhat

Subject: [PATCH 10/27] mips, smpboot: Use generic SMP booting infrastructure

Convert mips to use the generic framework to boot secondary CPUs.

Notes:
1. The boot processor was setting the secondary cpu in cpu_online_mask!
Instead, leave it up to the secondary cpu (... and it will be done by the
generic code now).

2. Make the boot cpu wait for the secondary cpu to be set in cpu_online_mask
before returning.

3. Don't enable interrupts in cmp_smp_finish() and vsmp_smp_finish().
Do it much later, in generic code.

4. In synchronise_count_master(), use local_save_flags() instead of
local_irq_save() because irqs are already disabled.

Cc: Ralf Baechle <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: David Howells <[email protected]>
Cc: Arun Sharma <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/mips/kernel/smp-cmp.c | 8 ++++----
arch/mips/kernel/smp-mt.c | 2 --
arch/mips/kernel/smp.c | 23 +++++++++++++++--------
arch/mips/kernel/sync-r4k.c | 3 ++-
4 files changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/mips/kernel/smp-cmp.c b/arch/mips/kernel/smp-cmp.c
index e7e03ec..7ecd6db 100644
--- a/arch/mips/kernel/smp-cmp.c
+++ b/arch/mips/kernel/smp-cmp.c
@@ -108,7 +108,9 @@ static void cmp_init_secondary(void)

static void cmp_smp_finish(void)
{
- pr_debug("SMPCMP: CPU%d: %s\n", smp_processor_id(), __func__);
+ unsigned int cpu = smp_processor_id();
+
+ pr_debug("SMPCMP: CPU%d: %s\n", cpu, __func__);

/* CDFIXME: remove this? */
write_c0_compare(read_c0_count() + (8 * mips_hpt_frequency / HZ));
@@ -116,10 +118,8 @@ static void cmp_smp_finish(void)
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
- cpu_set(smp_processor_id(), mt_fpu_cpumask);
+ cpumask_set_cpu(cpu, &mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
-
- local_irq_enable();
}

static void cmp_cpus_done(void)
diff --git a/arch/mips/kernel/smp-mt.c b/arch/mips/kernel/smp-mt.c
index ff17868..25f7b09 100644
--- a/arch/mips/kernel/smp-mt.c
+++ b/arch/mips/kernel/smp-mt.c
@@ -171,8 +171,6 @@ static void __cpuinit vsmp_smp_finish(void)
if (cpu_has_fpu)
cpu_set(smp_processor_id(), mt_fpu_cpumask);
#endif /* CONFIG_MIPS_MT_FPAFF */
-
- local_irq_enable();
}

static void vsmp_cpus_done(void)
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 71a95f5..4453d4d 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -33,6 +33,7 @@
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/ftrace.h>
+#include <linux/smpboot.h>

#include <linux/atomic.h>
#include <asm/cpu.h>
@@ -98,8 +99,11 @@ __cpuinit void register_smp_ops(struct plat_smp_ops *ops)
*/
asmlinkage __cpuinit void start_secondary(void)
{
- unsigned int cpu;
+ smpboot_start_secondary(NULL);
+}

+void __cpuinit __cpu_pre_starting(void *unused)
+{
#ifdef CONFIG_MIPS_MT_SMTC
/* Only do cpu_probe for first TC of CPU */
if ((read_c0_tcbind() & TCBIND_CURTC) == 0)
@@ -116,20 +120,22 @@ asmlinkage __cpuinit void start_secondary(void)
*/

calibrate_delay();
- preempt_disable();
- cpu = smp_processor_id();
- cpu_data[cpu].udelay_val = loops_per_jiffy;
+ cpu_data[smp_processor_id()].udelay_val = loops_per_jiffy;
+}

- notify_cpu_starting(cpu);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpu = smp_processor_id();

mp_ops->smp_finish();
set_cpu_sibling_map(cpu);

cpu_set(cpu, cpu_callin_map);
+}

+void __cpuinit __cpu_post_online(void *unused)
+{
synchronise_count_slave();
-
- cpu_idle();
}

/*
@@ -196,7 +202,8 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
while (!cpu_isset(cpu, cpu_callin_map))
udelay(100);

- set_cpu_online(cpu, true);
+ while (!cpu_online(cpu))
+ udelay(100);

return 0;
}
diff --git a/arch/mips/kernel/sync-r4k.c b/arch/mips/kernel/sync-r4k.c
index 99f913c..7f43069 100644
--- a/arch/mips/kernel/sync-r4k.c
+++ b/arch/mips/kernel/sync-r4k.c
@@ -46,7 +46,8 @@ void __cpuinit synchronise_count_master(void)
printk(KERN_INFO "Synchronize counters across %u CPUs: ",
num_online_cpus());

- local_irq_save(flags);
+ /* IRQs are already disabled. So just save the flags */
+ local_save_flags(flags);

/*
* Notify the slaves that it's time to start

2012-06-01 09:14:36

by Srivatsa S. Bhat

Subject: [PATCH 15/27] ia64, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert ia64 to use the generic framework to boot secondary CPUs.

Notes:
1. ia64 manipulates the cpu_online_mask under vector_lock. So, while
converting over to the generic smp booting code, override arch_vector_lock()
and arch_vector_unlock() appropriately.

2. ia64 needs to enable interrupts before fully completing the booting of the
secondary CPU (as explicitly mentioned in the comment above the call to
ia64_sync_itc()). Luckily this doesn't pose much of a problem, because it
happens in the post-online stage (by which point the CPU_STARTING
notifications have been sent and cpu_online_mask has been set up), and
moreover, we were going to enable interrupts shortly anyway.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: David Howells <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/ia64/kernel/irq_ia64.c | 16 ++++++++++++++++
arch/ia64/kernel/smpboot.c | 40 ++++++++++++++++++++++------------------
2 files changed, 38 insertions(+), 18 deletions(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 6ac99c8..6a724f9 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -196,6 +196,22 @@ static void clear_irq_vector(int irq)
spin_unlock_irqrestore(&vector_lock, flags);
}

+/*
+ * We need to hold vector_lock while manipulating cpu_online_mask so that the
+ * set of online cpus does not change while we are assigning vectors to cpus
+ * (assign_irq_vector()). Holding this lock ensures we don't half assign or
+ * remove an irq from a cpu.
+ */
+void arch_vector_lock(void)
+{
+ spin_lock(&vector_lock);
+}
+
+void arch_vector_unlock(void)
+{
+ spin_unlock(&vector_lock);
+}
+
int
ia64_native_assign_irq_vector (int irq)
{
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 709ce07..0f00dc6 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -351,18 +351,10 @@ smp_setup_percpu_timer (void)
static void __cpuinit
smp_callin (void)
{
- int cpuid, phys_id, itc_master;
- struct cpuinfo_ia64 *last_cpuinfo, *this_cpuinfo;
- extern void ia64_init_itm(void);
- extern volatile int time_keeper_id;
-
-#ifdef CONFIG_PERFMON
- extern void pfm_init_percpu(void);
-#endif
+ unsigned int cpuid, phys_id;

cpuid = smp_processor_id();
phys_id = hard_smp_processor_id();
- itc_master = time_keeper_id;

if (cpu_online(cpuid)) {
printk(KERN_ERR "huh, phys CPU#0x%x, CPU#0x%x already present??\n",
@@ -380,11 +372,21 @@ smp_callin (void)

/* Setup the per cpu irq handling data structures */
__setup_vector_irq(cpuid);
- notify_cpu_starting(cpuid);
- spin_lock(&vector_lock);
- set_cpu_online(cpuid, true);
- per_cpu(cpu_state, cpuid) = CPU_ONLINE;
- spin_unlock(&vector_lock);
+}
+
+void __cpuinit __cpu_post_online(void *unused)
+{
+ unsigned int cpuid, itc_master;
+ struct cpuinfo_ia64 *last_cpuinfo, *this_cpuinfo;
+ extern void ia64_init_itm(void);
+ extern volatile int time_keeper_id;
+
+#ifdef CONFIG_PERFMON
+ extern void pfm_init_percpu(void);
+#endif
+
+ cpuid = smp_processor_id();
+ itc_master = time_keeper_id;

smp_setup_percpu_timer();

@@ -442,6 +444,12 @@ smp_callin (void)
int __cpuinit
start_secondary (void *unused)
{
+ smpboot_start_secondary(unused);
+ return 0;
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
/* Early console may use I/O ports */
ia64_set_kr(IA64_KR_IO_BASE, __pa(ia64_iobase));
#ifndef CONFIG_PRINTK_TIME
@@ -449,11 +457,7 @@ start_secondary (void *unused)
#endif
efi_map_pal_code();
cpu_init();
- preempt_disable();
smp_callin();
-
- cpu_idle();
- return 0;
}

struct pt_regs * __cpuinit idle_regs(struct pt_regs *regs)

2012-06-01 09:15:52

by Srivatsa S. Bhat

Subject: [PATCH 11/27] sh, smpboot: Use generic SMP booting infrastructure

Convert sh to use the generic framework to boot secondary CPUs.

Notes:
Postpone enabling local interrupts until after the bringup of the
secondary CPU is complete.

Cc: Paul Mundt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/sh/kernel/smp.c | 19 +++++++++----------
1 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 8e0fde0..d7f7faf 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -179,6 +179,11 @@ void native_play_dead(void)

asmlinkage void __cpuinit start_secondary(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
unsigned int cpu = smp_processor_id();
struct mm_struct *mm = &init_mm;

@@ -190,23 +195,17 @@ asmlinkage void __cpuinit start_secondary(void)
local_flush_tlb_all();

per_cpu_trap_init();
+}

- preempt_disable();
-
- notify_cpu_starting(cpu);
-
- local_irq_enable();
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpu = smp_processor_id();

/* Enable local timers */
local_timer_setup(cpu);
calibrate_delay();

smp_store_cpu_info(cpu);
-
- set_cpu_online(cpu, true);
- per_cpu(cpu_state, cpu) = CPU_ONLINE;
-
- cpu_idle();
}

extern struct {

2012-06-01 09:13:13

by Srivatsa S. Bhat

Subject: [PATCH 08/27] m32r: Fix horrible logic in smp_prepare_cpus()

In smp_prepare_cpus(), once max_cpus CPUs have been booted, a 'continue'
statement is used instead of breaking out of the loop, which unnecessarily
wastes time by iterating over all NR_CPUS entries!

Many things around this could be pulled into generic code in the future, but
for now, fix this particular piece of code locally (because I am unable to
convince myself to ignore it even temporarily, given that it is such a gem!).

And also rewrite the 'if' statement in a more natural way.

Cc: Hirokazu Takata <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/m32r/kernel/smpboot.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/m32r/kernel/smpboot.c b/arch/m32r/kernel/smpboot.c
index a2cfc0a..6ddc51a 100644
--- a/arch/m32r/kernel/smpboot.c
+++ b/arch/m32r/kernel/smpboot.c
@@ -209,8 +209,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
if (!physid_isset(phys_id, phys_cpu_present_map))
continue;

- if (max_cpus <= cpucount + 1)
- continue;
+ if (cpucount + 1 >= max_cpus)
+ break;

do_boot_cpu(phys_id);

2012-06-01 09:17:07

by Srivatsa S. Bhat

Subject: [PATCH 25/27] s390, smpboot: Use generic SMP booting infrastructure

Convert s390 to use the generic framework to boot secondary CPUs.

Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: [email protected] (supporter:S390)
Cc: Michael Holzheu <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Jan Glauber <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/s390/kernel/smp.c | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 14c8a08..3eb12e6 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -29,6 +29,7 @@
#include <linux/interrupt.h>
#include <linux/irqflags.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>
#include <linux/slab.h>
#include <linux/crash_dump.h>
#include <asm/asm-offsets.h>
@@ -702,6 +703,11 @@ static void __init smp_detect_cpus(void)
*/
static void __cpuinit smp_start_secondary(void *cpuvoid)
{
+ smpboot_start_secondary(cpuvoid);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
S390_lowcore.last_update_clock = get_clock();
S390_lowcore.restart_stack = (unsigned long) restart_stack;
S390_lowcore.restart_fn = (unsigned long) do_restart;
@@ -711,15 +717,9 @@ static void __cpuinit smp_start_secondary(void *cpuvoid)
__ctl_load(S390_lowcore.cregs_save_area, 0, 15);
__load_psw_mask(psw_kernel_bits | PSW_MASK_DAT);
cpu_init();
- preempt_disable();
init_cpu_timer();
init_cpu_vtimer();
pfault_init();
- notify_cpu_starting(smp_processor_id());
- set_cpu_online(smp_processor_id(), true);
- local_irq_enable();
- /* cpu_idle will call schedule for us */
- cpu_idle();
}

/* Upping and downing of CPUs */

2012-06-01 09:17:15

by Srivatsa S. Bhat

Subject: [PATCH 26/27] arm, smpboot: Use generic SMP booting infrastructure

Convert arm to use the generic framework to boot secondary CPUs.

Notes:
The calls to local_irq_enable() and local_fiq_enable() that were originally
in secondary_start_kernel() are retained in __cpu_post_online(), since the
generic code only calls local_irq_enable().
Besides, the generic code was going to enable interrupts immediately after
__cpu_post_online() anyway, so it's not as if we enabled interrupts way
too early.
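For reference, the ordering these notes rely on can be modeled in user space. The sketch below is a rough, illustrative model of the generic bring-up path described in this series (the hook names follow the patchset; the trace bookkeeping is invented here): interrupts are enabled only after __cpu_post_online(), which is why arm's local_irq_enable()/local_fiq_enable() pair is safe there.

```c
static char trace[128];

static void step(const char *s)
{
	strcat(trace, s);
	strcat(trace, ">");
}

/* Stand-ins for the per-arch hooks and generic helpers */
void __cpu_pre_starting(void *arg)  { step("pre_starting"); }
void notify_cpu_starting(int cpu)   { step("notify"); }
void __cpu_pre_online(void *arg)    { step("pre_online"); }
void set_cpu_online(int cpu, int o) { step("set_online"); }
void __cpu_post_online(void *arg)   { step("post_online"); }
void local_irq_enable(void)         { step("irq_on"); }

/*
 * Rough model of smpboot_start_secondary(): note that interrupts are
 * enabled only after __cpu_post_online() has run.
 */
void smpboot_start_secondary_model(void *arg)
{
	__cpu_pre_starting(arg);
	notify_cpu_starting(0);
	__cpu_pre_online(arg);
	set_cpu_online(0, 1);
	__cpu_post_online(arg);
	local_irq_enable();
}

const char *run_model(void)
{
	trace[0] = '\0';
	smpboot_start_secondary_model(0);
	return trace;
}
```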

Cc: Russell King <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: David Brown <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/arm/kernel/smp.c | 26 +++++++++++++-------------
1 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index b735521..460b7e7 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -25,6 +25,7 @@
#include <linux/percpu.h>
#include <linux/clockchips.h>
#include <linux/completion.h>
+#include <linux/smpboot.h>

#include <linux/atomic.h>
#include <asm/cacheflush.h>
@@ -227,6 +228,11 @@ static void percpu_timer_setup(void);
*/
asmlinkage void __cpuinit secondary_start_kernel(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
struct mm_struct *mm = &init_mm;
unsigned int cpu = smp_processor_id();

@@ -244,26 +250,25 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
printk("CPU%u: Booted secondary processor\n", cpu);

cpu_init();
- preempt_disable();
trace_hardirqs_off();

/*
* Give the platform a chance to do its own initialisation.
*/
platform_secondary_init(cpu);
+}

- notify_cpu_starting(cpu);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpu = smp_processor_id();

calibrate_delay();

smp_store_cpu_info(cpu);
+}

- /*
- * OK, now it's safe to let the boot CPU continue. Wait for
- * the CPU migration code to notice that the CPU is online
- * before we continue - which happens after __cpu_up returns.
- */
- set_cpu_online(cpu, true);
+void __cpuinit __cpu_post_online(void *unused)
+{
complete(&cpu_running);

/*
@@ -273,11 +278,6 @@ asmlinkage void __cpuinit secondary_start_kernel(void)

local_irq_enable();
local_fiq_enable();
-
- /*
- * OK, it's off to the idle thread for us
- */
- cpu_idle();
}

void __init smp_cpus_done(unsigned int max_cpus)

2012-06-01 09:17:23

by Srivatsa S. Bhat

Subject: [PATCH 27/27] alpha, smpboot: Use generic SMP booting infrastructure

Convert alpha to use the generic framework to boot secondary CPUs.

Notes:
1. The secondary cpu was being set in the cpu_online_mask way too early.
Postpone it.

2. The original code enabled interrupts before calling
wait_boot_cpu_to_stop(). Instead, postpone enabling interrupts until
after the cpu_online_mask is set.

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/alpha/kernel/smp.c | 27 +++++++++++++++------------
1 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 35ddc02..2547431 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -28,6 +28,7 @@
#include <linux/profile.h>
#include <linux/bitops.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <asm/hwrpb.h>
#include <asm/ptrace.h>
@@ -119,13 +120,17 @@ wait_boot_cpu_to_stop(int cpuid)
void __cpuinit
smp_callin(void)
{
- int cpuid = hard_smp_processor_id();
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

if (cpu_online(cpuid)) {
printk("??, cpu 0x%x already present??\n", cpuid);
BUG();
}
- set_cpu_online(cpuid, true);

/* Turn on machine checks. */
wrmces(7);
@@ -145,15 +150,16 @@ smp_callin(void)
/* All kernel threads share the same mm context. */
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
+}

- /* inform the notifiers about the new cpu */
- notify_cpu_starting(cpuid);
-
- /* Must have completely accurate bogos. */
- local_irq_enable();
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

- /* Wait boot CPU to stop with irq enabled before running
- calibrate_delay. */
+ /*
+ * Wait boot CPU to stop with irq disabled before running
+ * calibrate_delay.
+ */
wait_boot_cpu_to_stop(cpuid);
mb();
calibrate_delay();
@@ -165,9 +171,6 @@ smp_callin(void)

DBGS(("smp_callin: commencing CPU %d current %p active_mm %p\n",
cpuid, current, current->active_mm));
-
- /* Do nothing. */
- cpu_idle();
}

/* Wait until hwrpb->txrdy is clear for cpu. Return -1 on timeout. */

2012-06-01 09:17:54

by Srivatsa S. Bhat

Subject: [PATCH 09/27] m32r, smpboot: Use generic SMP booting infrastructure

Convert m32r to use the generic framework to boot secondary CPUs.

Notes:
Postpone enabling interrupts until the secondary CPU is fully booted.

Cc: Hirokazu Takata <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/m32r/kernel/smpboot.c | 46 ++++++++++++++++++++++----------------------
1 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/m32r/kernel/smpboot.c b/arch/m32r/kernel/smpboot.c
index 6ddc51a..cf3fc0c 100644
--- a/arch/m32r/kernel/smpboot.c
+++ b/arch/m32r/kernel/smpboot.c
@@ -49,6 +49,7 @@
#include <linux/irq.h>
#include <linux/bootmem.h>
#include <linux/delay.h>
+#include <linux/smpboot.h>

#include <asm/io.h>
#include <asm/pgalloc.h>
@@ -114,7 +115,6 @@ static void do_boot_cpu(int);

int start_secondary(void *);
static void smp_callin(void);
-static void smp_online(void);

static void show_mp_info(int);
static void smp_store_cpu_info(int);
@@ -418,22 +418,38 @@ void __init smp_cpus_done(unsigned int max_cpus)
*==========================================================================*/
int __init start_secondary(void *unused)
{
+ smpboot_start_secondary(unused);
+ return 0;
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpu;
+
cpu_init();
- preempt_disable();
smp_callin();
- while (!cpumask_test_cpu(smp_processor_id(), &smp_commenced_mask))
+
+ cpu = smp_processor_id();
+ while (!cpumask_test_cpu(cpu, &smp_commenced_mask))
cpu_relax();
+}

- smp_online();
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ /* Get our bogomips. */
+ calibrate_delay();

+ /* Save our processor parameters */
+ smp_store_cpu_info(smp_processor_id());
+}
+
+void __cpuinit __cpu_post_online(void *unused)
+{
/*
* low-memory mappings have been cleared, flush them from
* the local TLBs too.
*/
local_flush_tlb_all();
-
- cpu_idle();
- return 0;
}

/*==========================================================================*
@@ -485,22 +501,6 @@ static void __init smp_callin(void)
cpumask_set_cpu(cpu_id, &cpu_callin_map);
}

-static void __init smp_online(void)
-{
- int cpu_id = smp_processor_id();
-
- notify_cpu_starting(cpu_id);
-
- local_irq_enable();
-
- /* Get our bogomips. */
- calibrate_delay();
-
- /* Save our processor parameters */
- smp_store_cpu_info(cpu_id);
-
- set_cpu_online(cpu_id, true);
-}

/*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*/
/* Boot up CPUs common Routines */

2012-06-01 09:52:47

by Srivatsa S. Bhat

Subject: [PATCH 18/27] powerpc, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert powerpc to use the generic framework to boot secondary CPUs.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Yong Zhang <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/powerpc/kernel/smp.c | 26 +++++++++++++++-----------
1 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 1928058a..96c3718 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -544,16 +544,18 @@ static struct device_node *cpu_to_l2cache(int cpu)
/* Activate a secondary processor. */
void __devinit start_secondary(void *unused)
{
+ smpboot_start_secondary(unused);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
unsigned int cpu = smp_processor_id();
- struct device_node *l2_cache;
- int i, base;

atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;

smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
- preempt_disable();
cpu_callin_map[cpu] = 1;

if (smp_ops->setup_cpu)
@@ -567,8 +569,16 @@ void __devinit start_secondary(void *unused)
if (system_state == SYSTEM_RUNNING)
vdso_data->processorCount++;
#endif
- notify_cpu_starting(cpu);
- set_cpu_online(cpu, true);
+}
+
+void __cpuinit __cpu_post_online(void *unused)
+{
+ unsigned int cpu;
+ struct device_node *l2_cache;
+ int i, base;
+
+ cpu = smp_processor_id();
+
/* Update sibling maps */
base = cpu_first_thread_sibling(cpu);
for (i = 0; i < threads_per_core; i++) {
@@ -596,12 +606,6 @@ void __devinit start_secondary(void *unused)
of_node_put(np);
}
of_node_put(l2_cache);
-
- local_irq_enable();
-
- cpu_idle();
-
- BUG();
}

int setup_profiling_timer(unsigned int multiplier)

2012-06-01 09:53:03

by Srivatsa S. Bhat

Subject: [PATCH 19/27] blackfin, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert blackfin to use the generic framework to boot secondary CPUs.

Notes:
The call to set_cpu_online() is removed from platform_secondary_init(), since
it will now be done by generic SMP booting code.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Bob Liu <[email protected]>
Cc: Steven Miao <[email protected]>
Cc: Yong Zhang <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/blackfin/mach-bf561/smp.c | 1 -
arch/blackfin/mach-common/smp.c | 16 +++++++++-------
2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/blackfin/mach-bf561/smp.c b/arch/blackfin/mach-bf561/smp.c
index ab1c617..e7e2e50 100644
--- a/arch/blackfin/mach-bf561/smp.c
+++ b/arch/blackfin/mach-bf561/smp.c
@@ -69,7 +69,6 @@ void __cpuinit platform_secondary_init(unsigned int cpu)
SSYNC();

/* We are done with local CPU inits, unblock the boot CPU. */
- set_cpu_online(cpu, true);
spin_lock(&boot_lock);
spin_unlock(&boot_lock);
}
diff --git a/arch/blackfin/mach-common/smp.c b/arch/blackfin/mach-common/smp.c
index 00bbe67..9ee6f2f 100644
--- a/arch/blackfin/mach-common/smp.c
+++ b/arch/blackfin/mach-common/smp.c
@@ -25,6 +25,7 @@
#include <linux/irq.h>
#include <linux/slab.h>
#include <linux/atomic.h>
+#include <linux/smpboot.h>
#include <asm/cacheflush.h>
#include <asm/irq_handler.h>
#include <asm/mmu_context.h>
@@ -373,6 +374,11 @@ static void __cpuinit setup_secondary(unsigned int cpu)

void __cpuinit secondary_start_kernel(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
unsigned int cpu = smp_processor_id();
struct mm_struct *mm = &init_mm;

@@ -405,8 +411,6 @@ void __cpuinit secondary_start_kernel(void)
atomic_inc(&mm->mm_count);
current->active_mm = mm;

- preempt_disable();
-
setup_secondary(cpu);

platform_secondary_init(cpu);
@@ -414,19 +418,17 @@ void __cpuinit secondary_start_kernel(void)
/* setup local core timer */
bfin_local_timer_setup();

- local_irq_enable();
-
bfin_setup_caches(cpu);
+}

- notify_cpu_starting(cpu);
+void __cpuinit __cpu_pre_online(void *unused)
+{
/*
* Calibrate loops per jiffy value.
* IRQs need to be enabled here - D-cache can be invalidated
* in timer irq handler, so core B can read correct jiffies.
*/
calibrate_delay();
-
- cpu_idle();
}

void __init smp_prepare_boot_cpu(void)

2012-06-01 09:55:59

by Srivatsa S. Bhat

Subject: [PATCH 14/27] ia64: Move holding of vector_lock to __setup_vector_irq()

__setup_vector_irq() expects its caller to hold the vector_lock.
As of now there is only one caller, smp_callin(), and acquiring the lock
in smp_callin() around the call to __setup_vector_irq() obstructs the
conversion of ia64 to the generic SMP booting code. So move the lock
acquisition into __setup_vector_irq() itself (mimicking what x86 does).

Cc: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: David Howells <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/ia64/kernel/irq_ia64.c | 5 +++--
arch/ia64/kernel/smpboot.c | 2 +-
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 5c3e088..6ac99c8 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -241,13 +241,13 @@ reserve_irq_vector (int vector)
}

/*
- * Initialize vector_irq on a new cpu. This function must be called
- * with vector_lock held.
+ * Initialize vector_irq on a new cpu.
*/
void __setup_vector_irq(int cpu)
{
int irq, vector;

+ spin_lock(&vector_lock);
/* Clear vector_irq */
for (vector = 0; vector < IA64_NUM_VECTORS; ++vector)
per_cpu(vector_irq, cpu)[vector] = -1;
@@ -258,6 +258,7 @@ void __setup_vector_irq(int cpu)
vector = irq_to_vector(irq);
per_cpu(vector_irq, cpu)[vector] = irq;
}
+ spin_unlock(&vector_lock);
}

#if defined(CONFIG_SMP) && (defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_DIG))
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index df00a3c..709ce07 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -378,10 +378,10 @@ smp_callin (void)
set_numa_node(cpu_to_node_map[cpuid]);
set_numa_mem(local_memory_node(cpu_to_node_map[cpuid]));

- spin_lock(&vector_lock);
/* Setup the per cpu irq handling data structures */
__setup_vector_irq(cpuid);
notify_cpu_starting(cpuid);
+ spin_lock(&vector_lock);
set_cpu_online(cpuid, true);
per_cpu(cpu_state, cpuid) = CPU_ONLINE;
spin_unlock(&vector_lock);

2012-06-01 09:56:36

by Srivatsa S. Bhat

Subject: [PATCH 12/27] tile, smpboot: Use generic SMP booting infrastructure

Convert tile to use the generic framework to boot secondary CPUs.

Notes:
1. Don't enable interrupts in start_secondary().
2. Remove the unnecessary calls to local_irq_enable() in __cpu_up().
3. cpu_idle() was being called after doing a preempt_enable() instead
of a preempt_disable()! Fix it.

Cc: Chris Metcalf <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Yong Zhang <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/tile/kernel/smpboot.c | 22 ++++++++--------------
1 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/tile/kernel/smpboot.c b/arch/tile/kernel/smpboot.c
index 24a9c06..8558824 100644
--- a/arch/tile/kernel/smpboot.c
+++ b/arch/tile/kernel/smpboot.c
@@ -145,8 +145,6 @@ static void __cpuinit start_secondary(void)
/* Set our thread pointer appropriately. */
set_my_cpu_offset(__per_cpu_offset[cpuid]);

- preempt_disable();
-
/*
* In large machines even this will slow us down, since we
* will be contending for for the printk spinlock.
@@ -165,7 +163,6 @@ static void __cpuinit start_secondary(void)

/* Allow hypervisor messages to be received */
init_messaging();
- local_irq_enable();

/* Indicate that we're ready to come up. */
/* Must not do this before we're ready to receive messages */
@@ -183,6 +180,11 @@ static void __cpuinit start_secondary(void)
*/
void __cpuinit online_secondary(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
/*
* low-memory mappings have been cleared, flush them from
* the local TLBs too.
@@ -193,21 +195,15 @@ void __cpuinit online_secondary(void)

/* This must be done before setting cpu_online_mask */
wmb();
+}

- notify_cpu_starting(smp_processor_id());
-
- set_cpu_online(smp_processor_id(), 1);
- __get_cpu_var(cpu_state) = CPU_ONLINE;
-
+void __cpuinit __cpu_post_online(void *unused)
+{
/* Set up tile-specific state for this cpu. */
setup_cpu(0);

/* Set up tile-timer clock-event device on this cpu */
setup_tile_timer();
-
- preempt_enable();
-
- cpu_idle();
}

int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
@@ -217,13 +213,11 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
for (; !cpumask_test_cpu(cpu, &cpu_started); timeout++) {
if (timeout >= 50000) {
pr_info("skipping unresponsive cpu%d\n", cpu);
- local_irq_enable();
return -EIO;
}
udelay(100);
}

- local_irq_enable();
per_cpu(cpu_state, cpu) = CPU_UP_PREPARE;

/* Unleash the CPU! */

2012-06-01 09:57:09

by Srivatsa S. Bhat

Subject: [PATCH 16/27] mn10300: Fix horrible logic in smp_prepare_cpus()

In smp_prepare_cpus(), once max_cpus CPUs have been booted, the loop uses a
'continue' statement instead of breaking out, which wastes time by needlessly
iterating over all NR_CPUS entries!

Many things around this could be pulled into generic code in the future, but
for now, fix this particular piece of code locally (because I am unable to
convince myself to ignore it even temporarily, given that it is such a gem!).

Also, rewrite the 'if' condition in a more natural way.

Cc: David Howells <[email protected]>
Cc: Koichi Yasutake <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Yong Zhang <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/mn10300/kernel/smp.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
index e62c223..b19e75d2 100644
--- a/arch/mn10300/kernel/smp.c
+++ b/arch/mn10300/kernel/smp.c
@@ -695,9 +695,9 @@ void __init smp_prepare_cpus(unsigned int max_cpus)

/* Boot secondary CPUs (for which phy_id > 0) */
for (phy_id = 0; phy_id < NR_CPUS; phy_id++) {
+ if (cpucount + 1 >= max_cpus)
+ break;
/* Don't boot primary CPU */
- if (max_cpus <= cpucount + 1)
- continue;
if (phy_id != 0)
do_boot_cpu(phy_id);
set_cpu_possible(phy_id, true);

2012-06-01 09:58:09

by Srivatsa S. Bhat

Subject: [PATCH 24/27] parisc, smpboot: Use generic SMP booting infrastructure

Convert parisc to use the generic framework to boot secondary CPUs.

Notes:
1. The secondary CPU was being set in the cpu_online_mask way too early,
before things were fully initialized. Postpone that.

Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: David Howells <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/parisc/kernel/smp.c | 30 ++++++++++++++----------------
1 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
index 6266730..a6199e2 100644
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -32,6 +32,7 @@
#include <linux/bitops.h>
#include <linux/ftrace.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <linux/atomic.h>
#include <asm/current.h>
@@ -280,8 +281,6 @@ static void __init
smp_cpu_init(int cpunum)
{
extern int init_per_cpu(int); /* arch/parisc/kernel/processor.c */
- extern void init_IRQ(void); /* arch/parisc/kernel/irq.c */
- extern void start_cpu_itimer(void); /* arch/parisc/kernel/time.c */

/* Set modes and Enable floating point coprocessor */
(void) init_per_cpu(cpunum);
@@ -297,10 +296,12 @@ smp_cpu_init(int cpunum)
printk(KERN_CRIT "CPU#%d already initialized!\n", cpunum);
machine_halt();
}
+}

- notify_cpu_starting(cpunum);
-
- set_cpu_online(cpunum, true);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ extern void init_IRQ(void); /* arch/parisc/kernel/irq.c */
+ extern void start_cpu_itimer(void); /* arch/parisc/kernel/time.c */

/* Initialise the idle task for this CPU */
atomic_inc(&init_mm.mm_count);
@@ -310,6 +311,9 @@ smp_cpu_init(int cpunum)

init_IRQ(); /* make sure no IRQs are enabled or pending */
start_cpu_itimer();
+
+ flush_cache_all_local(); /* start with known state */
+ flush_tlb_all_local(NULL);
}


@@ -319,20 +323,14 @@ smp_cpu_init(int cpunum)
*/
void __init smp_callin(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
int slave_id = cpu_now_booting;

smp_cpu_init(slave_id);
- preempt_disable();
-
- flush_cache_all_local(); /* start with known state */
- flush_tlb_all_local(NULL);
-
- local_irq_enable(); /* Interrupts have been off until now */
-
- cpu_idle(); /* Wait for timer to schedule some work */
-
- /* NOTREACHED */
- panic("smp_callin() AAAAaaaaahhhh....\n");
}

/*

2012-06-01 10:04:00

by Srivatsa S. Bhat

Subject: [PATCH 02/27] smpboot: Add provisions for arch-specific locking around cpu_online_mask

We want to make smp booting as generic as possible and remove code
duplication in arch/ directories.

While manipulating the cpu_online_mask, x86 takes an additional lock, the
'vector_lock'. So provide a generic way to implement such arch-specific
extra locking: weakly defined functions arch_vector_lock() and
arch_vector_unlock(), which architectures can override as needed.

Cc: Thomas Gleixner <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

kernel/smpboot.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 6c26133..5ae1805 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -107,6 +107,13 @@ void __weak __cpu_pre_online(void *arg) {}
void __weak __cpu_post_online(void *arg) {}
#endif

+/*
+ * Optional arch-specific locking for manipulating cpu_online_mask while
+ * bringing up secondary CPUs.
+ */
+void __weak arch_vector_lock(void) {}
+void __weak arch_vector_unlock(void) {}
+

/**
* smpboot_start_secondary - Generic way to boot secondary processors
@@ -129,8 +136,10 @@ void __cpuinit smpboot_start_secondary(void *arg)

__cpu_pre_online(arg);

- /* Set the CPU in the cpu_online_mask */
+ /* Set the CPU in the cpu_online_mask with required locks held */
+ arch_vector_lock();
set_cpu_online(cpu, true);
+ arch_vector_unlock();

__cpu_post_online(arg);

2012-06-01 10:08:49

by Srivatsa S. Bhat

Subject: [PATCH 20/27] sparc64, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert sparc64 to use the generic framework to boot secondary CPUs.

Notes:
Remove the calls to cpu_idle() from the assembly files, because cpu_idle()
is now invoked by the generic code.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/sparc/kernel/hvtramp.S | 1 -
arch/sparc/kernel/smp_64.c | 18 ++++++++++--------
arch/sparc/kernel/trampoline_64.S | 1 -
3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/sparc/kernel/hvtramp.S b/arch/sparc/kernel/hvtramp.S
index 9365432..3eb7e0b 100644
--- a/arch/sparc/kernel/hvtramp.S
+++ b/arch/sparc/kernel/hvtramp.S
@@ -128,7 +128,6 @@ hv_cpu_startup:

call smp_callin
nop
- call cpu_idle
mov 0, %o0
call cpu_panic
nop
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 781bcb1..3c45538 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -25,6 +25,7 @@
#include <linux/ftrace.h>
#include <linux/cpu.h>
#include <linux/slab.h>
+#include <linux/smpboot.h>

#include <asm/head.h>
#include <asm/ptrace.h>
@@ -89,6 +90,11 @@ static volatile unsigned long callin_flag = 0;

void __cpuinit smp_callin(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
int cpuid = hard_smp_processor_id();

__local_per_cpu_offset = __per_cpu_offset(cpuid);
@@ -115,18 +121,14 @@ void __cpuinit smp_callin(void)
/* Attach to the address space of init_task. */
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
+}

- /* inform the notifiers about the new cpu */
- notify_cpu_starting(cpuid);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
rmb();
-
- set_cpu_online(cpuid, true);
- local_irq_enable();
-
- /* idle thread is expected to have preempt disabled */
- preempt_disable();
}

void cpu_panic(void)
diff --git a/arch/sparc/kernel/trampoline_64.S b/arch/sparc/kernel/trampoline_64.S
index da1b781..379b464 100644
--- a/arch/sparc/kernel/trampoline_64.S
+++ b/arch/sparc/kernel/trampoline_64.S
@@ -407,7 +407,6 @@ after_lock_tlb:

call smp_callin
nop
- call cpu_idle
mov 0, %o0
call cpu_panic
nop

2012-06-01 10:16:17

by Srivatsa S. Bhat

Subject: [PATCH 13/27] hexagon, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert hexagon to use the generic framework to boot secondary CPUs.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Richard Kuo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: David Howells <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/hexagon/kernel/smp.c | 18 ++++++++----------
1 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index 149fbef..0a679a4 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -29,6 +29,7 @@
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <asm/time.h> /* timer_interrupt */
#include <asm/hexagon_vm.h>
@@ -148,7 +149,6 @@ void __init smp_prepare_boot_cpu(void)

void __cpuinit start_secondary(void)
{
- unsigned int cpu;
unsigned long thread_ptr;

/* Calculate thread_info pointer from stack pointer */
@@ -165,6 +165,13 @@ void __cpuinit start_secondary(void)
: "r" (thread_ptr)
);

+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *arg)
+{
+ unsigned int cpu;
+
/* Set the memory struct */
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
@@ -177,17 +184,8 @@ void __cpuinit start_secondary(void)
setup_percpu_clockdev();

printk(KERN_INFO "%s cpu %d\n", __func__, current_thread_info()->cpu);
-
- notify_cpu_starting(cpu);
-
- set_cpu_online(cpu, true);
-
- local_irq_enable();
-
- cpu_idle();
}

-
/*
* called once for each present cpu
* apparently starts up the CPU and then

2012-06-01 10:18:31

by Srivatsa S. Bhat

Subject: [PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

Convert sparc32 to use the generic framework to boot secondary CPUs.

Cc: "David S. Miller" <[email protected]>
Cc: Sam Ravnborg <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Konrad Eisele <[email protected]>
Cc: Tkhai Kirill <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/sparc/kernel/leon_smp.c | 25 +++++++++++++++----------
arch/sparc/kernel/sun4d_smp.c | 26 +++++++++++++-------------
arch/sparc/kernel/sun4m_smp.c | 26 +++++++++++++++-----------
arch/sparc/kernel/trampoline_32.S | 1 -
4 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index a469090..498ca9b 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -25,6 +25,7 @@
#include <linux/gfp.h>
#include <linux/cpu.h>
#include <linux/clockchips.h>
+#include <linux/smpboot.h>

#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
@@ -73,13 +74,21 @@ static inline unsigned long do_swap(volatile unsigned long *ptr,

void __cpuinit leon_callin(void)
{
- int cpuid = hard_smp_processor_id();
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

local_ops->cache_all();
local_ops->tlb_all();
leon_configure_cache_smp();
+}

- notify_cpu_starting(cpuid);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

/* Get our local ticker going. */
register_percpu_ce(cpuid);
@@ -91,11 +100,10 @@ void __cpuinit leon_callin(void)
local_ops->tlb_all();

/*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- * Allow master to continue.
+ * Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
do_swap(&cpu_callin_map[cpuid], 1);

@@ -112,9 +120,6 @@ void __cpuinit leon_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index ddaea31..cd5367a 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <asm/cacheflush.h>
#include <asm/switch_to.h>
@@ -52,8 +53,12 @@ static inline void show_leds(int cpuid)

void __cpuinit smp4d_callin(void)
{
- int cpuid = hard_smp_processor_id();
- unsigned long flags;
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

/* Show we are alive */
cpu_leds[cpuid] = 0x6;
@@ -64,14 +69,13 @@ void __cpuinit smp4d_callin(void)

local_ops->cache_all();
local_ops->tlb_all();
+}
+
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();
+ unsigned long flags;

- notify_cpu_starting(cpuid);
- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- */
/* Get our local ticker going. */
register_percpu_ce(cpuid);

@@ -106,16 +110,12 @@ void __cpuinit smp4d_callin(void)
local_ops->cache_all();
local_ops->tlb_all();

- local_irq_enable(); /* We don't allow PIL 14 yet */
-
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
barrier();

spin_lock_irqsave(&sun4d_imsk_lock, flags);
cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
- set_cpu_online(cpuid, true);
-
}

/*
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 128af73..ed05f54 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -10,6 +10,7 @@
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <asm/cacheflush.h>
#include <asm/switch_to.h>
@@ -36,12 +37,20 @@ swap_ulong(volatile unsigned long *ptr, unsigned long val)

void __cpuinit smp4m_callin(void)
{
- int cpuid = hard_smp_processor_id();
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

local_ops->cache_all();
local_ops->tlb_all();
+}

- notify_cpu_starting(cpuid);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

register_percpu_ce(cpuid);

@@ -52,12 +61,11 @@ void __cpuinit smp4m_callin(void)
local_ops->tlb_all();

/*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
+ * Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
- /* Allow master to continue. */
swap_ulong(&cpu_callin_map[cpuid], 1);

/* XXX: What's up with all the flushes? */
@@ -75,10 +83,6 @@ void __cpuinit smp4m_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
-
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/trampoline_32.S b/arch/sparc/kernel/trampoline_32.S
index 7364ddc..20e90ba 100644
--- a/arch/sparc/kernel/trampoline_32.S
+++ b/arch/sparc/kernel/trampoline_32.S
@@ -88,7 +88,6 @@ cpu3_startup:
.align 4

smp_do_cpu_idle:
- call cpu_idle
mov 0, %o0

call cpu_panic

2012-06-01 10:21:44

by Srivatsa S. Bhat

Subject: [PATCH 17/27] mn10300, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert mn10300 to use the generic framework to boot secondary CPUs.

Notes:
1. In order to avoid enabling interrupts very early during secondary CPU
bringup (in smp_cpu_init()), use arch_local_save_flags() instead of
arch_local_cli_save().

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: David Howells <[email protected]>
Cc: Koichi Yasutake <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Yong Zhang <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/mn10300/kernel/smp.c | 32 +++++++++++---------------------
1 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
index b19e75d2..9c4e35e 100644
--- a/arch/mn10300/kernel/smp.c
+++ b/arch/mn10300/kernel/smp.c
@@ -25,6 +25,7 @@
#include <linux/profile.h>
#include <linux/smp.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>
#include <asm/tlbflush.h>
#include <asm/bitops.h>
#include <asm/processor.h>
@@ -100,7 +101,6 @@ cpumask_t cpu_initialized __initdata = CPU_MASK_NONE;
static int do_boot_cpu(int);
static void smp_show_cpu_info(int cpu_id);
static void smp_callin(void);
-static void smp_online(void);
static void smp_store_cpu_info(int);
static void smp_cpu_init(void);
static void smp_tune_scheduling(void);
@@ -607,7 +607,7 @@ static void __init smp_cpu_init(void)
mn10300_ipi_shutdown(SMP_BOOT_IRQ);

/* Set up the non-maskable call function IPI */
- flags = arch_local_cli_save();
+ flags = arch_local_save_flags();
GxICR(CALL_FUNCTION_NMI_IPI) = GxICR_NMI | GxICR_ENABLE | GxICR_DETECT;
tmp16 = GxICR(CALL_FUNCTION_NMI_IPI);
arch_local_irq_restore(flags);
@@ -655,20 +655,25 @@ void smp_prepare_cpu_init(void)
*/
int __init start_secondary(void *unused)
{
+ smpboot_start_secondary(unused);
+ return 0;
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
smp_cpu_init();
smp_callin();
while (!cpumask_test_cpu(smp_processor_id(), &smp_commenced_mask))
cpu_relax();

local_flush_tlb();
- preempt_disable();
- smp_online();
+}

+void __cpuinit __cpu_post_online(void *unused)
+{
#ifdef CONFIG_GENERIC_CLOCKEVENTS
init_clockevents();
#endif
- cpu_idle();
- return 0;
}

/**
@@ -865,21 +870,6 @@ static void __init smp_callin(void)
cpumask_set_cpu(cpu, &cpu_callin_map);
}

-/**
- * smp_online - Set cpu_online_mask
- */
-static void __init smp_online(void)
-{
- int cpu;
-
- cpu = smp_processor_id();
-
- notify_cpu_starting(cpu);
-
- set_cpu_online(cpu, true);
-
- local_irq_enable();
-}

/**
* smp_cpus_done -

2012-06-01 10:23:20

by Srivatsa S. Bhat

Subject: [PATCH 22/27] um, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert um to use the generic framework to boot secondary CPUs.

Notes:
1. Removed call to default_idle() in idle_proc(). The generic framework will
invoke cpu_idle().
2. The generic code will call preempt_disable() and local_irq_enable() which
weren't originally present in idle_proc().

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Jeff Dike <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/um/kernel/smp.c | 12 +++++++-----
1 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/um/kernel/smp.c b/arch/um/kernel/smp.c
index a02b7e9..6cffcf4 100644
--- a/arch/um/kernel/smp.c
+++ b/arch/um/kernel/smp.c
@@ -10,6 +10,7 @@
#ifdef CONFIG_SMP

#include "linux/sched.h"
+#include "linux/smpboot.h"
#include "linux/module.h"
#include "linux/threads.h"
#include "linux/interrupt.h"
@@ -58,6 +59,12 @@ static cpumask_t cpu_callin_map = CPU_MASK_NONE;

static int idle_proc(void *cpup)
{
+ smpboot_start_secondary(cpup);
+ return 0;
+}
+
+void __cpuinit __cpu_pre_starting(void *cpup)
+{
int cpu = (int) cpup, err;

err = os_pipe(cpu_data[cpu].ipi_pipe, 1, 1);
@@ -74,11 +81,6 @@ static int idle_proc(void *cpup)

while (!cpu_isset(cpu, smp_commenced_mask))
cpu_relax();
-
- notify_cpu_starting(cpu);
- set_cpu_online(cpu, true);
- default_idle();
- return 0;
}

static struct task_struct *idle_thread(int cpu)

2012-06-01 10:23:32

by Srivatsa S. Bhat

Subject: [PATCH 23/27] cris, smpboot: Use generic SMP booting infrastructure

Convert cris to use the generic framework to boot secondary CPUs.

Cc: Mikael Starvik <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/cris/arch-v32/kernel/smp.c | 14 ++++++--------
1 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/cris/arch-v32/kernel/smp.c b/arch/cris/arch-v32/kernel/smp.c
index ebe2cb3..357114a 100644
--- a/arch/cris/arch-v32/kernel/smp.c
+++ b/arch/cris/arch-v32/kernel/smp.c
@@ -17,6 +17,7 @@
#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/module.h>
+#include <linux/smpboot.h>

#define IPI_SCHEDULE 1
#define IPI_CALL 2
@@ -145,8 +146,11 @@ smp_boot_one_cpu(int cpuid, struct task_struct *idle)
* specific stuff such as the local timer and the MMU. */
void __init smp_callin(void)
{
- extern void cpu_idle(void);
+ smpboot_start_secondary(NULL);
+}

+void __cpuinit __cpu_pre_starting(void *unused)
+{
int cpu = cpu_now_booting;
reg_intr_vect_rw_mask vect_mask = {0};

@@ -161,16 +165,10 @@ void __init smp_callin(void)
/* Setup local timer. */
cris_timer_init();

- /* Enable IRQ and idle */
+ /* Enable IRQ */
REG_WR(intr_vect, irq_regs[cpu], rw_mask, vect_mask);
crisv32_unmask_irq(IPI_INTR_VECT);
crisv32_unmask_irq(TIMER0_INTR_VECT);
- preempt_disable();
- notify_cpu_starting(cpu);
- local_irq_enable();
-
- set_cpu_online(cpu, true);
- cpu_idle();
}

/* Stop execution on this CPU.*/

2012-06-01 10:45:16

by Heiko Carstens

Subject: Re: [PATCH 25/27] s390, smpboot: Use generic SMP booting infrastructure

On Fri, Jun 01, 2012 at 02:46:09PM +0530, Srivatsa S. Bhat wrote:
> Convert s390 to use the generic framework to boot secondary CPUs.
>
> Cc: Martin Schwidefsky <[email protected]>
> Cc: Heiko Carstens <[email protected]>
> Cc: [email protected] (supporter:S390)
> Cc: Michael Holzheu <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Jan Glauber <[email protected]>
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/s390/kernel/smp.c | 12 ++++++------
> 1 files changed, 6 insertions(+), 6 deletions(-)
>

Looks good to me.

Acked-by: Heiko Carstens <[email protected]>

2012-06-01 11:05:09

by Russell King - ARM Linux

Subject: Re: [PATCH 26/27] arm, smpboot: Use generic SMP booting infrastructure

If you're expecting an ack from me, you're not going to get it because
this patch is useless on its own. It can't be reviewed and it can't be
tested without the rest of the series.

On Fri, Jun 01, 2012 at 02:46:20PM +0530, Srivatsa S. Bhat wrote:
> Convert arm to use the generic framework to boot secondary CPUs.
>
> Notes:
> The calls to local_irq_enable() and local_fiq_enable() which were there
> in secondary_start_kernel() originally, are retained in __cpu_post_online()
> since the generic code only calls local_irq_enable().
> Also, we were anyway going to enable interrupts in the generic code
> immediately after __cpu_post_online(), so it's not like we enabled interrupts
> way too early.
>
> Cc: Russell King <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: David Brown <[email protected]>
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/arm/kernel/smp.c | 26 +++++++++++++-------------
> 1 files changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
> index b735521..460b7e7 100644
> --- a/arch/arm/kernel/smp.c
> +++ b/arch/arm/kernel/smp.c
> @@ -25,6 +25,7 @@
> #include <linux/percpu.h>
> #include <linux/clockchips.h>
> #include <linux/completion.h>
> +#include <linux/smpboot.h>
>
> #include <linux/atomic.h>
> #include <asm/cacheflush.h>
> @@ -227,6 +228,11 @@ static void percpu_timer_setup(void);
> */
> asmlinkage void __cpuinit secondary_start_kernel(void)
> {
> + smpboot_start_secondary(NULL);
> +}
> +
> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> struct mm_struct *mm = &init_mm;
> unsigned int cpu = smp_processor_id();
>
> @@ -244,26 +250,25 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
> printk("CPU%u: Booted secondary processor\n", cpu);
>
> cpu_init();
> - preempt_disable();
> trace_hardirqs_off();
>
> /*
> * Give the platform a chance to do its own initialisation.
> */
> platform_secondary_init(cpu);
> +}
>
> - notify_cpu_starting(cpu);
> +void __cpuinit __cpu_pre_online(void *unused)
> +{
> + unsigned int cpu = smp_processor_id();
>
> calibrate_delay();
>
> smp_store_cpu_info(cpu);
> +}
>
> - /*
> - * OK, now it's safe to let the boot CPU continue. Wait for
> - * the CPU migration code to notice that the CPU is online
> - * before we continue - which happens after __cpu_up returns.
> - */
> - set_cpu_online(cpu, true);
> +void __cpuinit __cpu_post_online(void *unused)
> +{
> complete(&cpu_running);
>
> /*
> @@ -273,11 +278,6 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
>
> local_irq_enable();
> local_fiq_enable();
> -
> - /*
> - * OK, it's off to the idle thread for us
> - */
> - cpu_idle();
> }
>
> void __init smp_cpus_done(unsigned int max_cpus)
>

2012-06-01 12:10:23

by Srivatsa S. Bhat

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

Adding arch maintainers to Cc, which I had missed earlier. No changes to the
patch.

========

From: Nikunj A. Dadhania <[email protected]>

Subject: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

The smp booting and cpu hotplug related code is heavily duplicated in various
architectures. To solve that problem, provide a generic framework to boot
secondary CPUs which can then be used by the architecture code.

For now, there is no functional change in the smp boot or hotplug sequence.
Just plain consolidation of common code from various architectures.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Hirokazu Takata <[email protected]>
Cc: Richard Kuo <[email protected]>
Cc: David Howells <[email protected]>
Cc: Bob Liu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Russell King <[email protected]>
Cc: Matt Turner <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

include/linux/smpboot.h | 10 ++++++
kernel/smpboot.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++
kernel/smpboot.h | 4 +-
3 files changed, 90 insertions(+), 2 deletions(-)
create mode 100644 include/linux/smpboot.h

diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
new file mode 100644
index 0000000..63bbedd
--- /dev/null
+++ b/include/linux/smpboot.h
@@ -0,0 +1,10 @@
+/*
+ * Generic SMP CPU booting framework
+ */
+
+#ifndef SMPBOOT_H
+#define SMPBOOT_H
+
+extern void smpboot_start_secondary(void *arg);
+
+#endif
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 98f60c5..6c26133 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -6,6 +6,7 @@
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/percpu.h>
+#include <linux/cpu.h>

#include "smpboot.h"

@@ -65,3 +66,80 @@ void __init idle_threads_init(void)
}
}
#endif
+
+
+/* Implement the following functions in your architecture, as appropriate. */
+
+/**
+ * __cpu_pre_starting()
+ *
+ * Implement whatever you need to do before the CPU_STARTING notifiers are
+ * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
+ * coming up. So that cpu better be prepared! IOW, implement all the early
+ * boot/init code for the cpu here. And do NOT enable interrupts.
+ */
+#ifndef __cpu_pre_starting
+void __weak __cpu_pre_starting(void *arg) {}
+#endif
+
+/**
+ * __cpu_pre_online()
+ *
+ * Implement whatever you need to do before the upcoming CPU is set in the
+ * cpu_online_mask. (Setting the cpu in the cpu_online_mask is like an
+ * announcement that the cpu has come up, because it would be publicly
+ * visible now). Again, don't enable interrupts.
+ */
+#ifndef __cpu_pre_online
+void __weak __cpu_pre_online(void *arg) {}
+#endif
+
+/**
+ * __cpu_post_online()
+ *
+ * Implement whatever you need to do after the CPU has been set in the
+ * cpu_online_mask, and before enabling interrupts and calling cpu_idle().
+ * Ideally, it is preferable if you don't have anything to do here.
+ * We want to move to a model where setting cpu_online_mask is pretty
+ * much the final step. Again, don't enable interrupts.
+ */
+#ifndef __cpu_post_online
+void __weak __cpu_post_online(void *arg) {}
+#endif
+
+
+/**
+ * smpboot_start_secondary - Generic way to boot secondary processors
+ */
+void __cpuinit smpboot_start_secondary(void *arg)
+{
+ unsigned int cpu;
+
+ /*
+ * SMP booting is extremely fragile in some architectures. So run
+ * the cpu initialization code first before anything else.
+ */
+ __cpu_pre_starting(arg);
+
+ preempt_disable();
+ cpu = smp_processor_id();
+
+ /* Invoke the CPU_STARTING notifier callbacks */
+ notify_cpu_starting(cpu);
+
+ __cpu_pre_online(arg);
+
+ /* Set the CPU in the cpu_online_mask */
+ set_cpu_online(cpu, true);
+
+ __cpu_post_online(arg);
+
+ /* Enable local interrupts now */
+ local_irq_enable();
+
+ wmb();
+ cpu_idle();
+
+ /* We should never reach here! */
+ BUG();
+}
diff --git a/kernel/smpboot.h b/kernel/smpboot.h
index 80c0acf..9753cc0 100644
--- a/kernel/smpboot.h
+++ b/kernel/smpboot.h
@@ -1,5 +1,5 @@
-#ifndef SMPBOOT_H
-#define SMPBOOT_H
+#ifndef __SMPBOOT_H
+#define __SMPBOOT_H

struct task_struct;



From: Srivatsa S. Bhat <[email protected]>
Subject: [PATCH 02/27] smpboot: Add provisions for arch-specific locking around cpu_online_mask

We want to make smp booting as generic as possible and remove code
duplication in arch/ directories.

While manipulating the cpu_online_mask, x86 uses an additional lock, i.e.,
'vector_lock'. So provide a generic way to implement such arch-specific
extra locking, by providing weakly defined functions arch_vector_lock()
and arch_vector_unlock() which can be overridden by different architectures
suitably.

Cc: Thomas Gleixner <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

kernel/smpboot.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 6c26133..5ae1805 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -107,6 +107,13 @@ void __weak __cpu_pre_online(void *arg) {}
void __weak __cpu_post_online(void *arg) {}
#endif

+/*
+ * Optional arch-specific locking for manipulating cpu_online_mask while
+ * bringing up secondary CPUs.
+ */
+void __weak arch_vector_lock(void) {}
+void __weak arch_vector_unlock(void) {}
+

/**
* smpboot_start_secondary - Generic way to boot secondary processors
@@ -129,8 +136,10 @@ void __cpuinit smpboot_start_secondary(void *arg)

__cpu_pre_online(arg);

- /* Set the CPU in the cpu_online_mask */
+ /* Set the CPU in the cpu_online_mask with required locks held */
+ arch_vector_lock();
set_cpu_online(cpu, true);
+ arch_vector_unlock();

__cpu_post_online(arg);



From: Srivatsa S. Bhat <[email protected]>
Subject: [PATCH 03/27] smpboot: Define and use cpu_state per-cpu variable in generic code

The per-cpu variable cpu_state is used in x86 and in several other
architectures to track the state of a CPU during bringup and hotplug.
Pull it out into generic code.

Cc: Tony Luck <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Yong Zhang <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/ia64/include/asm/cpu.h | 2 --
arch/ia64/kernel/process.c | 1 +
arch/ia64/kernel/smpboot.c | 6 +-----
arch/mips/cavium-octeon/smp.c | 4 +---
arch/powerpc/kernel/smp.c | 6 +-----
arch/sh/include/asm/smp.h | 2 --
arch/sh/kernel/smp.c | 4 +---
arch/tile/kernel/smpboot.c | 4 +---
arch/x86/include/asm/cpu.h | 2 --
arch/x86/kernel/smpboot.c | 4 +---
arch/x86/xen/smp.c | 1 +
include/linux/smpboot.h | 1 +
kernel/smpboot.c | 4 ++++
13 files changed, 13 insertions(+), 28 deletions(-)

diff --git a/arch/ia64/include/asm/cpu.h b/arch/ia64/include/asm/cpu.h
index fcca30b..1c3acac 100644
--- a/arch/ia64/include/asm/cpu.h
+++ b/arch/ia64/include/asm/cpu.h
@@ -12,8 +12,6 @@ struct ia64_cpu {

DECLARE_PER_CPU(struct ia64_cpu, cpu_devices);

-DECLARE_PER_CPU(int, cpu_state);
-
#ifdef CONFIG_HOTPLUG_CPU
extern int arch_register_cpu(int num);
extern void arch_unregister_cpu(int);
diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
index 5e0e86d..32566c7 100644
--- a/arch/ia64/kernel/process.c
+++ b/arch/ia64/kernel/process.c
@@ -29,6 +29,7 @@
#include <linux/kdebug.h>
#include <linux/utsname.h>
#include <linux/tracehook.h>
+#include <linux/smpboot.h>

#include <asm/cpu.h>
#include <asm/delay.h>
diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
index 963d2db..df00a3c 100644
--- a/arch/ia64/kernel/smpboot.c
+++ b/arch/ia64/kernel/smpboot.c
@@ -39,6 +39,7 @@
#include <linux/efi.h>
#include <linux/percpu.h>
#include <linux/bitops.h>
+#include <linux/smpboot.h>

#include <linux/atomic.h>
#include <asm/cache.h>
@@ -111,11 +112,6 @@ extern unsigned long ia64_iobase;

struct task_struct *task_for_booting_cpu;

-/*
- * State for each CPU
- */
-DEFINE_PER_CPU(int, cpu_state);
-
cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
EXPORT_SYMBOL(cpu_core_map);
DEFINE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
index 97e7ce9..93cd4b0 100644
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -13,6 +13,7 @@
#include <linux/kernel_stat.h>
#include <linux/sched.h>
#include <linux/module.h>
+#include <linux/smpboot.h>

#include <asm/mmu_context.h>
#include <asm/time.h>
@@ -252,9 +253,6 @@ static void octeon_cpus_done(void)

#ifdef CONFIG_HOTPLUG_CPU

-/* State of each CPU. */
-DEFINE_PER_CPU(int, cpu_state);
-
extern void fixup_irqs(void);

static DEFINE_SPINLOCK(smp_reserve_lock);
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index e1417c4..1928058a 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -31,6 +31,7 @@
#include <linux/cpu.h>
#include <linux/notifier.h>
#include <linux/topology.h>
+#include <linux/smpboot.h>

#include <asm/ptrace.h>
#include <linux/atomic.h>
@@ -57,11 +58,6 @@
#define DBG(fmt...)
#endif

-#ifdef CONFIG_HOTPLUG_CPU
-/* State of each CPU during hotplug phases */
-static DEFINE_PER_CPU(int, cpu_state) = { 0 };
-#endif
-
struct thread_info *secondary_ti;

DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
diff --git a/arch/sh/include/asm/smp.h b/arch/sh/include/asm/smp.h
index 78b0d0f4..bda041e 100644
--- a/arch/sh/include/asm/smp.h
+++ b/arch/sh/include/asm/smp.h
@@ -31,8 +31,6 @@ enum {
SMP_MSG_NR, /* must be last */
};

-DECLARE_PER_CPU(int, cpu_state);
-
void smp_message_recv(unsigned int msg);
void smp_timer_broadcast(const struct cpumask *mask);

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index b86e9ca..8e0fde0 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -22,6 +22,7 @@
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/atomic.h>
+#include <linux/smpboot.h>
#include <asm/processor.h>
#include <asm/mmu_context.h>
#include <asm/smp.h>
@@ -34,9 +35,6 @@ int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */

struct plat_smp_ops *mp_ops = NULL;

-/* State of each CPU */
-DEFINE_PER_CPU(int, cpu_state) = { 0 };
-
void __cpuinit register_smp_ops(struct plat_smp_ops *ops)
{
if (mp_ops)
diff --git a/arch/tile/kernel/smpboot.c b/arch/tile/kernel/smpboot.c
index e686c5a..24a9c06 100644
--- a/arch/tile/kernel/smpboot.c
+++ b/arch/tile/kernel/smpboot.c
@@ -25,13 +25,11 @@
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/irq.h>
+#include <linux/smpboot.h>
#include <asm/mmu_context.h>
#include <asm/tlbflush.h>
#include <asm/sections.h>

-/* State of each CPU. */
-static DEFINE_PER_CPU(int, cpu_state) = { 0 };
-
/* The messaging code jumps to this pointer during boot-up */
unsigned long start_cpu_function_addr;

diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index 4564c8e..2d0b239 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -30,8 +30,6 @@ extern int arch_register_cpu(int num);
extern void arch_unregister_cpu(int);
#endif

-DECLARE_PER_CPU(int, cpu_state);
-
int mwait_usable(const struct cpuinfo_x86 *);

#endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index bfbe30e..269bc1f 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -51,6 +51,7 @@
#include <linux/stackprotector.h>
#include <linux/gfp.h>
#include <linux/cpuidle.h>
+#include <linux/smpboot.h>

#include <asm/acpi.h>
#include <asm/desc.h>
@@ -73,9 +74,6 @@
#include <asm/smpboot_hooks.h>
#include <asm/i8259.h>

-/* State of each CPU */
-DEFINE_PER_CPU(int, cpu_state) = { 0 };
-
#ifdef CONFIG_HOTPLUG_CPU
/*
* We need this for trampoline_base protection from concurrent accesses when
diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 2ef5948..09a7199 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -16,6 +16,7 @@
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/smp.h>
+#include <linux/smpboot.h>

#include <asm/paravirt.h>
#include <asm/desc.h>
diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
index 63bbedd..834d90c 100644
--- a/include/linux/smpboot.h
+++ b/include/linux/smpboot.h
@@ -5,6 +5,7 @@
#ifndef SMPBOOT_H
#define SMPBOOT_H

+DECLARE_PER_CPU(int, cpu_state);
extern void smpboot_start_secondary(void *arg);

#endif
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 5ae1805..0df43b0 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -67,6 +67,8 @@ void __init idle_threads_init(void)
}
#endif

+/* State of each CPU during bringup/teardown */
+DEFINE_PER_CPU(int, cpu_state) = { 0 };

/* Implement the following functions in your architecture, as appropriate. */

@@ -141,6 +143,8 @@ void __cpuinit smpboot_start_secondary(void *arg)
set_cpu_online(cpu, true);
arch_vector_unlock();

+ per_cpu(cpu_state, cpu) = CPU_ONLINE;
+
__cpu_post_online(arg);

/* Enable local interrupts now */

2012-06-01 12:13:51

by Srivatsa S. Bhat

Subject: Re: [PATCH 02/27] smpboot: Add provisions for arch-specific locking around cpu_online_mask

Adding arch maintainers to Cc, which I had missed earlier. No changes to the
patch.

======

From: Srivatsa S. Bhat <[email protected]>
Subject: [PATCH 02/27] smpboot: Add provisions for arch-specific locking around cpu_online_mask

We want to make smp booting as generic as possible and remove code
duplication in arch/ directories.

While manipulating the cpu_online_mask, x86 uses an additional lock, i.e.,
'vector_lock'. So provide a generic way to implement such arch-specific
extra locking, by providing weakly defined functions arch_vector_lock()
and arch_vector_unlock() which can be overridden by different architectures
suitably.

Cc: Thomas Gleixner <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Hirokazu Takata <[email protected]>
Cc: Richard Kuo <[email protected]>
Cc: David Howells <[email protected]>
Cc: Bob Liu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Russell King <[email protected]>
Cc: Matt Turner <[email protected]>
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

kernel/smpboot.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 6c26133..5ae1805 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -107,6 +107,13 @@ void __weak __cpu_pre_online(void *arg) {}
void __weak __cpu_post_online(void *arg) {}
#endif

+/*
+ * Optional arch-specific locking for manipulating cpu_online_mask while
+ * bringing up secondary CPUs.
+ */
+void __weak arch_vector_lock(void) {}
+void __weak arch_vector_unlock(void) {}
+

/**
* smpboot_start_secondary - Generic way to boot secondary processors
@@ -129,8 +136,10 @@ void __cpuinit smpboot_start_secondary(void *arg)

__cpu_pre_online(arg);

- /* Set the CPU in the cpu_online_mask */
+ /* Set the CPU in the cpu_online_mask with required locks held */
+ arch_vector_lock();
set_cpu_online(cpu, true);
+ arch_vector_unlock();

__cpu_post_online(arg);


2012-06-01 12:59:56

by Jan Beulich

Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

>>> On 01.06.12 at 11:11, "Srivatsa S. Bhat" <[email protected]>
wrote:
> xen_play_dead calls cpu_bringup() which looks weird, because xen_play_dead()
> is invoked in the cpu down path, whereas cpu_bringup() (as the name
> suggests) is useful in the cpu bringup path.

This might not be correct - the code as it is without this change is
safe even when the vCPU gets onlined back later by an external
entity (e.g. the Xen tool stack), and it would in that case resume
at the return point of the VCPUOP_down hypercall. That might
be a heritage from the original XenoLinux tree though, and be
meaningless in pv-ops context - Jeremy, Konrad?

Possibly it was bogus/unused even in that original tree - Keir?

Jan

> Getting rid of xen_play_dead()'s dependency on cpu_bringup() helps in
> hooking on to the generic SMP booting framework.
>
> Also remove the extra call to preempt_enable() added by commit 41bd956
> (xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while
> atomic) because it becomes unnecessary after this change.
>
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Cc: Jeremy Fitzhardinge <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/x86/xen/smp.c | 8 --------
> 1 files changed, 0 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 09a7199..602d6b7 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -417,14 +417,6 @@ static void __cpuinit xen_play_dead(void) /* used only
> with HOTPLUG_CPU */
> {
> play_dead_common();
> HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
> - cpu_bringup();
> - /*
> - * Balance out the preempt calls - as we are running in cpu_idle
> - * loop which has been called at bootup from cpu_bringup_and_idle.
> - * The cpucpu_bringup_and_idle called cpu_bringup which made a
> - * preempt_disable() So this preempt_enable will balance it out.
> - */
> - preempt_enable();
> }
>
> #else /* !CONFIG_HOTPLUG_CPU */
>
>
> _______________________________________________
> Xen-devel mailing list
> [email protected]
> http://lists.xen.org/xen-devel


2012-06-01 15:14:32

by Srivatsa S. Bhat

Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

On 06/01/2012 06:29 PM, Jan Beulich wrote:

>>>> On 01.06.12 at 11:11, "Srivatsa S. Bhat" <[email protected]>
> wrote:
>> xen_play_dead calls cpu_bringup() which looks weird, because xen_play_dead()
>> is invoked in the cpu down path, whereas cpu_bringup() (as the name
>> suggests) is useful in the cpu bringup path.
>
> This might not be correct - the code as it is without this change is
> safe even when the vCPU gets onlined back later by an external
> entity (e.g. the Xen tool stack), and it would in that case resume
> at the return point of the VCPUOP_down hypercall. That might
> be a heritage from the original XenoLinux tree though, and be
> meaningless in pv-ops context - Jeremy, Konrad?
>
> Possibly it was bogus/unused even in that original tree - Keir?
>


Thanks for your comments Jan!

In case this change is wrong, the other method I had in mind was to call
cpu_bringup_and_idle() in xen_play_dead(). (Even ARM does something similar,
in the sense that it runs the cpu bringup code including cpu_idle(), in the
cpu offline path, namely the cpu_die() function). Would that approach work
for xen as well? If yes, then we wouldn't have any issues to convert xen to
generic code.

Regards,
Srivatsa S. Bhat

>
>> Getting rid of xen_play_dead()'s dependency on cpu_bringup() helps in
>> hooking on to the generic SMP booting framework.
>>
>> Also remove the extra call to preempt_enable() added by commit 41bd956
>> (xen/smp: Fix CPU online/offline bug triggering a BUG: scheduling while
>> atomic) because it becomes unnecessary after this change.
>>
>> Cc: Konrad Rzeszutek Wilk <[email protected]>
>> Cc: Jeremy Fitzhardinge <[email protected]>
>> Cc: Thomas Gleixner <[email protected]>
>> Cc: Ingo Molnar <[email protected]>
>> Cc: "H. Peter Anvin" <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Srivatsa S. Bhat <[email protected]>
>> ---
>>
>> arch/x86/xen/smp.c | 8 --------
>> 1 files changed, 0 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
>> index 09a7199..602d6b7 100644
>> --- a/arch/x86/xen/smp.c
>> +++ b/arch/x86/xen/smp.c
>> @@ -417,14 +417,6 @@ static void __cpuinit xen_play_dead(void) /* used only
>> with HOTPLUG_CPU */
>> {
>> play_dead_common();
>> HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
>> - cpu_bringup();
>> - /*
>> - * Balance out the preempt calls - as we are running in cpu_idle
>> - * loop which has been called at bootup from cpu_bringup_and_idle.
>> - * The cpucpu_bringup_and_idle called cpu_bringup which made a
>> - * preempt_disable() So this preempt_enable will balance it out.
>> - */
>> - preempt_enable();
>> }
>>
>> #else /* !CONFIG_HOTPLUG_CPU */
>>
>>

2012-06-01 15:36:36

by Jan Beulich

Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

>>> On 01.06.12 at 17:13, "Srivatsa S. Bhat" <[email protected]>
wrote:
> On 06/01/2012 06:29 PM, Jan Beulich wrote:
>
>>>>> On 01.06.12 at 11:11, "Srivatsa S. Bhat" <[email protected]>
>> wrote:
>>> xen_play_dead calls cpu_bringup() which looks weird, because xen_play_dead()
>>> is invoked in the cpu down path, whereas cpu_bringup() (as the name
>>> suggests) is useful in the cpu bringup path.
>>
>> This might not be correct - the code as it is without this change is
>> safe even when the vCPU gets onlined back later by an external
>> entity (e.g. the Xen tool stack), and it would in that case resume
>> at the return point of the VCPUOP_down hypercall. That might
>> be a heritage from the original XenoLinux tree though, and be
>> meaningless in pv-ops context - Jeremy, Konrad?
>>
>> Possibly it was bogus/unused even in that original tree - Keir?
>>
>
>
> Thanks for your comments Jan!
>
> In case this change is wrong, the other method I had in mind was to call
> cpu_bringup_and_idle() in xen_play_dead(). (Even ARM does something similar,
> in the sense that it runs the cpu bringup code including cpu_idle(), in the
> cpu offline path, namely the cpu_die() function). Would that approach work
> for xen as well? If yes, then we wouldn't have any issues to convert xen to
> generic code.

No, that wouldn't work either afaict - the function is expected
to return.

Jan

2012-06-01 16:51:18

by Sam Ravnborg

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

> +/* Implement the following functions in your architecture, as appropriate. */
> +
> +/**
> + * __cpu_pre_starting()
> + *
> + * Implement whatever you need to do before the CPU_STARTING notifiers are
> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
> + * coming up. So that cpu better be prepared! IOW, implement all the early
> + * boot/init code for the cpu here. And do NOT enable interrupts.
> + */
> +#ifndef __cpu_pre_starting
> +void __weak __cpu_pre_starting(void *arg) {}
> +#endif

__What __is __the __purpose __of __all __these __underscores __used
__as __function __prefix? __It __does __not __help __readability.

Does the nicely worded comment follow kerneldoc style?
I think not as the parameter is not described.

Sam
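For reference, a kernel-doc-conformant version of that comment would describe the parameter explicitly. A suggested wording (not the patch author's), following the `name() - short description` plus `@param:` convention:

```c
/**
 * __cpu_pre_starting() - arch-specific setup before CPU_STARTING notifiers
 * @arg: opaque argument passed through from smpboot_start_secondary()
 *       (currently unused by all architectures)
 *
 * Runs *on* the CPU that is coming up, before the CPU_STARTING callbacks
 * are invoked, so all early boot/init code for that CPU belongs here.
 * Must not enable interrupts.
 */
```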

2012-06-01 16:53:26

by Sam Ravnborg

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

> +
> +
> +/* Implement the following functions in your architecture, as appropriate. */
> +
> +/**
> + * __cpu_pre_starting()
> + *
> + * Implement whatever you need to do before the CPU_STARTING notifiers are
> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
> + * coming up. So that cpu better be prepared! IOW, implement all the early
> + * boot/init code for the cpu here. And do NOT enable interrupts.
> + */
> +#ifndef __cpu_pre_starting
> +void __weak __cpu_pre_starting(void *arg) {}
> +#endif

I miss the prototype for this in a header?
And the comment maybe belong in the header - not in the implementation?

Sam

2012-06-01 18:04:58

by David Miller

Subject: Re: [PATCH 20/27] sparc64, smpboot: Use generic SMP booting infrastructure

From: "Srivatsa S. Bhat" <[email protected]>
Date: Fri, 01 Jun 2012 14:44:54 +0530

> - call cpu_idle
> mov 0, %o0
...
> - call cpu_idle
> mov 0, %o0


The 'mov' has to be removed too, it's in the delay slot of the
call instruction and is passing in the first argument.
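For readers unfamiliar with SPARC: the instruction textually after a `call` sits in the branch delay slot and executes before the callee runs, so the `mov` belongs to the call it follows. An illustration (not from the patch):

```asm
	call	cpu_idle	! control transfers to cpu_idle, but only...
	 mov	0, %o0		! ...after the delay slot runs: %o0 = arg0 = 0
```

Deleting only the `call` would leave a stray `mov 0, %o0` that clobbers %o0 ahead of whatever instruction follows, so the pair must be removed together.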

2012-06-01 18:05:00

by David Miller

Subject: Re: [PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

From: "Srivatsa S. Bhat" <[email protected]>
Date: Fri, 01 Jun 2012 14:45:09 +0530

> smp_do_cpu_idle:
> - call cpu_idle
> mov 0, %o0

Just like the sparc64 patch, you need to remove the 'mov'

2012-06-01 18:08:03

by Chris Metcalf

Subject: Re: [PATCH 12/27] tile, smpboot: Use generic SMP booting infrastructure

On 6/1/2012 5:12 AM, Srivatsa S. Bhat wrote:
> Convert tile to use the generic framework to boot secondary CPUs.
>
> Notes:
> 1. Don't enable interrupts in start_secondary().
> 2. Remove the unnecessary calls to local_irq_enable() in __cpu_up().
> 3. cpu_idle() was being called after doing a preempt_enable() instead
> of a preempt_disable()! Fix it.
>
> Cc: Chris Metcalf <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Yong Zhang <[email protected]>
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/tile/kernel/smpboot.c | 22 ++++++++--------------
> 1 files changed, 8 insertions(+), 14 deletions(-)

Acked-by: Chris Metcalf <[email protected]>

--
Chris Metcalf, Tilera Corp.
http://www.tilera.com

2012-06-01 18:54:53

by Sam Ravnborg

Subject: Re: [PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On Fri, Jun 01, 2012 at 01:56:12PM -0400, David Miller wrote:
> From: "Srivatsa S. Bhat" <[email protected]>
> Date: Fri, 01 Jun 2012 14:45:09 +0530
>
> > smp_do_cpu_idle:
> > - call cpu_idle
> > mov 0, %o0
>
> Just like the sparc64 patch, you need to remove the 'mov'

And the other callers of cpu_idle in the same file should
be updated/removed too.

There is one for sun4m, sun4d and leon.

Sam

2012-06-01 22:30:33

by Srivatsa S. Bhat

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/01/2012 10:21 PM, Sam Ravnborg wrote:

>> +/* Implement the following functions in your architecture, as appropriate. */
>> +
>> +/**
>> + * __cpu_pre_starting()
>> + *
>> + * Implement whatever you need to do before the CPU_STARTING notifiers are
>> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
>> + * coming up. So that cpu better be prepared! IOW, implement all the early
>> + * boot/init code for the cpu here. And do NOT enable interrupts.
>> + */
>> +#ifndef __cpu_pre_starting
>> +void __weak __cpu_pre_starting(void *arg) {}
>> +#endif
>
> __What __is __the __purpose __of __all __these __underscores __used
> __as __a __function __prefix? __It __does __not __help __readability.

>


We had used "__" as the function prefix to emphasize that these functions are
implemented/overridden in the depths of architecture-specific code.

But now that you mention it, I see that we don't really have something like an
arch-independent variant without the "__" prefix. So adding the "__" prefix
might not be really necessary, since there is nothing to distinguish name-wise.

However, I do want to emphasize that this isn't generic code. So how about
an "arch_" prefix instead? Something like:
arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online()?

> Does the nicely worded comment follow kerneldoc style?
> I think not as the parameter is not described.
>

I'll fix that. (The parameter is simply unused for now, btw).

Thanks for your review!

Regards,
Srivatsa S. Bhat

2012-06-01 22:42:24

by Srivatsa S. Bhat

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/01/2012 10:23 PM, Sam Ravnborg wrote:

>> +
>> +
>> +/* Implement the following functions in your architecture, as appropriate. */
>> +
>> +/**
>> + * __cpu_pre_starting()
>> + *
>> + * Implement whatever you need to do before the CPU_STARTING notifiers are
>> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
>> + * coming up. So that cpu better be prepared! IOW, implement all the early
>> + * boot/init code for the cpu here. And do NOT enable interrupts.
>> + */
>> +#ifndef __cpu_pre_starting
>> +void __weak __cpu_pre_starting(void *arg) {}
>> +#endif
>
> I miss the prototype for this in a header?


Prototype is not really necessary for this. Hence not added.

> And the comment maybe belong in the header - not in the implementation?
>


I think having the comment near the implementation itself works better, just
as it is done elsewhere in the kernel.

Regards,
Srivatsa S. Bhat

2012-06-01 22:45:14

by Srivatsa S. Bhat

Subject: [UPDATED PATCH 20/27] sparc64, smpboot: Use generic SMP booting infrastructure

From: Nikunj A. Dadhania <[email protected]>

Convert sparc64 to use the generic framework to boot secondary CPUs.

Notes:
Remove the calls to cpu_idle() from assembly files because we will invoke
cpu_idle() in generic code.

Signed-off-by: Nikunj A. Dadhania <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Jiri Kosina <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/sparc/kernel/hvtramp.S | 2 --
arch/sparc/kernel/smp_64.c | 18 ++++++++++--------
arch/sparc/kernel/trampoline_64.S | 2 --
3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/arch/sparc/kernel/hvtramp.S b/arch/sparc/kernel/hvtramp.S
index 9365432..e1a45cf 100644
--- a/arch/sparc/kernel/hvtramp.S
+++ b/arch/sparc/kernel/hvtramp.S
@@ -128,8 +128,6 @@ hv_cpu_startup:

call smp_callin
nop
- call cpu_idle
- mov 0, %o0
call cpu_panic
nop

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 781bcb1..3c45538 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -25,6 +25,7 @@
#include <linux/ftrace.h>
#include <linux/cpu.h>
#include <linux/slab.h>
+#include <linux/smpboot.h>

#include <asm/head.h>
#include <asm/ptrace.h>
@@ -89,6 +90,11 @@ static volatile unsigned long callin_flag = 0;

void __cpuinit smp_callin(void)
{
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
int cpuid = hard_smp_processor_id();

__local_per_cpu_offset = __per_cpu_offset(cpuid);
@@ -115,18 +121,14 @@ void __cpuinit smp_callin(void)
/* Attach to the address space of init_task. */
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
+}

- /* inform the notifiers about the new cpu */
- notify_cpu_starting(cpuid);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
rmb();
-
- set_cpu_online(cpuid, true);
- local_irq_enable();
-
- /* idle thread is expected to have preempt disabled */
- preempt_disable();
}

void cpu_panic(void)
diff --git a/arch/sparc/kernel/trampoline_64.S b/arch/sparc/kernel/trampoline_64.S
index da1b781..6171126 100644
--- a/arch/sparc/kernel/trampoline_64.S
+++ b/arch/sparc/kernel/trampoline_64.S
@@ -407,8 +407,6 @@ after_lock_tlb:

call smp_callin
nop
- call cpu_idle
- mov 0, %o0
call cpu_panic
nop
1: b,a,pt %xcc, 1b

2012-06-01 22:48:47

by Srivatsa S. Bhat

Subject: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure


Convert sparc32 to use the generic framework to boot secondary CPUs.

Notes:
While removing the call to cpu_idle(), we can also remove the call to
cpu_panic() because we are calling BUG() in the generic code anyway.

Cc: "David S. Miller" <[email protected]>
Cc: Sam Ravnborg <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Konrad Eisele <[email protected]>
Cc: Tkhai Kirill <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Signed-off-by: Srivatsa S. Bhat <[email protected]>
---

arch/sparc/kernel/leon_smp.c | 25 +++++++++++++++----------
arch/sparc/kernel/sun4d_smp.c | 26 +++++++++++++-------------
arch/sparc/kernel/sun4m_smp.c | 26 +++++++++++++++-----------
arch/sparc/kernel/trampoline_32.S | 12 ------------
4 files changed, 43 insertions(+), 46 deletions(-)

diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index a469090..498ca9b 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -25,6 +25,7 @@
#include <linux/gfp.h>
#include <linux/cpu.h>
#include <linux/clockchips.h>
+#include <linux/smpboot.h>

#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
@@ -73,13 +74,21 @@ static inline unsigned long do_swap(volatile unsigned long *ptr,

void __cpuinit leon_callin(void)
{
- int cpuid = hard_smp_processor_id();
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

local_ops->cache_all();
local_ops->tlb_all();
leon_configure_cache_smp();
+}

- notify_cpu_starting(cpuid);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

/* Get our local ticker going. */
register_percpu_ce(cpuid);
@@ -91,11 +100,10 @@ void __cpuinit leon_callin(void)
local_ops->tlb_all();

/*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- * Allow master to continue.
+ * Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
do_swap(&cpu_callin_map[cpuid], 1);

@@ -112,9 +120,6 @@ void __cpuinit leon_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index ddaea31..cd5367a 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <asm/cacheflush.h>
#include <asm/switch_to.h>
@@ -52,8 +53,12 @@ static inline void show_leds(int cpuid)

void __cpuinit smp4d_callin(void)
{
- int cpuid = hard_smp_processor_id();
- unsigned long flags;
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

/* Show we are alive */
cpu_leds[cpuid] = 0x6;
@@ -64,14 +69,13 @@ void __cpuinit smp4d_callin(void)

local_ops->cache_all();
local_ops->tlb_all();
+}
+
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();
+ unsigned long flags;

- notify_cpu_starting(cpuid);
- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- */
/* Get our local ticker going. */
register_percpu_ce(cpuid);

@@ -106,16 +110,12 @@ void __cpuinit smp4d_callin(void)
local_ops->cache_all();
local_ops->tlb_all();

- local_irq_enable(); /* We don't allow PIL 14 yet */
-
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
barrier();

spin_lock_irqsave(&sun4d_imsk_lock, flags);
cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
- set_cpu_online(cpuid, true);
-
}

/*
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 128af73..ed05f54 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -10,6 +10,7 @@
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/cpu.h>
+#include <linux/smpboot.h>

#include <asm/cacheflush.h>
#include <asm/switch_to.h>
@@ -36,12 +37,20 @@ swap_ulong(volatile unsigned long *ptr, unsigned long val)

void __cpuinit smp4m_callin(void)
{
- int cpuid = hard_smp_processor_id();
+ smpboot_start_secondary(NULL);
+}
+
+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

local_ops->cache_all();
local_ops->tlb_all();
+}

- notify_cpu_starting(cpuid);
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();

register_percpu_ce(cpuid);

@@ -52,12 +61,11 @@ void __cpuinit smp4m_callin(void)
local_ops->tlb_all();

/*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
+ * Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
- /* Allow master to continue. */
swap_ulong(&cpu_callin_map[cpuid], 1);

/* XXX: What's up with all the flushes? */
@@ -75,10 +83,6 @@ void __cpuinit smp4m_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
-
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/trampoline_32.S b/arch/sparc/kernel/trampoline_32.S
index 7364ddc..75b659e 100644
--- a/arch/sparc/kernel/trampoline_32.S
+++ b/arch/sparc/kernel/trampoline_32.S
@@ -82,17 +82,9 @@ cpu3_startup:
call smp4m_callin
nop

- b,a smp_do_cpu_idle
-
.text
.align 4

-smp_do_cpu_idle:
- call cpu_idle
- mov 0, %o0
-
- call cpu_panic
- nop

/* CPUID in bootbus can be found at PA 0xff0140000 */
#define SUN4D_BOOTBUS_CPUID 0xf0140000
@@ -147,8 +139,6 @@ sun4d_cpu_startup:
call smp4d_callin
nop

- b,a smp_do_cpu_idle
-
#ifdef CONFIG_SPARC_LEON

__CPUINIT
@@ -206,6 +196,4 @@ leon_smp_cpu_startup:
call leon_callin
nop

- b,a smp_do_cpu_idle
-
#endif

2012-06-01 22:52:13

by David Miller

Subject: Re: [UPDATED PATCH 20/27] sparc64, smpboot: Use generic SMP booting infrastructure

From: "Srivatsa S. Bhat" <[email protected]>
Date: Sat, 02 Jun 2012 04:14:17 +0530

> From: Nikunj A. Dadhania <[email protected]>
>
> Convert sparc64 to use the generic framework to boot secondary CPUs.
>
> Notes:
> Remove the calls to cpu_idle() from assembly files because we will invoke
> cpu_idle() in generic code.
>
> Signed-off-by: Nikunj A. Dadhania <[email protected]>
> Signed-off-by: Srivatsa S. Bhat <[email protected]>

Acked-by: David S. Miller <[email protected]>

2012-06-01 22:53:47

by David Miller

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

From: "Srivatsa S. Bhat" <[email protected]>
Date: Sat, 02 Jun 2012 04:17:47 +0530

>
> Convert sparc32 to use the generic framework to boot secondary CPUs.
>
> Notes:
> While removing the call to cpu_idle(), we can also remove the call to
> cpu_panic() because we are calling BUG() in the generic code anyway.
>
> Signed-off-by: Srivatsa S. Bhat <[email protected]>

Almost there, but not quite yet.

You can't get rid of the cpu_panic invocations, those have to be there
after the *_callin calls to catch if they accidentally return.

2012-06-01 23:18:05

by Srivatsa S. Bhat

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On 06/02/2012 04:23 AM, David Miller wrote:

> From: "Srivatsa S. Bhat" <[email protected]>
> Date: Sat, 02 Jun 2012 04:17:47 +0530
>
>>
>> Convert sparc32 to use the generic framework to boot secondary CPUs.
>>
>> Notes:
>> While removing the call to cpu_idle(), we can also remove the call to
>> cpu_panic() because we are calling BUG() in the generic code anyway.
>>
>> Signed-off-by: Srivatsa S. Bhat <[email protected]>
>
> Almost there, but not quite yet.
>
> You can't get rid of the cpu_panic invocations, those have to be there
> after the *_callin calls to catch if they accidentally return.
>


Oh.. ok..

So can I do something like the following, or do you want me to specifically
add call cpu_panic and nop after every *_callin?

diff --git a/arch/sparc/kernel/trampoline_32.S b/arch/sparc/kernel/trampoline_32.S
index 7364ddc..9b2b81d 100644
--- a/arch/sparc/kernel/trampoline_32.S
+++ b/arch/sparc/kernel/trampoline_32.S
@@ -82,15 +82,12 @@ cpu3_startup:
call smp4m_callin
nop

- b,a smp_do_cpu_idle
+ b,a do_panic

.text
.align 4

-smp_do_cpu_idle:
- call cpu_idle
- mov 0, %o0
-
+do_panic:
call cpu_panic
nop

@@ -147,7 +144,7 @@ sun4d_cpu_startup:
call smp4d_callin
nop

- b,a smp_do_cpu_idle
+ b,a do_panic

#ifdef CONFIG_SPARC_LEON

@@ -206,6 +203,6 @@ leon_smp_cpu_startup:
call leon_callin
nop

- b,a smp_do_cpu_idle
+ b,a do_panic

2012-06-02 06:14:40

by Paul Mackerras

Subject: Re: [PATCH 18/27] powerpc, smpboot: Use generic SMP booting infrastructure

On Fri, Jun 01, 2012 at 02:44:23PM +0530, Srivatsa S. Bhat wrote:
> From: Nikunj A. Dadhania <[email protected]>
>
> Convert powerpc to use the generic framework to boot secondary CPUs.
>
> Signed-off-by: Nikunj A. Dadhania <[email protected]>

Acked-by: Paul Mackerras <[email protected]>

2012-06-02 06:52:54

by Sam Ravnborg

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

Hi Srivatsa.

I cannot see how this would work for sparc32...
[Did not notice before...]

> --- a/arch/sparc/kernel/leon_smp.c
> +++ b/arch/sparc/kernel/leon_smp.c

> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> + unsigned int cpuid = hard_smp_processor_id();
>
> diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
> index ddaea31..cd5367a 100644
> --- a/arch/sparc/kernel/sun4d_smp.c
> +++ b/arch/sparc/kernel/sun4d_smp.c

> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> + unsigned int cpuid = hard_smp_processor_id();
>
> diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
> index 128af73..ed05f54 100644
> --- a/arch/sparc/kernel/sun4m_smp.c
> +++ b/arch/sparc/kernel/sun4m_smp.c

> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> + unsigned int cpuid = hard_smp_processor_id();

See how you define a function with the same name three times.
On sparc32 we include all of the above files in the kernel,
and uses various tricks to determine at run-time
which variant to use.

We need to define these general functions in smp_32.c and
then take relevant action depending on sparc_cpu_model.

Sam

2012-06-02 07:44:28

by Sam Ravnborg

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On Sat, Jun 02, 2012 at 08:52:49AM +0200, Sam Ravnborg wrote:
> Hi Srivatsa.
>
> I cannot see how this would work for sparc32...
> [Did not notice before...]
>
> > --- a/arch/sparc/kernel/leon_smp.c
> > +++ b/arch/sparc/kernel/leon_smp.c
>
> > +void __cpuinit __cpu_pre_starting(void *unused)
> > +{
> > + unsigned int cpuid = hard_smp_processor_id();
> >
> > diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
> > index ddaea31..cd5367a 100644
> > --- a/arch/sparc/kernel/sun4d_smp.c
> > +++ b/arch/sparc/kernel/sun4d_smp.c
>
> > +void __cpuinit __cpu_pre_starting(void *unused)
> > +{
> > + unsigned int cpuid = hard_smp_processor_id();
> >
> > diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
> > index 128af73..ed05f54 100644
> > --- a/arch/sparc/kernel/sun4m_smp.c
> > +++ b/arch/sparc/kernel/sun4m_smp.c
>
> > +void __cpuinit __cpu_pre_starting(void *unused)
> > +{
> > + unsigned int cpuid = hard_smp_processor_id();
>
> See how you define a function with the same name three times.
> On sparc32 we include all of the above files in the kernel,
> and uses various tricks to determine at run-time
> which variant to use.
>
> We need to define these general functions in smp_32.c and
> then take relevant action depending on sparc_cpu_model.

I took a short look at this.
And it is a bit complicated :-(

leon is the odd-one here. For reasons I do not understand
we have a call from head_32.S to leon_smp_cpu_startup:
in trampoline_32.S.

But sun4m and sun4d uses the normal path via start_kernel() etc.

This has the side-effect that sparc_cpu_model is not set
when we call leon_smp_cpu_startup.

I will try to dive into this and see if I on the sparc32 side
can make leon behave like the other platforms, and then
unify some of the smp cpu boot stuff such that we then
can introduce the generalization from your patch.

Sam

2012-06-02 08:02:52

by Srivatsa S. Bhat

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On 06/02/2012 01:14 PM, Sam Ravnborg wrote:

> On Sat, Jun 02, 2012 at 08:52:49AM +0200, Sam Ravnborg wrote:
>> Hi Srivatsa.
>>
>> I cannot see how this would work for sparc32...
>> [Did not notice before...]
>>
>>> --- a/arch/sparc/kernel/leon_smp.c
>>> +++ b/arch/sparc/kernel/leon_smp.c
>>
>>> +void __cpuinit __cpu_pre_starting(void *unused)
>>> +{
>>> + unsigned int cpuid = hard_smp_processor_id();
>>>
>>> diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
>>> index ddaea31..cd5367a 100644
>>> --- a/arch/sparc/kernel/sun4d_smp.c
>>> +++ b/arch/sparc/kernel/sun4d_smp.c
>>
>>> +void __cpuinit __cpu_pre_starting(void *unused)
>>> +{
>>> + unsigned int cpuid = hard_smp_processor_id();
>>>
>>> diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
>>> index 128af73..ed05f54 100644
>>> --- a/arch/sparc/kernel/sun4m_smp.c
>>> +++ b/arch/sparc/kernel/sun4m_smp.c
>>
>>> +void __cpuinit __cpu_pre_starting(void *unused)
>>> +{
>>> + unsigned int cpuid = hard_smp_processor_id();
>>
>> See how you define a function with the same name three times.
>> On sparc32 we include all of the above files in the kernel,
>> and uses various tricks to determine at run-time
>> which variant to use.
>>
>> We need to define these general functions in smp_32.c and
>> then take relevant action depending on sparc_cpu_model.
>
> I took a short look at this.
> And it is a bit complicated :-(
>
> leon is the odd-one here. For reasons I do not understand
> we have a call from head_32.S to leon_smp_cpu_startup:
> in trampoline_32.S.
>
> But sun4m and sun4d uses the normal path via start_kernel() etc.
>
> This has the side-effect that sparc_cpu_model is not set
> when we call leon_smp_cpu_startup.
>


Yes, I saw that too now..

> I will try to dive into this and see if I on the sparc32 side
> can make leon behave like the other platforms, and then
> unify some of the smp cpu boot stuff such that we then
> can introduce the generalization from your patch.
>


Thanks a lot for that Sam!

Regards,
Srivatsa S. Bhat

2012-06-02 08:16:09

by Srivatsa S. Bhat

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On 06/02/2012 01:31 PM, Srivatsa S. Bhat wrote:

> On 06/02/2012 01:14 PM, Sam Ravnborg wrote:
>
>> On Sat, Jun 02, 2012 at 08:52:49AM +0200, Sam Ravnborg wrote:
>>> Hi Srivatsa.
>>>
>>> I cannot see how this would work for sparc32...
>>> [Did not notice before...]
>>>
>>>> --- a/arch/sparc/kernel/leon_smp.c
>>>> +++ b/arch/sparc/kernel/leon_smp.c
>>>
>>>> +void __cpuinit __cpu_pre_starting(void *unused)
>>>> +{
>>>> + unsigned int cpuid = hard_smp_processor_id();
>>>>
>>>> diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
>>>> index ddaea31..cd5367a 100644
>>>> --- a/arch/sparc/kernel/sun4d_smp.c
>>>> +++ b/arch/sparc/kernel/sun4d_smp.c
>>>
>>>> +void __cpuinit __cpu_pre_starting(void *unused)
>>>> +{
>>>> + unsigned int cpuid = hard_smp_processor_id();
>>>>
>>>> diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
>>>> index 128af73..ed05f54 100644
>>>> --- a/arch/sparc/kernel/sun4m_smp.c
>>>> +++ b/arch/sparc/kernel/sun4m_smp.c
>>>
>>>> +void __cpuinit __cpu_pre_starting(void *unused)
>>>> +{
>>>> + unsigned int cpuid = hard_smp_processor_id();
>>>
>>> See how you define a function with the same name three times.
>>> On sparc32 we include all of the above files in the kernel,
>>> and uses various tricks to determine at run-time
>>> which variant to use.
>>>
>>> We need to define these general functions in smp_32.c and
>>> then take relevant action depending on sparc_cpu_model.
>>
>> I took a short look at this.
>> And it is a bit complicated :-(
>>
>> leon is the odd-one here. For reasons I do not understand
>> we have a call from head_32.S to leon_smp_cpu_startup:
>> in trampoline_32.S.
>>
>> But sun4m and sun4d use the normal path via start_kernel() etc.
>>
>> This has the side-effect that sparc_cpu_model is not set
>> when we call leon_smp_cpu_startup.
>>
>
> Yes, I saw that too now..
>


But I'm wondering where sparc_cpu_model will get set to sparc_leon
then.. The only place I saw was setup_arch(), which gets called from
start_kernel(). But if leon doesn't even call start_kernel(), then....?

Regards,
Srivatsa S. Bhat

2012-06-02 15:13:55

by Sam Ravnborg

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

>
> > I will try to dive into this and see if I on the sparc32 side
> > can make leon behave like the other platforms, and then
> > unify some of the smp cpu boot stuff such that we then
> > can introduce the generalization from your patch.

This is what I came up with.
I modelled this on your patch-set to make it
simpler to convert to the generic scheme.

I realised a bit more about how CPUs are started up on sparc32.
So I think all of leon, sun4m and sun4d should work with this.

But my SUN box is defunct atm. so I have not tested it.
Comments appreciated!

PS. I used the ugly __names for now.
As soon as you have found a better set of names this should
be fixed here too.

Sam

diff --git a/arch/sparc/kernel/kernel.h b/arch/sparc/kernel/kernel.h
index 291bb5d..2bc1c6d 100644
--- a/arch/sparc/kernel/kernel.h
+++ b/arch/sparc/kernel/kernel.h
@@ -48,6 +48,10 @@ extern void sun4m_init_IRQ(void);
extern void sun4m_unmask_profile_irq(void);
extern void sun4m_clear_profile_irq(int cpu);

+/* sun4m_smp.c */
+void sun4m_cpu_pre_starting(void);
+void sun4m_cpu_pre_online(void);
+
/* sun4d_irq.c */
extern spinlock_t sun4d_imsk_lock;

@@ -60,6 +64,14 @@ extern int show_sun4d_interrupts(struct seq_file *, void *);
extern void sun4d_distribute_irqs(void);
extern void sun4d_free_irq(unsigned int irq, void *dev_id);

+/* sun4d_smp.c */
+void sun4d_cpu_pre_starting(void);
+void sun4d_cpu_pre_online(void);
+
+/* leon_smp.c */
+void leon_cpu_pre_starting(void);
+void leon_cpu_pre_online(void);
+
/* head_32.S */
extern unsigned int t_nmi[];
extern unsigned int linux_trap_ipi15_sun4d[];
diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index 0f3fb6d..0b028b3 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -69,31 +69,20 @@ static inline unsigned long do_swap(volatile unsigned long *ptr,
return val;
}

-void __cpuinit leon_callin(void)
+void __cpuinit leon_cpu_pre_starting(void)
{
- int cpuid = hard_smp_processor_id();
-
- local_ops->cache_all();
- local_ops->tlb_all();
leon_configure_cache_smp();
+}

- notify_cpu_starting(cpuid);
-
- /* Get our local ticker going. */
- register_percpu_ce(cpuid);
-
- calibrate_delay();
- smp_store_cpu_info(cpuid);

- local_ops->cache_all();
- local_ops->tlb_all();
+void __cpuinit cpu_cpu_pre_starting(void *unused)
+{
+ int cpuid = hard_smp_processor_id();

- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- * Allow master to continue.
+ /* Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
do_swap(&cpu_callin_map[cpuid], 1);

@@ -110,9 +99,6 @@ void __cpuinit leon_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
index 79db45e..2e2990f 100644
--- a/arch/sparc/kernel/smp_32.c
+++ b/arch/sparc/kernel/smp_32.c
@@ -20,6 +20,7 @@
#include <linux/seq_file.h>
#include <linux/cache.h>
#include <linux/delay.h>
+#include <linux/cpu.h>

#include <asm/ptrace.h>
#include <linux/atomic.h>
@@ -32,8 +33,10 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/cpudata.h>
+#include <asm/timer.h>
#include <asm/leon.h>

+#include "kernel.h"
#include "irq.h"

volatile unsigned long cpu_callin_map[NR_CPUS] __cpuinitdata = {0,};
@@ -294,6 +297,89 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
return ret;
}

+void __cpuinit __cpu_pre_starting(void *unused)
+{
+ local_ops->cache_all();
+ local_ops->tlb_all();
+
+ switch(sparc_cpu_model) {
+ case sun4m:
+ sun4m_cpu_pre_starting();
+ break;
+ case sun4d:
+ sun4d_cpu_pre_starting();
+ break;
+ case sparc_leon:
+ leon_cpu_pre_starting();
+ break;
+ default:
+ BUG();
+ }
+}
+
+void __cpuinit __cpu_pre_online(void *unused)
+{
+ unsigned int cpuid = hard_smp_processor_id();
+
+ register_percpu_ce(cpuid);
+
+ calibrate_delay();
+ smp_store_cpu_info(cpuid);
+
+ local_ops->cache_all();
+ local_ops->tlb_all();
+
+ switch(sparc_cpu_model) {
+ case sun4m:
+ sun4m_cpu_pre_online();
+ break;
+ case sun4d:
+ sun4d_cpu_pre_online();
+ break;
+ case sparc_leon:
+ leon_cpu_pre_starting();
+ break;
+ default:
+ BUG();
+ }
+}
+
+void __cpuinit sparc_start_secondary(void *unused)
+{
+ unsigned int cpu;
+
+ /*
+ * SMP booting is extremely fragile in some architectures. So run
+ * the cpu initialization code first before anything else.
+ */
+ __cpu_pre_starting(NULL);
+
+ preempt_disable();
+ cpu = smp_processor_id();
+
+ /* Invoke the CPU_STARTING notifier callbacks */
+ notify_cpu_starting(cpu);
+
+ __cpu_pre_online(NULL);
+
+ /* Set the CPU in the cpu_online_mask */
+ set_cpu_online(cpu, true);
+
+ /* Enable local interrupts now */
+ local_irq_enable();
+
+ wmb();
+ cpu_idle();
+
+ /* We should never reach here! */
+ BUG();
+}
+
+void __cpuinit smp_callin(void)
+{
+ sparc_start_secondary(NULL);
+}
+
void smp_bogo(struct seq_file *m)
{
int i;
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index ddaea31..bfbbc46 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -50,10 +50,9 @@ static inline void show_leds(int cpuid)
"i" (ASI_M_CTL));
}

-void __cpuinit smp4d_callin(void)
+void __cpuinit sun4d_cpu_pre_starting(void)
{
int cpuid = hard_smp_processor_id();
- unsigned long flags;

/* Show we are alive */
cpu_leds[cpuid] = 0x6;
@@ -61,24 +60,14 @@ void __cpuinit smp4d_callin(void)

/* Enable level15 interrupt, disable level14 interrupt for now */
cc_set_imsk((cc_get_imsk() & ~0x8000) | 0x4000);
+}

- local_ops->cache_all();
- local_ops->tlb_all();
-
- notify_cpu_starting(cpuid);
- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- */
- /* Get our local ticker going. */
- register_percpu_ce(cpuid);
+void __cpuinit sun4d_cpu_pre_online(void)
+{
+ unsigned long flags;
+ int cpuid;

- calibrate_delay();
- smp_store_cpu_info(cpuid);
- local_ops->cache_all();
- local_ops->tlb_all();
+ cpuid = hard_smp_processor_id();

/* Allow master to continue. */
sun4d_swap((unsigned long *)&cpu_callin_map[cpuid], 1);
@@ -106,16 +95,12 @@ void __cpuinit smp4d_callin(void)
local_ops->cache_all();
local_ops->tlb_all();

- local_irq_enable(); /* We don't allow PIL 14 yet */
-
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
barrier();

spin_lock_irqsave(&sun4d_imsk_lock, flags);
cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
- set_cpu_online(cpuid, true);
-
}

/*
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 128af73..0cf6b4f 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -34,30 +34,19 @@ swap_ulong(volatile unsigned long *ptr, unsigned long val)
return val;
}

-void __cpuinit smp4m_callin(void)
+void __cpuinit sun4m_cpu_pre_starting(void)
{
- int cpuid = hard_smp_processor_id();
-
- local_ops->cache_all();
- local_ops->tlb_all();
-
- notify_cpu_starting(cpuid);
-
- register_percpu_ce(cpuid);
-
- calibrate_delay();
- smp_store_cpu_info(cpuid);
+}

- local_ops->cache_all();
- local_ops->tlb_all();
+void __cpuinit sun4m_cpu_pre_online(void)
+{
+ int cpuid = hard_smp_processor_id();

- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
+ /* Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
- /* Allow master to continue. */
swap_ulong(&cpu_callin_map[cpuid], 1);

/* XXX: What's up with all the flushes? */
@@ -75,10 +64,6 @@ void __cpuinit smp4m_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
-
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/trampoline_32.S b/arch/sparc/kernel/trampoline_32.S
index af27aca..6cdb08c 100644
--- a/arch/sparc/kernel/trampoline_32.S
+++ b/arch/sparc/kernel/trampoline_32.S
@@ -79,18 +79,15 @@ cpu3_startup:
nop

/* Start this processor. */
- call smp4m_callin
+ call smp_callin
nop

- b,a smp_do_cpu_idle
+ b,a smp_panic

.text
.align 4

-smp_do_cpu_idle:
- call cpu_idle
- mov 0, %o0
-
+smp_panic:
call cpu_panic
nop

@@ -144,10 +141,10 @@ sun4d_cpu_startup:
nop

/* Start this processor. */
- call smp4d_callin
+ call smp_callin
nop

- b,a smp_do_cpu_idle
+ b,a smp_panic

__CPUINIT
.align 4
@@ -201,7 +198,7 @@ leon_smp_cpu_startup:
nop

/* Start this processor. */
- call leon_callin
+ call smp_callin
nop

- b,a smp_do_cpu_idle
+ b,a smp_panic

2012-06-02 15:16:06

by Sam Ravnborg

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Sat, Jun 02, 2012 at 04:11:28AM +0530, Srivatsa S. Bhat wrote:
> On 06/01/2012 10:23 PM, Sam Ravnborg wrote:
>
> >> +
> >> +
> >> +/* Implement the following functions in your architecture, as appropriate. */
> >> +
> >> +/**
> >> + * __cpu_pre_starting()
> >> + *
> >> + * Implement whatever you need to do before the CPU_STARTING notifiers are
> >> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
> >> + * coming up. So that cpu better be prepared! IOW, implement all the early
> >> + * boot/init code for the cpu here. And do NOT enable interrupts.
> >> + */
> >> +#ifndef __cpu_pre_starting
> >> +void __weak __cpu_pre_starting(void *arg) {}
> >> +#endif
> >
> > I miss the prototype for this in a header?
>
>
> Prototype is not really necessary for this. Hence not added.
There is a simple rule:
1) If a function is declared => not static
2) If function not declared => static

With respect to this simple rule, your functions shall be defined as static.
And that would not work.

Sam

2012-06-02 15:35:29

by Srivatsa S. Bhat

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/02/2012 08:46 PM, Sam Ravnborg wrote:

> On Sat, Jun 02, 2012 at 04:11:28AM +0530, Srivatsa S. Bhat wrote:
>> On 06/01/2012 10:23 PM, Sam Ravnborg wrote:
>>
>>>> +
>>>> +
>>>> +/* Implement the following functions in your architecture, as appropriate. */
>>>> +
>>>> +/**
>>>> + * __cpu_pre_starting()
>>>> + *
>>>> + * Implement whatever you need to do before the CPU_STARTING notifiers are
>>>> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
>>>> + * coming up. So that cpu better be prepared! IOW, implement all the early
>>>> + * boot/init code for the cpu here. And do NOT enable interrupts.
>>>> + */
>>>> +#ifndef __cpu_pre_starting
>>>> +void __weak __cpu_pre_starting(void *arg) {}
>>>> +#endif
>>>
>>> I miss the prototype for this in a header?
>>
>>
>> Prototype is not really necessary for this. Hence not added.
> There is a simple rule:
> 1) If a function is declared => not static
> 2) If function not declared => static
>
> With respect to this simple rule your functions shall be defined as static.
> And that would not work.
>


Fair enough. Will add prototypes in include/linux/smpboot.h

Thank you!

Regards,
Srivatsa S. Bhat

2012-06-02 15:59:36

by Srivatsa S. Bhat

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On 06/02/2012 08:43 PM, Sam Ravnborg wrote:

>>
>>> I will try to dive into this and see if I on the sparc32 side
>>> can make leon behave like the other platforms, and then
>>> unify some of the smp cpu boot stuff such that we then
>>> can introduce the generalization from your patch.
>
> This is what I came up with.
> I modelled this over your patch-set to make it
> simpler to convert to the generic scheme.
>


Thanks a lot for doing this :-)

> I realised a bit more how CPU's are started up on sparc32.
> So I think all of leon, sun4m and sun4d should work with this.
>
> But my SUN box is defunct atm. so I have not tested it.
> Comments appreciated!
>
> PS. I used the ugly __names for now.
> As soon as you have found a better set of names this should
> be fixed here too.
>


As I mentioned in my other mail, I am thinking of changing them to
arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online().
Let me know what you think of those names.

Would you kindly add a changelog and your sign-off to this patch?

>
> diff --git a/arch/sparc/kernel/kernel.h b/arch/sparc/kernel/kernel.h
> index 291bb5d..2bc1c6d 100644
> --- a/arch/sparc/kernel/kernel.h
> +++ b/arch/sparc/kernel/kernel.h
> @@ -48,6 +48,10 @@ extern void sun4m_init_IRQ(void);
> extern void sun4m_unmask_profile_irq(void);
> extern void sun4m_clear_profile_irq(int cpu);
>
> +/* sun4m_smp.c */
> +void sun4m_cpu_pre_starting(void);
> +void sun4m_cpu_pre_online(void);
> +
> /* sun4d_irq.c */
> extern spinlock_t sun4d_imsk_lock;
>
> @@ -60,6 +64,14 @@ extern int show_sun4d_interrupts(struct seq_file *, void *);
> extern void sun4d_distribute_irqs(void);
> extern void sun4d_free_irq(unsigned int irq, void *dev_id);
>
> +/* sun4d_smp.c */
> +void sun4d_cpu_pre_starting(void);
> +void sun4d_cpu_pre_online(void);
> +
> +/* leon_smp.c */
> +void leon_cpu_pre_starting(void);
> +void leon_cpu_pre_online(void);
> +
> /* head_32.S */
> extern unsigned int t_nmi[];
> extern unsigned int linux_trap_ipi15_sun4d[];
> diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
> index 0f3fb6d..0b028b3 100644
> --- a/arch/sparc/kernel/leon_smp.c
> +++ b/arch/sparc/kernel/leon_smp.c
> @@ -69,31 +69,20 @@ static inline unsigned long do_swap(volatile unsigned long *ptr,
> return val;
> }
>
> -void __cpuinit leon_callin(void)
> +void __cpuinit leon_cpu_pre_starting(void)
> {
> - int cpuid = hard_smp_processor_id();
> -
> - local_ops->cache_all();
> - local_ops->tlb_all();
> leon_configure_cache_smp();
> +}
>
> - notify_cpu_starting(cpuid);
> -
> - /* Get our local ticker going. */
> - register_percpu_ce(cpuid);
> -
> - calibrate_delay();
> - smp_store_cpu_info(cpuid);
>
> - local_ops->cache_all();
> - local_ops->tlb_all();
> +void __cpuinit cpu_cpu_pre_starting(void *unused)


Shouldn't this be leon_cpu_pre_online()? And the parameters
should be consistent too (here it expects a void*, but it is
called with no arguments...)

> +{
> + int cpuid = hard_smp_processor_id();
>
> - /*
> - * Unblock the master CPU _only_ when the scheduler state
> - * of all secondary CPUs will be up-to-date, so after
> - * the SMP initialization the master will be just allowed
> - * to call the scheduler code.
> - * Allow master to continue.
> + /* Allow master to continue. The master will then give us the
> + * go-ahead by setting the smp_commenced_mask and will wait without
> + * timeouts until our setup is completed fully (signified by
> + * our bit being set in the cpu_online_mask).
> */
> do_swap(&cpu_callin_map[cpuid], 1);
>
> @@ -110,9 +99,6 @@ void __cpuinit leon_callin(void)
>
> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
> mb();
> -
> - local_irq_enable();
> - set_cpu_online(cpuid, true);
> }
>
> /*
> diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
> index 79db45e..2e2990f 100644
> --- a/arch/sparc/kernel/smp_32.c
> +++ b/arch/sparc/kernel/smp_32.c
> @@ -20,6 +20,7 @@
> #include <linux/seq_file.h>
> #include <linux/cache.h>
> #include <linux/delay.h>
> +#include <linux/cpu.h>
>
> #include <asm/ptrace.h>
> #include <linux/atomic.h>
> @@ -32,8 +33,10 @@
> #include <asm/cacheflush.h>
> #include <asm/tlbflush.h>
> #include <asm/cpudata.h>
> +#include <asm/timer.h>
> #include <asm/leon.h>
>
> +#include "kernel.h"
> #include "irq.h"
>
> volatile unsigned long cpu_callin_map[NR_CPUS] __cpuinitdata = {0,};
> @@ -294,6 +297,89 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
> return ret;
> }
>
> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> + local_ops->cache_all();
> + local_ops->tlb_all();
> +
> + switch(sparc_cpu_model) {
> + case sun4m:
> + sun4m_cpu_pre_starting();


Wouldn't it be better to have a provision to pass the
pointer down to these functions as well?

> + break;
> + case sun4d:
> + sun4d_cpu_pre_starting();
> + break;
> + case sparc_leon:
> + leon_cpu_pre_starting();
> + break;
> + default:
> + BUG();
> + }
> +}
> +
> +void __cpuinit __cpu_pre_online(void *unused)
> +{
> + unsigned int cpuid = hard_smp_processor_id();
> +
> + register_percpu_ce(cpuid);
> +
> + calibrate_delay();
> + smp_store_cpu_info(cpuid);
> +
> + local_ops->cache_all();
> + local_ops->tlb_all();
> +
> + switch(sparc_cpu_model) {
> + case sun4m:
> + sun4m_cpu_pre_online();
> + break;
> + case sun4d:
> + sun4d_cpu_pre_online();
> + break;
> + case sparc_leon:
> + leon_cpu_pre_starting();


Shouldn't this be leon_cpu_pre_online()?

> + break;
> + default:
> + BUG();
> + }
> +}
> +
> +void __cpuinit sparc_start_secondary(void *unused)
> +{
> + unsigned int cpu;
> +
> + /*
> + * SMP booting is extremely fragile in some architectures. So run
> + * the cpu initialization code first before anything else.
> + */
> + __cpu_pre_starting(NULL);
> +


Why not pass the pointer to __cpu_pre_starting() as well as the rest of
the functions?

> + preempt_disable();
> + cpu = smp_processor_id();
> +
> + /* Invoke the CPU_STARTING notifier callbacks */
> + notify_cpu_starting(cpu);
> +
> + __cpu_pre_online(NULL);
> +
> + /* Set the CPU in the cpu_online_mask */
> + set_cpu_online(cpu, true);
> +
> + /* Enable local interrupts now */
> + local_irq_enable();
> +
> + wmb();
> + cpu_idle();
> +
> + /* We should never reach here! */
> + BUG();
> +}
> +
> +void __cpuinit smp_callin(void)
> +{
> + sparc_start_secondary(NULL);
> +}
> +
> void smp_bogo(struct seq_file *m)
> {
> int i;
> diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
> index ddaea31..bfbbc46 100644
> --- a/arch/sparc/kernel/sun4d_smp.c
> +++ b/arch/sparc/kernel/sun4d_smp.c
> @@ -50,10 +50,9 @@ static inline void show_leds(int cpuid)
> "i" (ASI_M_CTL));
> }
>
> -void __cpuinit smp4d_callin(void)
> +void __cpuinit sun4d_cpu_pre_starting(void)
> {
> int cpuid = hard_smp_processor_id();
> - unsigned long flags;
>
> /* Show we are alive */
> cpu_leds[cpuid] = 0x6;
> @@ -61,24 +60,14 @@ void __cpuinit smp4d_callin(void)
>
> /* Enable level15 interrupt, disable level14 interrupt for now */
> cc_set_imsk((cc_get_imsk() & ~0x8000) | 0x4000);
> +}
>
> - local_ops->cache_all();
> - local_ops->tlb_all();
> -
> - notify_cpu_starting(cpuid);
> - /*
> - * Unblock the master CPU _only_ when the scheduler state
> - * of all secondary CPUs will be up-to-date, so after
> - * the SMP initialization the master will be just allowed
> - * to call the scheduler code.
> - */
> - /* Get our local ticker going. */
> - register_percpu_ce(cpuid);
> +void __cpuinit sun4d_cpu_pre_online(void)
> +{
> + unsigned long flags;
> + int cpuid;
>
> - calibrate_delay();
> - smp_store_cpu_info(cpuid);
> - local_ops->cache_all();
> - local_ops->tlb_all();
> + cpuid = hard_smp_processor_id();
>
> /* Allow master to continue. */
> sun4d_swap((unsigned long *)&cpu_callin_map[cpuid], 1);
> @@ -106,16 +95,12 @@ void __cpuinit smp4d_callin(void)
> local_ops->cache_all();
> local_ops->tlb_all();
>
> - local_irq_enable(); /* We don't allow PIL 14 yet */
> -
> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
> barrier();
>
> spin_lock_irqsave(&sun4d_imsk_lock, flags);
> cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
> spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
> - set_cpu_online(cpuid, true);
> -
> }
>
> /*
> diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
> index 128af73..0cf6b4f 100644
> --- a/arch/sparc/kernel/sun4m_smp.c
> +++ b/arch/sparc/kernel/sun4m_smp.c
> @@ -34,30 +34,19 @@ swap_ulong(volatile unsigned long *ptr, unsigned long val)
> return val;
> }
>
> -void __cpuinit smp4m_callin(void)
> +void __cpuinit sun4m_cpu_pre_starting(void)
> {
> - int cpuid = hard_smp_processor_id();
> -
> - local_ops->cache_all();
> - local_ops->tlb_all();
> -
> - notify_cpu_starting(cpuid);
> -
> - register_percpu_ce(cpuid);
> -
> - calibrate_delay();
> - smp_store_cpu_info(cpuid);
> +}
>
> - local_ops->cache_all();
> - local_ops->tlb_all();
> +void __cpuinit sun4m_cpu_pre_online(void)
> +{
> + int cpuid = hard_smp_processor_id();
>
> - /*
> - * Unblock the master CPU _only_ when the scheduler state
> - * of all secondary CPUs will be up-to-date, so after
> - * the SMP initialization the master will be just allowed
> - * to call the scheduler code.
> + /* Allow master to continue. The master will then give us the
> + * go-ahead by setting the smp_commenced_mask and will wait without
> + * timeouts until our setup is completed fully (signified by
> + * our bit being set in the cpu_online_mask).
> */
> - /* Allow master to continue. */
> swap_ulong(&cpu_callin_map[cpuid], 1);
>
> /* XXX: What's up with all the flushes? */
> @@ -75,10 +64,6 @@ void __cpuinit smp4m_callin(void)
>
> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
> mb();
> -
> - local_irq_enable();
> -
> - set_cpu_online(cpuid, true);
> }
>
> /*
> diff --git a/arch/sparc/kernel/trampoline_32.S b/arch/sparc/kernel/trampoline_32.S
> index af27aca..6cdb08c 100644
> --- a/arch/sparc/kernel/trampoline_32.S
> +++ b/arch/sparc/kernel/trampoline_32.S
> @@ -79,18 +79,15 @@ cpu3_startup:
> nop
>
> /* Start this processor. */
> - call smp4m_callin
> + call smp_callin
> nop
>
> - b,a smp_do_cpu_idle
> + b,a smp_panic
>
> .text
> .align 4
>
> -smp_do_cpu_idle:
> - call cpu_idle
> - mov 0, %o0
> -
> +smp_panic:
> call cpu_panic
> nop
>
> @@ -144,10 +141,10 @@ sun4d_cpu_startup:
> nop
>
> /* Start this processor. */
> - call smp4d_callin
> + call smp_callin
> nop
>
> - b,a smp_do_cpu_idle
> + b,a smp_panic
>
> __CPUINIT
> .align 4
> @@ -201,7 +198,7 @@ leon_smp_cpu_startup:
> nop
>
> /* Start this processor. */
> - call leon_callin
> + call smp_callin


I still didn't get how this solves the original problem of
not having sparc_cpu_model set to sparc_leon. You mentioned
that by the time we reach leon_smp_cpu_startup, that variable
is not set. Even inside leon_smp_cpu_startup, I don't immediately
see where it is set. Am I missing something?

> nop
>
> - b,a smp_do_cpu_idle
> + b,a smp_panic
>


Regards,
Srivatsa S. Bhat

2012-06-02 16:23:45

by Sam Ravnborg

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

>
> As I mentioned in my other mail, I am thinking of changing them to
> arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online().
> Let me know what you think of those names.
Much better than "__" - so if none of the guys who excel in core
code objects, go for that.

> Would you kindly add a changelog and your sign-off to this patch?
Will do in next revision.

...

Thanks for the thorough review.
I will address all points - including passing the pointer down,
as I assume you have some future plans with that pointer.

>
> I still didn't get how this solves the original problem of
> not having sparc_cpu_model set to sparc_leon. You mentioned
> that by the time we reach leon_smp_cpu_startup, that variable
> is not set. Even inside leon_smp_cpu_startup, I don't immediately
> see where it is set. Am I missing something?

After looking more closely at the code, it is my understanding
that a leon CPU, when started, will actually jump to the reset
vector and start from there.
So the secondary CPUs will run long after
sparc_cpu_model is set, so we can safely use it.

The sun-based CPUs will, in comparison, jump to
an address supplied to a prom call - so they do not
jump to the reset vector.
But they also have sparc_cpu_model set, so no problem there
either.

All this is my deduction from reading the code - but this
is not an area I have looked at otherwise..

I may not find time today to cook up a new version of
the patch - but then you will have it tomorrow.

Sam

2012-06-02 16:35:46

by Srivatsa S. Bhat

Subject: Re: [UPDATED PATCH 21/27] sparc32, smpboot: Use generic SMP booting infrastructure

On 06/02/2012 09:53 PM, Sam Ravnborg wrote:

>>
>> As I mentioned in my other mail, I am thinking of changing them to
>> arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online().
>> Let me know what you think of those names.
> Much better than "__" - so if none of the guys who excel in core
> code objects, go for that.
>


Ok..

>> Would you kindly add a changelog and your sign-off to this patch?
> Will do in next revision.
>


Great!

> ...
>
> Thanks for the thorough review.
> I will address all points - including passing the pointer down,
> as I assume you have some future plans with that pointer.
>
>>
>> I still didn't get how this solves the original problem of
>> not having sparc_cpu_model set to sparc_leon. You mentioned
>> that by the time we reach leon_smp_cpu_startup, that variable
>> is not set. Even inside leon_smp_cpu_startup, I don't immediately
>> see where it is set. Am I missing something?
>
> After looking more closely at the code, it is my understanding
> that a leon CPU, when started, will actually jump to the reset
> vector and start from there.
> So the secondary CPUs will run long after
> sparc_cpu_model is set, so we can safely use it.
>
> The sun-based CPUs will, in comparison, jump to
> an address supplied to a prom call - so they do not
> jump to the reset vector.
> But they also have sparc_cpu_model set, so no problem there
> either.
>


Ok..

> All this is my deduction from reading the code - but this
> is not an area I have looked at otherwise..
>
> I may not find time today to cook up a new version of
> the patch - but then you will have it tomorrow.
>


Sure! Once again, thanks a lot for your time and efforts :-)

Regards,
Srivatsa S. Bhat

2012-06-02 18:07:00

by Srivatsa S. Bhat

Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

On 06/01/2012 09:06 PM, Jan Beulich wrote:

>>>> On 01.06.12 at 17:13, "Srivatsa S. Bhat" <[email protected]>
> wrote:
>> On 06/01/2012 06:29 PM, Jan Beulich wrote:
>>
>>>>>> On 01.06.12 at 11:11, "Srivatsa S. Bhat" <[email protected]>
>>> wrote:
>>>> xen_play_dead calls cpu_bringup() which looks weird, because xen_play_dead()
>>>> is invoked in the cpu down path, whereas cpu_bringup() (as the name
>>>> suggests) is useful in the cpu bringup path.
>>>
>>> This might not be correct - the code as it is without this change is
>>> safe even when the vCPU gets onlined back later by an external
>>> entity (e.g. the Xen tool stack), and it would in that case resume
>>> at the return point of the VCPUOP_down hypercall. That might
>>> be a heritage from the original XenoLinux tree though, and be
>>> meaningless in pv-ops context - Jeremy, Konrad?
>>>
>>> Possibly it was bogus/unused even in that original tree - Keir?
>>>
>>
>>
>> Thanks for your comments Jan!
>>
>> In case this change is wrong, the other method I had in mind was to call
>> cpu_bringup_and_idle() in xen_play_dead(). (Even ARM does something similar,
>> in the sense that it runs the cpu bringup code including cpu_idle(), in the
>> cpu offline path, namely the cpu_die() function). Would that approach work
>> for xen as well? If yes, then we wouldn't have any issues to convert xen to
>> generic code.
>
> No, that wouldn't work either afaict - the function is expected
> to return.
>


Ok.. So, I would love to hear confirmation about whether this patch (which
removes cpu_bringup() in xen_play_dead()) will break things, or whether it is good as is.

If it's not correct, then we can probably make __cpu_post_online() return an int,
with the meaning:

0 => success, go ahead and call cpu_idle()
non-zero => stop here, thanks for your services so far.. now leave the rest to me.

So all other archs will return 0, Xen will return non-zero, and it will handle
when to call cpu_idle() and when not to do so.

Might sound a bit ugly, but I don't see much other option. Suggestions are
appreciated!

Regards,
Srivatsa S. Bhat

2012-06-03 08:25:37

by Yong Zhang

Subject: Re: [PATCH 10/27] mips, smpboot: Use generic SMP booting infrastructure

On Fri, Jun 01, 2012 at 02:42:32PM +0530, Srivatsa S. Bhat wrote:
> Convert mips to use the generic framework to boot secondary CPUs.
>
> Notes:
> 1. The boot processor was setting the secondary cpu in cpu_online_mask!
> Instead, leave it up to the secondary cpu (... and it will be done by the
> generic code now).
>
> 2. Make the boot cpu wait for the secondary cpu to be set in cpu_online_mask
> before returning.

We don't need to wait on cpu_callin_map (the code just above
yours) any more.

>
> 3. Don't enable interrupts in cmp_smp_finish() and vsmp_smp_finish().
> Do it much later, in generic code.

Hmmm... the bad thing is that some boards enable irqs earlier than
->smp_finish(); I have sent patches for that (moving the irq enable
into smp_finish() and delaying smp_finish()).
Please check patch#0001~patch#0004 in
http://marc.info/?l=linux-mips&m=133758022710973&w=2

>
> 4. In synchronise_count_slave(), use local_save_flags() instead of
> local_irq_save() because irqs are still disabled.

We can just remove local_irq_save()/local_irq_restore() like:
http://marc.info/?l=linux-mips&m=133758046211043&w=2

Thanks,
Yong

>
> Cc: Ralf Baechle <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Eric Dumazet <[email protected]>
> Cc: Mike Frysinger <[email protected]>
> Cc: David Howells <[email protected]>
> Cc: Arun Sharma <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Rusty Russell <[email protected]>
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/mips/kernel/smp-cmp.c | 8 ++++----
> arch/mips/kernel/smp-mt.c | 2 --
> arch/mips/kernel/smp.c | 23 +++++++++++++++--------
> arch/mips/kernel/sync-r4k.c | 3 ++-
> 4 files changed, 21 insertions(+), 15 deletions(-)
>
> diff --git a/arch/mips/kernel/smp-cmp.c b/arch/mips/kernel/smp-cmp.c
> index e7e03ec..7ecd6db 100644
> --- a/arch/mips/kernel/smp-cmp.c
> +++ b/arch/mips/kernel/smp-cmp.c
> @@ -108,7 +108,9 @@ static void cmp_init_secondary(void)
>
> static void cmp_smp_finish(void)
> {
> - pr_debug("SMPCMP: CPU%d: %s\n", smp_processor_id(), __func__);
> + unsigned int cpu = smp_processor_id();
> +
> + pr_debug("SMPCMP: CPU%d: %s\n", cpu, __func__);
>
> /* CDFIXME: remove this? */
> write_c0_compare(read_c0_count() + (8 * mips_hpt_frequency / HZ));
> @@ -116,10 +118,8 @@ static void cmp_smp_finish(void)
> #ifdef CONFIG_MIPS_MT_FPAFF
> /* If we have an FPU, enroll ourselves in the FPU-full mask */
> if (cpu_has_fpu)
> - cpu_set(smp_processor_id(), mt_fpu_cpumask);
> + cpumask_set_cpu(cpu, &mt_fpu_cpumask);
> #endif /* CONFIG_MIPS_MT_FPAFF */
> -
> - local_irq_enable();
> }
>
> static void cmp_cpus_done(void)
> diff --git a/arch/mips/kernel/smp-mt.c b/arch/mips/kernel/smp-mt.c
> index ff17868..25f7b09 100644
> --- a/arch/mips/kernel/smp-mt.c
> +++ b/arch/mips/kernel/smp-mt.c
> @@ -171,8 +171,6 @@ static void __cpuinit vsmp_smp_finish(void)
> if (cpu_has_fpu)
> cpu_set(smp_processor_id(), mt_fpu_cpumask);
> #endif /* CONFIG_MIPS_MT_FPAFF */
> -
> - local_irq_enable();
> }
>
> static void vsmp_cpus_done(void)
> diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
> index 71a95f5..4453d4d 100644
> --- a/arch/mips/kernel/smp.c
> +++ b/arch/mips/kernel/smp.c
> @@ -33,6 +33,7 @@
> #include <linux/cpu.h>
> #include <linux/err.h>
> #include <linux/ftrace.h>
> +#include <linux/smpboot.h>
>
> #include <linux/atomic.h>
> #include <asm/cpu.h>
> @@ -98,8 +99,11 @@ __cpuinit void register_smp_ops(struct plat_smp_ops *ops)
> */
> asmlinkage __cpuinit void start_secondary(void)
> {
> - unsigned int cpu;
> + smpboot_start_secondary(NULL);
> +}
>
> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> #ifdef CONFIG_MIPS_MT_SMTC
> /* Only do cpu_probe for first TC of CPU */
> if ((read_c0_tcbind() & TCBIND_CURTC) == 0)
> @@ -116,20 +120,22 @@ asmlinkage __cpuinit void start_secondary(void)
> */
>
> calibrate_delay();
> - preempt_disable();
> - cpu = smp_processor_id();
> - cpu_data[cpu].udelay_val = loops_per_jiffy;
> + cpu_data[smp_processor_id()].udelay_val = loops_per_jiffy;
> +}
>
> - notify_cpu_starting(cpu);
> +void __cpuinit __cpu_pre_online(void *unused)
> +{
> + unsigned int cpu = smp_processor_id();
>
> mp_ops->smp_finish();
> set_cpu_sibling_map(cpu);
>
> cpu_set(cpu, cpu_callin_map);
> +}
>
> +void __cpuinit __cpu_post_online(void *unused)
> +{
> synchronise_count_slave();
> -
> - cpu_idle();
> }
>
> /*
> @@ -196,7 +202,8 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
> while (!cpu_isset(cpu, cpu_callin_map))
> udelay(100);
>
> - set_cpu_online(cpu, true);
> + while (!cpu_online(cpu))
> + udelay(100);
>
> return 0;
> }
> diff --git a/arch/mips/kernel/sync-r4k.c b/arch/mips/kernel/sync-r4k.c
> index 99f913c..7f43069 100644
> --- a/arch/mips/kernel/sync-r4k.c
> +++ b/arch/mips/kernel/sync-r4k.c
> @@ -46,7 +46,8 @@ void __cpuinit synchronise_count_master(void)
> printk(KERN_INFO "Synchronize counters across %u CPUs: ",
> num_online_cpus());
>
> - local_irq_save(flags);
> + /* IRQs are already disabled. So just save the flags */
> + local_save_flags(flags);
>
> /*
> * Notify the slaves that it's time to start

2012-06-03 08:34:09

by Yong Zhang

Subject: Re: [PATCH 17/27] mn10300, smpboot: Use generic SMP booting infrastructure

On Fri, Jun 01, 2012 at 02:44:08PM +0530, Srivatsa S. Bhat wrote:
> From: Nikunj A. Dadhania <[email protected]>
>
> Convert mn10300 to use the generic framework to boot secondary CPUs.
>
> Notes:
> 1. In order to avoid enabling interrupts very early during secondary CPU
> bringup (in smp_cpu_init()), use arch_local_save_flags() instead of
> arch_local_cli_save().

We can just remove arch_local_cli_save()/arch_local_irq_restore() IMHO.

Thanks,
Yong

>
> Signed-off-by: Nikunj A. Dadhania <[email protected]>
> Cc: David Howells <[email protected]>
> Cc: Koichi Yasutake <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Yong Zhang <[email protected]>
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/mn10300/kernel/smp.c | 32 +++++++++++---------------------
> 1 files changed, 11 insertions(+), 21 deletions(-)
>
> diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
> index b19e75d2..9c4e35e 100644
> --- a/arch/mn10300/kernel/smp.c
> +++ b/arch/mn10300/kernel/smp.c
> @@ -25,6 +25,7 @@
> #include <linux/profile.h>
> #include <linux/smp.h>
> #include <linux/cpu.h>
> +#include <linux/smpboot.h>
> #include <asm/tlbflush.h>
> #include <asm/bitops.h>
> #include <asm/processor.h>
> @@ -100,7 +101,6 @@ cpumask_t cpu_initialized __initdata = CPU_MASK_NONE;
> static int do_boot_cpu(int);
> static void smp_show_cpu_info(int cpu_id);
> static void smp_callin(void);
> -static void smp_online(void);
> static void smp_store_cpu_info(int);
> static void smp_cpu_init(void);
> static void smp_tune_scheduling(void);
> @@ -607,7 +607,7 @@ static void __init smp_cpu_init(void)
> mn10300_ipi_shutdown(SMP_BOOT_IRQ);
>
> /* Set up the non-maskable call function IPI */
> - flags = arch_local_cli_save();
> + flags = arch_local_save_flags();
> GxICR(CALL_FUNCTION_NMI_IPI) = GxICR_NMI | GxICR_ENABLE | GxICR_DETECT;
> tmp16 = GxICR(CALL_FUNCTION_NMI_IPI);
> arch_local_irq_restore(flags);
> @@ -655,20 +655,25 @@ void smp_prepare_cpu_init(void)
> */
> int __init start_secondary(void *unused)
> {
> + smpboot_start_secondary(unused);
> + return 0;
> +}
> +
> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> smp_cpu_init();
> smp_callin();
> while (!cpumask_test_cpu(smp_processor_id(), &smp_commenced_mask))
> cpu_relax();
>
> local_flush_tlb();
> - preempt_disable();
> - smp_online();
> +}
>
> +void __cpuinit __cpu_post_online(void *unused)
> +{
> #ifdef CONFIG_GENERIC_CLOCKEVENTS
> init_clockevents();
> #endif
> - cpu_idle();
> - return 0;
> }
>
> /**
> @@ -865,21 +870,6 @@ static void __init smp_callin(void)
> cpumask_set_cpu(cpu, &cpu_callin_map);
> }
>
> -/**
> - * smp_online - Set cpu_online_mask
> - */
> -static void __init smp_online(void)
> -{
> - int cpu;
> -
> - cpu = smp_processor_id();
> -
> - notify_cpu_starting(cpu);
> -
> - set_cpu_online(cpu, true);
> -
> - local_irq_enable();
> -}
>
> /**
> * smp_cpus_done -
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

2012-06-03 08:42:35

by Yong Zhang

Subject: Re: [PATCH 23/27] cris, smpboot: Use generic SMP booting infrastructure

On Fri, Jun 01, 2012 at 02:45:40PM +0530, Srivatsa S. Bhat wrote:
> Convert cris to use the generic framework to boot secondary CPUs.
>
> Cc: Mikael Starvik <[email protected]>
> Cc: Jesper Nilsson <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Eric Dumazet <[email protected]>
> Cc: Mike Frysinger <[email protected]>
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/cris/arch-v32/kernel/smp.c | 14 ++++++--------
> 1 files changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/arch/cris/arch-v32/kernel/smp.c b/arch/cris/arch-v32/kernel/smp.c
> index ebe2cb3..357114a 100644
> --- a/arch/cris/arch-v32/kernel/smp.c
> +++ b/arch/cris/arch-v32/kernel/smp.c
> @@ -17,6 +17,7 @@
> #include <linux/cpumask.h>
> #include <linux/interrupt.h>
> #include <linux/module.h>
> +#include <linux/smpboot.h>
>
> #define IPI_SCHEDULE 1
> #define IPI_CALL 2
> @@ -145,8 +146,11 @@ smp_boot_one_cpu(int cpuid, struct task_struct idle)
> * specific stuff such as the local timer and the MMU. */
> void __init smp_callin(void)
> {
> - extern void cpu_idle(void);
> + smpboot_start_secondary(NULL);
> +}
>
> +void __cpuinit __cpu_pre_starting(void *unused)
> +{
> int cpu = cpu_now_booting;
> reg_intr_vect_rw_mask vect_mask = {0};
>
> @@ -161,16 +165,10 @@ void __init smp_callin(void)
> /* Setup local timer. */
> cris_timer_init();
>
> - /* Enable IRQ and idle */
> + /* Enable IRQ */

I don't know cris; is the comment right?

> REG_WR(intr_vect, irq_regs[cpu], rw_mask, vect_mask);
> crisv32_unmask_irq(IPI_INTR_VECT);
> crisv32_unmask_irq(TIMER0_INTR_VECT);
> - preempt_disable();
> - notify_cpu_starting(cpu);
> - local_irq_enable();
> -
> - set_cpu_online(cpu, true);
> - cpu_idle();

Before your patch, the comment's scope seems to end here.

Thanks,
Yong

> }
>
> /* Stop execution on this CPU.*/

2012-06-03 08:51:32

by Yong Zhang

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Fri, Jun 01, 2012 at 05:39:27PM +0530, Srivatsa S. Bhat wrote:
> +/* Implement the following functions in your architecture, as appropriate. */
> +
> +/**
> + * __cpu_pre_starting()
> + *
> + * Implement whatever you need to do before the CPU_STARTING notifiers are
> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
> + * coming up. So that cpu better be prepared! IOW, implement all the early
> + * boot/init code for the cpu here. And do NOT enable interrupts.
> + */
> +#ifndef __cpu_pre_starting

Why the ifndef?

Thanks,
Yong

> +void __weak __cpu_pre_starting(void *arg) {}
> +#endif
> +
> +/**
> + * __cpu_pre_online()
> + *
> + * Implement whatever you need to do before the upcoming CPU is set in the
> + * cpu_online_mask. (Setting the cpu in the cpu_online_mask is like an
> + * announcement that the cpu has come up, because it would be publicly
> + * visible now). Again, don't enable interrupts.
> + */
> +#ifndef __cpu_pre_online
> +void __weak __cpu_pre_online(void *arg) {}
> +#endif
> +
> +/**
> + * __cpu_post_online()
> + *
> + * Implement whatever you need to do after the CPU has been set in the
> + * cpu_online_mask, and before enabling interrupts and calling cpu_idle().
> + * Ideally, it is preferable if you don't have anything to do here.
> + * We want to move to a model where setting cpu_online_mask is pretty
> + * much the final step. Again, don't enable interrupts.
> + */
> +#ifndef __cpu_post_online
> +void __weak __cpu_post_online(void *arg) {}
> +#endif
> +
> +
> +/**
> + * smpboot_start_secondary - Generic way to boot secondary processors
> + */
> +void __cpuinit smpboot_start_secondary(void *arg)
> +{
> + unsigned int cpu;
> +
> + /*
> + * SMP booting is extremely fragile in some architectures. So run
> + * the cpu initialization code first before anything else.
> + */
> + __cpu_pre_starting(arg);
> +
> + preempt_disable();
> + cpu = smp_processor_id();
> +
> + /* Invoke the CPU_STARTING notifier callbacks */
> + notify_cpu_starting(cpu);
> +
> + __cpu_pre_online(arg);
> +
> + /* Set the CPU in the cpu_online_mask */
> + set_cpu_online(cpu, true);
> +
> + __cpu_post_online(arg);
> +
> + /* Enable local interrupts now */
> + local_irq_enable();
> +
> + wmb();
> + cpu_idle();
> +
> + /* We should never reach here! */
> + BUG();
> +}
> diff --git a/kernel/smpboot.h b/kernel/smpboot.h
> index 80c0acf..9753cc0 100644
> --- a/kernel/smpboot.h
> +++ b/kernel/smpboot.h
> @@ -1,5 +1,5 @@
> -#ifndef SMPBOOT_H
> -#define SMPBOOT_H
> +#ifndef __SMPBOOT_H
> +#define __SMPBOOT_H
>
> struct task_struct;
>
>
>
> From: Srivatsa S. Bhat <[email protected]>
> Subject: [PATCH 02/27] smpboot: Add provisions for arch-specific locking around cpu_online_mask
>
> We want to make smp booting as generic as possible and remove code
> duplication in arch/ directories.
>
> While manipulating the cpu_online_mask, x86 uses an additional lock, i.e.,
> 'vector_lock'. So provide a generic way to implement such arch-specific
> extra locking, by providing weakly defined functions arch_vector_lock()
> and arch_vector_unlock() which can be overridden by different architectures
> suitably.
>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Suresh Siddha <[email protected]>
> Cc: Venkatesh Pallipadi <[email protected]>
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> kernel/smpboot.c | 11 ++++++++++-
> 1 files changed, 10 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/smpboot.c b/kernel/smpboot.c
> index 6c26133..5ae1805 100644
> --- a/kernel/smpboot.c
> +++ b/kernel/smpboot.c
> @@ -107,6 +107,13 @@ void __weak __cpu_pre_online(void *arg) {}
> void __weak __cpu_post_online(void *arg) {}
> #endif
>
> +/*
> + * Optional arch-specific locking for manipulating cpu_online_mask while
> + * bringing up secondary CPUs.
> + */
> +void __weak arch_vector_lock(void) {}
> +void __weak arch_vector_unlock(void) {}
> +
>
> /**
> * smpboot_start_secondary - Generic way to boot secondary processors
> @@ -129,8 +136,10 @@ void __cpuinit smpboot_start_secondary(void *arg)
>
> __cpu_pre_online(arg);
>
> - /* Set the CPU in the cpu_online_mask */
> + /* Set the CPU in the cpu_online_mask with required locks held */
> + arch_vector_lock();
> set_cpu_online(cpu, true);
> + arch_vector_unlock();
>
> __cpu_post_online(arg);
>
>
>
> From: Srivatsa S. Bhat <[email protected]>
> Subject: [PATCH 03/27] smpboot: Define and use cpu_state per-cpu variable in generic code
>
> The per-cpu variable cpu_state is used in x86 and also used in other
> architectures, to track the state of the cpu during bringup and hotplug.
> Pull it out into generic code.
>
> Cc: Tony Luck <[email protected]>
> Cc: Fenghua Yu <[email protected]>
> Cc: Ralf Baechle <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mundt <[email protected]>
> Cc: Chris Metcalf <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Cc: Jeremy Fitzhardinge <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Mike Frysinger <[email protected]>
> Cc: Yong Zhang <[email protected]>
> Cc: Venkatesh Pallipadi <[email protected]>
> Cc: Suresh Siddha <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/ia64/include/asm/cpu.h | 2 --
> arch/ia64/kernel/process.c | 1 +
> arch/ia64/kernel/smpboot.c | 6 +-----
> arch/mips/cavium-octeon/smp.c | 4 +---
> arch/powerpc/kernel/smp.c | 6 +-----
> arch/sh/include/asm/smp.h | 2 --
> arch/sh/kernel/smp.c | 4 +---
> arch/tile/kernel/smpboot.c | 4 +---
> arch/x86/include/asm/cpu.h | 2 --
> arch/x86/kernel/smpboot.c | 4 +---
> arch/x86/xen/smp.c | 1 +
> include/linux/smpboot.h | 1 +
> kernel/smpboot.c | 4 ++++
> 13 files changed, 13 insertions(+), 28 deletions(-)
>
> diff --git a/arch/ia64/include/asm/cpu.h b/arch/ia64/include/asm/cpu.h
> index fcca30b..1c3acac 100644
> --- a/arch/ia64/include/asm/cpu.h
> +++ b/arch/ia64/include/asm/cpu.h
> @@ -12,8 +12,6 @@ struct ia64_cpu {
>
> DECLARE_PER_CPU(struct ia64_cpu, cpu_devices);
>
> -DECLARE_PER_CPU(int, cpu_state);
> -
> #ifdef CONFIG_HOTPLUG_CPU
> extern int arch_register_cpu(int num);
> extern void arch_unregister_cpu(int);
> diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
> index 5e0e86d..32566c7 100644
> --- a/arch/ia64/kernel/process.c
> +++ b/arch/ia64/kernel/process.c
> @@ -29,6 +29,7 @@
> #include <linux/kdebug.h>
> #include <linux/utsname.h>
> #include <linux/tracehook.h>
> +#include <linux/smpboot.h>
>
> #include <asm/cpu.h>
> #include <asm/delay.h>
> diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
> index 963d2db..df00a3c 100644
> --- a/arch/ia64/kernel/smpboot.c
> +++ b/arch/ia64/kernel/smpboot.c
> @@ -39,6 +39,7 @@
> #include <linux/efi.h>
> #include <linux/percpu.h>
> #include <linux/bitops.h>
> +#include <linux/smpboot.h>
>
> #include <linux/atomic.h>
> #include <asm/cache.h>
> @@ -111,11 +112,6 @@ extern unsigned long ia64_iobase;
>
> struct task_struct *task_for_booting_cpu;
>
> -/*
> - * State for each CPU
> - */
> -DEFINE_PER_CPU(int, cpu_state);
> -
> cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
> EXPORT_SYMBOL(cpu_core_map);
> DEFINE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
> diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
> index 97e7ce9..93cd4b0 100644
> --- a/arch/mips/cavium-octeon/smp.c
> +++ b/arch/mips/cavium-octeon/smp.c
> @@ -13,6 +13,7 @@
> #include <linux/kernel_stat.h>
> #include <linux/sched.h>
> #include <linux/module.h>
> +#include <linux/smpboot.h>
>
> #include <asm/mmu_context.h>
> #include <asm/time.h>
> @@ -252,9 +253,6 @@ static void octeon_cpus_done(void)
>
> #ifdef CONFIG_HOTPLUG_CPU
>
> -/* State of each CPU. */
> -DEFINE_PER_CPU(int, cpu_state);
> -
> extern void fixup_irqs(void);
>
> static DEFINE_SPINLOCK(smp_reserve_lock);
> diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> index e1417c4..1928058a 100644
> --- a/arch/powerpc/kernel/smp.c
> +++ b/arch/powerpc/kernel/smp.c
> @@ -31,6 +31,7 @@
> #include <linux/cpu.h>
> #include <linux/notifier.h>
> #include <linux/topology.h>
> +#include <linux/smpboot.h>
>
> #include <asm/ptrace.h>
> #include <linux/atomic.h>
> @@ -57,11 +58,6 @@
> #define DBG(fmt...)
> #endif
>
> -#ifdef CONFIG_HOTPLUG_CPU
> -/* State of each CPU during hotplug phases */
> -static DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -#endif
> -
> struct thread_info *secondary_ti;
>
> DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
> diff --git a/arch/sh/include/asm/smp.h b/arch/sh/include/asm/smp.h
> index 78b0d0f4..bda041e 100644
> --- a/arch/sh/include/asm/smp.h
> +++ b/arch/sh/include/asm/smp.h
> @@ -31,8 +31,6 @@ enum {
> SMP_MSG_NR, /* must be last */
> };
>
> -DECLARE_PER_CPU(int, cpu_state);
> -
> void smp_message_recv(unsigned int msg);
> void smp_timer_broadcast(const struct cpumask *mask);
>
> diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
> index b86e9ca..8e0fde0 100644
> --- a/arch/sh/kernel/smp.c
> +++ b/arch/sh/kernel/smp.c
> @@ -22,6 +22,7 @@
> #include <linux/interrupt.h>
> #include <linux/sched.h>
> #include <linux/atomic.h>
> +#include <linux/smpboot.h>
> #include <asm/processor.h>
> #include <asm/mmu_context.h>
> #include <asm/smp.h>
> @@ -34,9 +35,6 @@ int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */
>
> struct plat_smp_ops *mp_ops = NULL;
>
> -/* State of each CPU */
> -DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -
> void __cpuinit register_smp_ops(struct plat_smp_ops *ops)
> {
> if (mp_ops)
> diff --git a/arch/tile/kernel/smpboot.c b/arch/tile/kernel/smpboot.c
> index e686c5a..24a9c06 100644
> --- a/arch/tile/kernel/smpboot.c
> +++ b/arch/tile/kernel/smpboot.c
> @@ -25,13 +25,11 @@
> #include <linux/delay.h>
> #include <linux/err.h>
> #include <linux/irq.h>
> +#include <linux/smpboot.h>
> #include <asm/mmu_context.h>
> #include <asm/tlbflush.h>
> #include <asm/sections.h>
>
> -/* State of each CPU. */
> -static DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -
> /* The messaging code jumps to this pointer during boot-up */
> unsigned long start_cpu_function_addr;
>
> diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
> index 4564c8e..2d0b239 100644
> --- a/arch/x86/include/asm/cpu.h
> +++ b/arch/x86/include/asm/cpu.h
> @@ -30,8 +30,6 @@ extern int arch_register_cpu(int num);
> extern void arch_unregister_cpu(int);
> #endif
>
> -DECLARE_PER_CPU(int, cpu_state);
> -
> int mwait_usable(const struct cpuinfo_x86 *);
>
> #endif /* _ASM_X86_CPU_H */
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index bfbe30e..269bc1f 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -51,6 +51,7 @@
> #include <linux/stackprotector.h>
> #include <linux/gfp.h>
> #include <linux/cpuidle.h>
> +#include <linux/smpboot.h>
>
> #include <asm/acpi.h>
> #include <asm/desc.h>
> @@ -73,9 +74,6 @@
> #include <asm/smpboot_hooks.h>
> #include <asm/i8259.h>
>
> -/* State of each CPU */
> -DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -
> #ifdef CONFIG_HOTPLUG_CPU
> /*
> * We need this for trampoline_base protection from concurrent accesses when
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 2ef5948..09a7199 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -16,6 +16,7 @@
> #include <linux/err.h>
> #include <linux/slab.h>
> #include <linux/smp.h>
> +#include <linux/smpboot.h>
>
> #include <asm/paravirt.h>
> #include <asm/desc.h>
> diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
> index 63bbedd..834d90c 100644
> --- a/include/linux/smpboot.h
> +++ b/include/linux/smpboot.h
> @@ -5,6 +5,7 @@
> #ifndef SMPBOOT_H
> #define SMPBOOT_H
>
> +DECLARE_PER_CPU(int, cpu_state);
> extern void smpboot_start_secondary(void *arg);
>
> #endif
> diff --git a/kernel/smpboot.c b/kernel/smpboot.c
> index 5ae1805..0df43b0 100644
> --- a/kernel/smpboot.c
> +++ b/kernel/smpboot.c
> @@ -67,6 +67,8 @@ void __init idle_threads_init(void)
> }
> #endif
>
> +/* State of each CPU during bringup/teardown */
> +DEFINE_PER_CPU(int, cpu_state) = { 0 };
>
> /* Implement the following functions in your architecture, as appropriate. */
>
> @@ -141,6 +143,8 @@ void __cpuinit smpboot_start_secondary(void *arg)
> set_cpu_online(cpu, true);
> arch_vector_unlock();
>
> + per_cpu(cpu_state, cpu) = CPU_ONLINE;
> +
> __cpu_post_online(arg);
>
> /* Enable local interrupts now */
>
>

2012-06-03 08:54:24

by Yong Zhang

Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Fri, Jun 01, 2012 at 05:39:27PM +0530, Srivatsa S. Bhat wrote:
> +void __cpuinit smpboot_start_secondary(void *arg)
> +{
> + unsigned int cpu;
> +
> + /*
> + * SMP booting is extremely fragile in some architectures. So run
> + * the cpu initialization code first before anything else.
> + */
> + __cpu_pre_starting(arg);
> +
> + preempt_disable();
> + cpu = smp_processor_id();
> +
> + /* Invoke the CPU_STARTING notifier callbacks */
> + notify_cpu_starting(cpu);
> +
> + __cpu_pre_online(arg);
> +
> + /* Set the CPU in the cpu_online_mask */
> + set_cpu_online(cpu, true);
> +
> + __cpu_post_online(arg);
> +

It seems worth catching an incorrect irq state here:

WARN_ON_ONCE(!irqs_disabled());

Thanks,
Yong

> + /* Enable local interrupts now */
> + local_irq_enable();
> +
> + wmb();
> + cpu_idle();
> +
> + /* We should never reach here! */
> + BUG();
> +}
> diff --git a/kernel/smpboot.h b/kernel/smpboot.h
> index 80c0acf..9753cc0 100644
> --- a/kernel/smpboot.h
> +++ b/kernel/smpboot.h
> @@ -1,5 +1,5 @@
> -#ifndef SMPBOOT_H
> -#define SMPBOOT_H
> +#ifndef __SMPBOOT_H
> +#define __SMPBOOT_H
>
> struct task_struct;
>
>
>
> From: Srivatsa S. Bhat <[email protected]>
> Subject: [PATCH 02/27] smpboot: Add provisions for arch-specific locking around cpu_online_mask
>
> We want to make smp booting as generic as possible and remove code
> duplication in arch/ directories.
>
> While manipulating the cpu_online_mask, x86 uses an additional lock, i.e.,
> 'vector_lock'. So provide a generic way to implement such arch-specific
> extra locking, by providing weakly defined functions arch_vector_lock()
> and arch_vector_unlock() which can be overridden by different architectures
> suitably.
>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Suresh Siddha <[email protected]>
> Cc: Venkatesh Pallipadi <[email protected]>
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> kernel/smpboot.c | 11 ++++++++++-
> 1 files changed, 10 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/smpboot.c b/kernel/smpboot.c
> index 6c26133..5ae1805 100644
> --- a/kernel/smpboot.c
> +++ b/kernel/smpboot.c
> @@ -107,6 +107,13 @@ void __weak __cpu_pre_online(void *arg) {}
> void __weak __cpu_post_online(void *arg) {}
> #endif
>
> +/*
> + * Optional arch-specific locking for manipulating cpu_online_mask while
> + * bringing up secondary CPUs.
> + */
> +void __weak arch_vector_lock(void) {}
> +void __weak arch_vector_unlock(void) {}
> +
>
> /**
> * smpboot_start_secondary - Generic way to boot secondary processors
> @@ -129,8 +136,10 @@ void __cpuinit smpboot_start_secondary(void *arg)
>
> __cpu_pre_online(arg);
>
> - /* Set the CPU in the cpu_online_mask */
> + /* Set the CPU in the cpu_online_mask with required locks held */
> + arch_vector_lock();
> set_cpu_online(cpu, true);
> + arch_vector_unlock();
>
> __cpu_post_online(arg);
>
>
>
> From: Srivatsa S. Bhat <[email protected]>
> Subject: [PATCH 03/27] smpboot: Define and use cpu_state per-cpu variable in generic code
>
> The per-cpu variable cpu_state is used in x86 and also used in other
> architectures, to track the state of the cpu during bringup and hotplug.
> Pull it out into generic code.
>
> Cc: Tony Luck <[email protected]>
> Cc: Fenghua Yu <[email protected]>
> Cc: Ralf Baechle <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mundt <[email protected]>
> Cc: Chris Metcalf <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Cc: Jeremy Fitzhardinge <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Mike Frysinger <[email protected]>
> Cc: Yong Zhang <[email protected]>
> Cc: Venkatesh Pallipadi <[email protected]>
> Cc: Suresh Siddha <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Srivatsa S. Bhat <[email protected]>
> ---
>
> arch/ia64/include/asm/cpu.h | 2 --
> arch/ia64/kernel/process.c | 1 +
> arch/ia64/kernel/smpboot.c | 6 +-----
> arch/mips/cavium-octeon/smp.c | 4 +---
> arch/powerpc/kernel/smp.c | 6 +-----
> arch/sh/include/asm/smp.h | 2 --
> arch/sh/kernel/smp.c | 4 +---
> arch/tile/kernel/smpboot.c | 4 +---
> arch/x86/include/asm/cpu.h | 2 --
> arch/x86/kernel/smpboot.c | 4 +---
> arch/x86/xen/smp.c | 1 +
> include/linux/smpboot.h | 1 +
> kernel/smpboot.c | 4 ++++
> 13 files changed, 13 insertions(+), 28 deletions(-)
>
> diff --git a/arch/ia64/include/asm/cpu.h b/arch/ia64/include/asm/cpu.h
> index fcca30b..1c3acac 100644
> --- a/arch/ia64/include/asm/cpu.h
> +++ b/arch/ia64/include/asm/cpu.h
> @@ -12,8 +12,6 @@ struct ia64_cpu {
>
> DECLARE_PER_CPU(struct ia64_cpu, cpu_devices);
>
> -DECLARE_PER_CPU(int, cpu_state);
> -
> #ifdef CONFIG_HOTPLUG_CPU
> extern int arch_register_cpu(int num);
> extern void arch_unregister_cpu(int);
> diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
> index 5e0e86d..32566c7 100644
> --- a/arch/ia64/kernel/process.c
> +++ b/arch/ia64/kernel/process.c
> @@ -29,6 +29,7 @@
> #include <linux/kdebug.h>
> #include <linux/utsname.h>
> #include <linux/tracehook.h>
> +#include <linux/smpboot.h>
>
> #include <asm/cpu.h>
> #include <asm/delay.h>
> diff --git a/arch/ia64/kernel/smpboot.c b/arch/ia64/kernel/smpboot.c
> index 963d2db..df00a3c 100644
> --- a/arch/ia64/kernel/smpboot.c
> +++ b/arch/ia64/kernel/smpboot.c
> @@ -39,6 +39,7 @@
> #include <linux/efi.h>
> #include <linux/percpu.h>
> #include <linux/bitops.h>
> +#include <linux/smpboot.h>
>
> #include <linux/atomic.h>
> #include <asm/cache.h>
> @@ -111,11 +112,6 @@ extern unsigned long ia64_iobase;
>
> struct task_struct *task_for_booting_cpu;
>
> -/*
> - * State for each CPU
> - */
> -DEFINE_PER_CPU(int, cpu_state);
> -
> cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
> EXPORT_SYMBOL(cpu_core_map);
> DEFINE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
> diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
> index 97e7ce9..93cd4b0 100644
> --- a/arch/mips/cavium-octeon/smp.c
> +++ b/arch/mips/cavium-octeon/smp.c
> @@ -13,6 +13,7 @@
> #include <linux/kernel_stat.h>
> #include <linux/sched.h>
> #include <linux/module.h>
> +#include <linux/smpboot.h>
>
> #include <asm/mmu_context.h>
> #include <asm/time.h>
> @@ -252,9 +253,6 @@ static void octeon_cpus_done(void)
>
> #ifdef CONFIG_HOTPLUG_CPU
>
> -/* State of each CPU. */
> -DEFINE_PER_CPU(int, cpu_state);
> -
> extern void fixup_irqs(void);
>
> static DEFINE_SPINLOCK(smp_reserve_lock);
> diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> index e1417c4..1928058a 100644
> --- a/arch/powerpc/kernel/smp.c
> +++ b/arch/powerpc/kernel/smp.c
> @@ -31,6 +31,7 @@
> #include <linux/cpu.h>
> #include <linux/notifier.h>
> #include <linux/topology.h>
> +#include <linux/smpboot.h>
>
> #include <asm/ptrace.h>
> #include <linux/atomic.h>
> @@ -57,11 +58,6 @@
> #define DBG(fmt...)
> #endif
>
> -#ifdef CONFIG_HOTPLUG_CPU
> -/* State of each CPU during hotplug phases */
> -static DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -#endif
> -
> struct thread_info *secondary_ti;
>
> DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
> diff --git a/arch/sh/include/asm/smp.h b/arch/sh/include/asm/smp.h
> index 78b0d0f4..bda041e 100644
> --- a/arch/sh/include/asm/smp.h
> +++ b/arch/sh/include/asm/smp.h
> @@ -31,8 +31,6 @@ enum {
> SMP_MSG_NR, /* must be last */
> };
>
> -DECLARE_PER_CPU(int, cpu_state);
> -
> void smp_message_recv(unsigned int msg);
> void smp_timer_broadcast(const struct cpumask *mask);
>
> diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
> index b86e9ca..8e0fde0 100644
> --- a/arch/sh/kernel/smp.c
> +++ b/arch/sh/kernel/smp.c
> @@ -22,6 +22,7 @@
> #include <linux/interrupt.h>
> #include <linux/sched.h>
> #include <linux/atomic.h>
> +#include <linux/smpboot.h>
> #include <asm/processor.h>
> #include <asm/mmu_context.h>
> #include <asm/smp.h>
> @@ -34,9 +35,6 @@ int __cpu_logical_map[NR_CPUS]; /* Map logical to physical */
>
> struct plat_smp_ops *mp_ops = NULL;
>
> -/* State of each CPU */
> -DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -
> void __cpuinit register_smp_ops(struct plat_smp_ops *ops)
> {
> if (mp_ops)
> diff --git a/arch/tile/kernel/smpboot.c b/arch/tile/kernel/smpboot.c
> index e686c5a..24a9c06 100644
> --- a/arch/tile/kernel/smpboot.c
> +++ b/arch/tile/kernel/smpboot.c
> @@ -25,13 +25,11 @@
> #include <linux/delay.h>
> #include <linux/err.h>
> #include <linux/irq.h>
> +#include <linux/smpboot.h>
> #include <asm/mmu_context.h>
> #include <asm/tlbflush.h>
> #include <asm/sections.h>
>
> -/* State of each CPU. */
> -static DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -
> /* The messaging code jumps to this pointer during boot-up */
> unsigned long start_cpu_function_addr;
>
> diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
> index 4564c8e..2d0b239 100644
> --- a/arch/x86/include/asm/cpu.h
> +++ b/arch/x86/include/asm/cpu.h
> @@ -30,8 +30,6 @@ extern int arch_register_cpu(int num);
> extern void arch_unregister_cpu(int);
> #endif
>
> -DECLARE_PER_CPU(int, cpu_state);
> -
> int mwait_usable(const struct cpuinfo_x86 *);
>
> #endif /* _ASM_X86_CPU_H */
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index bfbe30e..269bc1f 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -51,6 +51,7 @@
> #include <linux/stackprotector.h>
> #include <linux/gfp.h>
> #include <linux/cpuidle.h>
> +#include <linux/smpboot.h>
>
> #include <asm/acpi.h>
> #include <asm/desc.h>
> @@ -73,9 +74,6 @@
> #include <asm/smpboot_hooks.h>
> #include <asm/i8259.h>
>
> -/* State of each CPU */
> -DEFINE_PER_CPU(int, cpu_state) = { 0 };
> -
> #ifdef CONFIG_HOTPLUG_CPU
> /*
> * We need this for trampoline_base protection from concurrent accesses when
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 2ef5948..09a7199 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -16,6 +16,7 @@
> #include <linux/err.h>
> #include <linux/slab.h>
> #include <linux/smp.h>
> +#include <linux/smpboot.h>
>
> #include <asm/paravirt.h>
> #include <asm/desc.h>
> diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
> index 63bbedd..834d90c 100644
> --- a/include/linux/smpboot.h
> +++ b/include/linux/smpboot.h
> @@ -5,6 +5,7 @@
> #ifndef SMPBOOT_H
> #define SMPBOOT_H
>
> +DECLARE_PER_CPU(int, cpu_state);
> extern void smpboot_start_secondary(void *arg);
>
> #endif
> diff --git a/kernel/smpboot.c b/kernel/smpboot.c
> index 5ae1805..0df43b0 100644
> --- a/kernel/smpboot.c
> +++ b/kernel/smpboot.c
> @@ -67,6 +67,8 @@ void __init idle_threads_init(void)
> }
> #endif
>
> +/* State of each CPU during bringup/teardown */
> +DEFINE_PER_CPU(int, cpu_state) = { 0 };
>
> /* Implement the following functions in your architecture, as appropriate. */
>
> @@ -141,6 +143,8 @@ void __cpuinit smpboot_start_secondary(void *arg)
> set_cpu_online(cpu, true);
> arch_vector_unlock();
>
> + per_cpu(cpu_state, cpu) = CPU_ONLINE;
> +
> __cpu_post_online(arg);
>
> /* Enable local interrupts now */
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

2012-06-03 11:23:54

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/03/2012 02:21 PM, Yong Zhang wrote:

> On Fri, Jun 01, 2012 at 05:39:27PM +0530, Srivatsa S. Bhat wrote:
>> +/* Implement the following functions in your architecture, as appropriate. */
>> +
>> +/**
>> + * __cpu_pre_starting()
>> + *
>> + * Implement whatever you need to do before the CPU_STARTING notifiers are
>> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
>> + * coming up. So that cpu better be prepared! IOW, implement all the early
>> + * boot/init code for the cpu here. And do NOT enable interrupts.
>> + */
>> +#ifndef __cpu_pre_starting
>
> Why the ifndef?
>


In short, to avoid breaking the build on x86.
We wanted to follow the x86 convention of having static inline functions in
arch/x86/include/asm/smp.h and use the smp_ops structure to route the calls
to x86 or xen as appropriate (see patch 4 in this series).

But __weak definitions interfere with that and break the build. Hence, we
followed what Linus suggested doing in a similar context:
https://lkml.org/lkml/2008/7/26/187

Regards,
Srivatsa S. Bhat

>

>> +void __weak __cpu_pre_starting(void *arg) {}
>> +#endif
>> +
>> +/**
>> + * __cpu_pre_online()
>> + *
>> + * Implement whatever you need to do before the upcoming CPU is set in the
>> + * cpu_online_mask. (Setting the cpu in the cpu_online_mask is like an
>> + * announcement that the cpu has come up, because it would be publicly
>> + * visible now). Again, don't enable interrupts.
>> + */
>> +#ifndef __cpu_pre_online
>> +void __weak __cpu_pre_online(void *arg) {}
>> +#endif
>> +

2012-06-03 11:34:50

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/03/2012 02:23 PM, Yong Zhang wrote:

> On Fri, Jun 01, 2012 at 05:39:27PM +0530, Srivatsa S. Bhat wrote:
>> +void __cpuinit smpboot_start_secondary(void *arg)
>> +{
>> + unsigned int cpu;
>> +
>> + /*
>> + * SMP booting is extremely fragile in some architectures. So run
>> + * the cpu initialization code first before anything else.
>> + */
>> + __cpu_pre_starting(arg);
>> +
>> + preempt_disable();
>> + cpu = smp_processor_id();
>> +
>> + /* Invoke the CPU_STARTING notifier callbacks */
>> + notify_cpu_starting(cpu);
>> +
>> + __cpu_pre_online(arg);
>> +
>> + /* Set the CPU in the cpu_online_mask */
>> + set_cpu_online(cpu, true);
>> +
>> + __cpu_post_online(arg);
>> +
>
> Seems it's worth catching an incorrect irq state here:
>
> WARN_ON_ONCE(!irqs_disabled());
>


That's a good point! But unfortunately we can't do that just yet,
because some architectures have explicit comments saying that irqs
must be enabled at a certain point in time, or do something more
involved than a plain local_irq_enable(), and hence that code falls
under the __cpu_post_online() function when converted to this model.

Examples: ARM (patch 26) and ia64 (patch 15)

Unless the maintainers give a go-ahead to change them, I don't
think it would be safe. (I have added the Notes section to each
patch to draw the maintainers' attention to such issues.)

Regards,
Srivatsa S. Bhat

>
>> + /* Enable local interrupts now */
>> + local_irq_enable();
>> +
>> + wmb();
>> + cpu_idle();
>> +
>> + /* We should never reach here! */
>> + BUG();
>> +}

2012-06-03 11:40:19

by Russell King - ARM Linux

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Sun, Jun 03, 2012 at 05:03:50PM +0530, Srivatsa S. Bhat wrote:
> That's a good point! But unfortunately we can't do that just yet.
> Because, some architectures have explicit comments that say that
> irqs must be enabled at a certain point in time, or have something
> special than just a local_irq_enable(), and hence fall under the
> __cpu_post_online() function when converted to this model.
>
> Examples: ARM (patch 26) and ia64 (patch 15)
>
> Unless the maintainers give a go-ahead to change them, I don't
> think it would be safe.. (I have added the Notes section to each
> patch to get the attention of the maintainers to such issues).

I have no intention of touching ARMs SMP bringup any more than is
absolutely necessary - this code is extremely fragile, and it's taken
a long time to get it to where it presently is, where most people are
happy with it.

Especially problematic is where we enable interrupts in
relation to other activities.

It's probably best to describe the code not as "mostly bug free" but
as "causes the least pain" because I don't think there is a solution
which satisfies all the constraints placed upon this code path by
the various parts of the kernel.

2012-06-03 11:49:13

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 10/27] mips, smpboot: Use generic SMP booting infrastructure

On 06/03/2012 01:55 PM, Yong Zhang wrote:

> On Fri, Jun 01, 2012 at 02:42:32PM +0530, Srivatsa S. Bhat wrote:
>> Convert mips to use the generic framework to boot secondary CPUs.
>>
>> Notes:
>> 1. The boot processor was setting the secondary cpu in cpu_online_mask!
>> Instead, leave it up to the secondary cpu (... and it will be done by the
>> generic code now).
>>
>> 2. Make the boot cpu wait for the secondary cpu to be set in cpu_online_mask
>> before returning.
>
> We no longer need to wait on cpu_callin_map (the code just above yours)
> either.


Yes, I noticed that while writing the patch. But I thought of cleaning
up the hundreds of callin/callout/commenced maps in the various architectures
and pulling them out into core code in a later series, and cleaning this up at
that time. I didn't want to do too many invasive changes in one shot.

But I guess for this particular case of mips, I can get rid of the wait for
cpu_callin_map in this patchset itself. I'll update this patch with that change.

Thanks!

>
>>
>> 3. Don't enable interrupts in cmp_smp_finish() and vsmp_smp_finish().
>> Do it much later, in generic code.
>
> Hmmm... the bad thing is that some boards enable irqs earlier than
> ->smp_finish(). I have sent patches for that (by moving the irq enable
> into smp_finish() and delaying smp_finish()).
> Please check patch#0001~patch#0004 in
> http://marc.info/?l=linux-mips&m=133758022710973&w=2
>
>>
>> 4. In synchronise_count_slave(), use local_save_flags() instead of
>> local_irq_save() because irqs are still disabled.
>
> We can just remove local_irq_save()/local_irq_restore() like:
> http://marc.info/?l=linux-mips&m=133758046211043&w=2
>


So what is the status of those patches? Has anyone picked them up?

I could rebase this patch on top of yours, or better yet, if your
patches haven't been picked up yet, I could include them in this
patchset itself to avoid too many dependencies on external patches.

What do you say?

Regards,
Srivatsa S. Bhat

2012-06-03 11:51:29

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 17/27] mn10300, smpboot: Use generic SMP booting infrastructure

On 06/03/2012 02:03 PM, Yong Zhang wrote:

> On Fri, Jun 01, 2012 at 02:44:08PM +0530, Srivatsa S. Bhat wrote:
>> From: Nikunj A. Dadhania <[email protected]>
>>
>> Convert mn10300 to use the generic framework to boot secondary CPUs.
>>
>> Notes:
>> 1. In order to avoid enabling interrupts very early during secondary CPU
>> bringup (in smp_cpu_init()), use arch_local_save_flags() instead of
>> arch_local_cli_save().
>
> We can just remove arch_local_cli_save()/arch_local_irq_restore() IMHO.
>


Ok, will update the patch. Thanks!

Regards,
Srivatsa S. Bhat

>
>>
>> Signed-off-by: Nikunj A. Dadhania <[email protected]>
>> Cc: David Howells <[email protected]>
>> Cc: Koichi Yasutake <[email protected]>
>> Cc: Thomas Gleixner <[email protected]>
>> Cc: Yong Zhang <[email protected]>
>> Cc: [email protected]
>> Signed-off-by: Srivatsa S. Bhat <[email protected]>
>> ---
>>
>> arch/mn10300/kernel/smp.c | 32 +++++++++++---------------------
>> 1 files changed, 11 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/mn10300/kernel/smp.c b/arch/mn10300/kernel/smp.c
>> index b19e75d2..9c4e35e 100644
>> --- a/arch/mn10300/kernel/smp.c
>> +++ b/arch/mn10300/kernel/smp.c
>> @@ -25,6 +25,7 @@
>> #include <linux/profile.h>
>> #include <linux/smp.h>
>> #include <linux/cpu.h>
>> +#include <linux/smpboot.h>
>> #include <asm/tlbflush.h>
>> #include <asm/bitops.h>
>> #include <asm/processor.h>
>> @@ -100,7 +101,6 @@ cpumask_t cpu_initialized __initdata = CPU_MASK_NONE;
>> static int do_boot_cpu(int);
>> static void smp_show_cpu_info(int cpu_id);
>> static void smp_callin(void);
>> -static void smp_online(void);
>> static void smp_store_cpu_info(int);
>> static void smp_cpu_init(void);
>> static void smp_tune_scheduling(void);
>> @@ -607,7 +607,7 @@ static void __init smp_cpu_init(void)
>> mn10300_ipi_shutdown(SMP_BOOT_IRQ);
>>
>> /* Set up the non-maskable call function IPI */
>> - flags = arch_local_cli_save();
>> + flags = arch_local_save_flags();
>> GxICR(CALL_FUNCTION_NMI_IPI) = GxICR_NMI | GxICR_ENABLE | GxICR_DETECT;
>> tmp16 = GxICR(CALL_FUNCTION_NMI_IPI);
>> arch_local_irq_restore(flags);
>> @@ -655,20 +655,25 @@ void smp_prepare_cpu_init(void)
>> */
>> int __init start_secondary(void *unused)
>> {
>> + smpboot_start_secondary(unused);
>> + return 0;
>> +}
>> +
>> +void __cpuinit __cpu_pre_starting(void *unused)
>> +{
>> smp_cpu_init();
>> smp_callin();
>> while (!cpumask_test_cpu(smp_processor_id(), &smp_commenced_mask))
>> cpu_relax();
>>
>> local_flush_tlb();
>> - preempt_disable();
>> - smp_online();
>> +}
>>
>> +void __cpuinit __cpu_post_online(void *unused)
>> +{
>> #ifdef CONFIG_GENERIC_CLOCKEVENTS
>> init_clockevents();
>> #endif
>> - cpu_idle();
>> - return 0;
>> }
>>
>> /**
>> @@ -865,21 +870,6 @@ static void __init smp_callin(void)
>> cpumask_set_cpu(cpu, &cpu_callin_map);
>> }
>>
>> -/**
>> - * smp_online - Set cpu_online_mask
>> - */
>> -static void __init smp_online(void)
>> -{
>> - int cpu;
>> -
>> - cpu = smp_processor_id();
>> -
>> - notify_cpu_starting(cpu);
>> -
>> - set_cpu_online(cpu, true);
>> -
>> - local_irq_enable();
>> -}
>>
>> /**
>> * smp_cpus_done -
>>

2012-06-03 11:53:59

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 23/27] cris, smpboot: Use generic SMP booting infrastructure

On 06/03/2012 02:11 PM, Yong Zhang wrote:

> On Fri, Jun 01, 2012 at 02:45:40PM +0530, Srivatsa S. Bhat wrote:
>> Convert cris to use the generic framework to boot secondary CPUs.
>>
>> Cc: Mikael Starvik <[email protected]>
>> Cc: Jesper Nilsson <[email protected]>
>> Cc: Thomas Gleixner <[email protected]>
>> Cc: Andrew Morton <[email protected]>
>> Cc: Eric Dumazet <[email protected]>
>> Cc: Mike Frysinger <[email protected]>
>> Cc: [email protected]
>> Signed-off-by: Srivatsa S. Bhat <[email protected]>
>> ---
>>
>> arch/cris/arch-v32/kernel/smp.c | 14 ++++++--------
>> 1 files changed, 6 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/cris/arch-v32/kernel/smp.c b/arch/cris/arch-v32/kernel/smp.c
>> index ebe2cb3..357114a 100644
>> --- a/arch/cris/arch-v32/kernel/smp.c
>> +++ b/arch/cris/arch-v32/kernel/smp.c
>> @@ -17,6 +17,7 @@
>> #include <linux/cpumask.h>
>> #include <linux/interrupt.h>
>> #include <linux/module.h>
>> +#include <linux/smpboot.h>
>>
>> #define IPI_SCHEDULE 1
>> #define IPI_CALL 2
>> @@ -145,8 +146,11 @@ smp_boot_one_cpu(int cpuid, struct task_struct idle)
>> * specific stuff such as the local timer and the MMU. */
>> void __init smp_callin(void)
>> {
>> - extern void cpu_idle(void);
>> + smpboot_start_secondary(NULL);
>> +}
>>
>> +void __cpuinit __cpu_pre_starting(void *unused)
>> +{
>> int cpu = cpu_now_booting;
>> reg_intr_vect_rw_mask vect_mask = {0};
>>
>> @@ -161,16 +165,10 @@ void __init smp_callin(void)
>> /* Setup local timer. */
>> cris_timer_init();
>>
>> - /* Enable IRQ and idle */
>> + /* Enable IRQ */
>
> I don't know cris; is the comment right?
>
>> REG_WR(intr_vect, irq_regs[cpu], rw_mask, vect_mask);
>> crisv32_unmask_irq(IPI_INTR_VECT);
>> crisv32_unmask_irq(TIMER0_INTR_VECT);
>> - preempt_disable();
>> - notify_cpu_starting(cpu);
>> - local_irq_enable();
>> -
>> - set_cpu_online(cpu, true);
>> - cpu_idle();
>
> Before your patch, the comment seems to end here.
>


Good catch! I will remove that comment then, keeping the code in the
patch untouched.

Regards,
Srivatsa S. Bhat

>> }
>>
>> /* Stop execution on this CPU.*/
>

2012-06-03 12:06:53

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/03/2012 05:09 PM, Russell King - ARM Linux wrote:

> On Sun, Jun 03, 2012 at 05:03:50PM +0530, Srivatsa S. Bhat wrote:
>> That's a good point! But unfortunately we can't do that just yet.
>> Because, some architectures have explicit comments that say that
>> irqs must be enabled at a certain point in time, or have something
>> special than just a local_irq_enable(), and hence fall under the
>> __cpu_post_online() function when converted to this model.
>>
>> Examples: ARM (patch 26) and ia64 (patch 15)
>>
>> Unless the maintainers give a go-ahead to change them, I don't
>> think it would be safe.. (I have added the Notes section to each
>> patch to get the attention of the maintainers to such issues).
>
> I have no intention of touching ARMs SMP bringup any more than is
> absolutely necessary - this code is extremely fragile, and it's taken
> a long time to get it to where it presently is, where most people are
> happy with it.
>

> Especially problematical is stuff like where we enable interrupts in

> relation to other activities.
>
> It's probably best to describe the code not as "mostly bug free" but
> as "causes the least pain" because I don't think there is a solution
> which satisfies all the constraints placed upon this code path by
> the various parts of the kernel.
>


Thanks for your comments, Russell. I understand your concerns. So, the
ARM patch makes no functional change and retains how and where the
interrupts are enabled. And from what you said above about the bringup
being fragile, I gather that even in the future you wouldn't want any
functional changes to ARM just to make it fit into some generic model.
Thanks for the clarification.

However, what are your thoughts on the existing ARM patch (patch 26)
(which doesn't cause any functional changes)? That shouldn't pose any
problems right?

Regards,
Srivatsa S. Bhat

2012-06-03 21:17:57

by Sam Ravnborg

[permalink] [raw]
Subject: [PATCH] sparc32: refactor smp boot

>From 531d9c538fc60c15363890768ea416897853d6af Mon Sep 17 00:00:00 2001
From: Sam Ravnborg <[email protected]>
Date: Sun, 3 Jun 2012 23:08:18 +0200
Subject: [PATCH] sparc32: refactor smp boot

Introduce a common smp_callin() function to call
from trampoline_32.S.
Add platform specific functions to handle the
platform details.

This is in preparation for a patch that will
unify the smp boot stuff for all architectures.
sparc32 was significantly different to warrant
this patch in preparation.

Signed-off-by: Sam Ravnborg <[email protected]>
---

Hi Srivatsa.

This should address all your comments (thanks!).
I assume you will include this patch in your series.

@davem: please ack this patch (if it is OK that is)
Note: It has not seen much testing.
But I have tried to review it carefully.

Sam

arch/sparc/kernel/kernel.h | 12 +++++
arch/sparc/kernel/leon_smp.c | 33 ++++----------
arch/sparc/kernel/smp_32.c | 86 +++++++++++++++++++++++++++++++++++++
arch/sparc/kernel/sun4d_smp.c | 29 ++++---------
arch/sparc/kernel/sun4m_smp.c | 33 ++++----------
arch/sparc/kernel/trampoline_32.S | 17 +++----
6 files changed, 132 insertions(+), 78 deletions(-)

diff --git a/arch/sparc/kernel/kernel.h b/arch/sparc/kernel/kernel.h
index 291bb5d..a702d9a 100644
--- a/arch/sparc/kernel/kernel.h
+++ b/arch/sparc/kernel/kernel.h
@@ -48,6 +48,10 @@ extern void sun4m_init_IRQ(void);
extern void sun4m_unmask_profile_irq(void);
extern void sun4m_clear_profile_irq(int cpu);

+/* sun4m_smp.c */
+void sun4m_cpu_pre_starting(void *arg);
+void sun4m_cpu_pre_online(void *arg);
+
/* sun4d_irq.c */
extern spinlock_t sun4d_imsk_lock;

@@ -60,6 +64,14 @@ extern int show_sun4d_interrupts(struct seq_file *, void *);
extern void sun4d_distribute_irqs(void);
extern void sun4d_free_irq(unsigned int irq, void *dev_id);

+/* sun4d_smp.c */
+void sun4d_cpu_pre_starting(void *arg);
+void sun4d_cpu_pre_online(void *arg);
+
+/* leon_smp.c */
+void leon_cpu_pre_starting(void *arg);
+void leon_cpu_pre_online(void *arg);
+
/* head_32.S */
extern unsigned int t_nmi[];
extern unsigned int linux_trap_ipi15_sun4d[];
diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index 0f3fb6d..9b40c9c 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -69,31 +69,19 @@ static inline unsigned long do_swap(volatile unsigned long *ptr,
return val;
}

-void __cpuinit leon_callin(void)
+void __cpuinit leon_cpu_pre_starting(void *arg)
{
- int cpuid = hard_smp_processor_id();
-
- local_ops->cache_all();
- local_ops->tlb_all();
leon_configure_cache_smp();
+}

- notify_cpu_starting(cpuid);
-
- /* Get our local ticker going. */
- register_percpu_ce(cpuid);
-
- calibrate_delay();
- smp_store_cpu_info(cpuid);
-
- local_ops->cache_all();
- local_ops->tlb_all();
+void __cpuinit leon_cpu_pre_online(void *arg)
+{
+ int cpuid = hard_smp_processor_id();

- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
- * Allow master to continue.
+ /* Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
do_swap(&cpu_callin_map[cpuid], 1);

@@ -110,9 +98,6 @@ void __cpuinit leon_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
index 79db45e..9e7e6d7 100644
--- a/arch/sparc/kernel/smp_32.c
+++ b/arch/sparc/kernel/smp_32.c
@@ -20,6 +20,7 @@
#include <linux/seq_file.h>
#include <linux/cache.h>
#include <linux/delay.h>
+#include <linux/cpu.h>

#include <asm/ptrace.h>
#include <linux/atomic.h>
@@ -32,8 +33,10 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/cpudata.h>
+#include <asm/timer.h>
#include <asm/leon.h>

+#include "kernel.h"
#include "irq.h"

volatile unsigned long cpu_callin_map[NR_CPUS] __cpuinitdata = {0,};
@@ -294,6 +297,89 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *tidle)
return ret;
}

+void __cpuinit arch_cpu_pre_starting(void *arg)
+{
+ local_ops->cache_all();
+ local_ops->tlb_all();
+
+ switch(sparc_cpu_model) {
+ case sun4m:
+ sun4m_cpu_pre_starting(arg);
+ break;
+ case sun4d:
+ sun4d_cpu_pre_starting(arg);
+ break;
+ case sparc_leon:
+ leon_cpu_pre_starting(arg);
+ break;
+ default:
+ BUG();
+ }
+}
+
+void __cpuinit arch_cpu_pre_online(void *arg)
+{
+ unsigned int cpuid = hard_smp_processor_id();
+
+ register_percpu_ce(cpuid);
+
+ calibrate_delay();
+ smp_store_cpu_info(cpuid);
+
+ local_ops->cache_all();
+ local_ops->tlb_all();
+
+ switch(sparc_cpu_model) {
+ case sun4m:
+ sun4m_cpu_pre_online(arg);
+ break;
+ case sun4d:
+ sun4d_cpu_pre_online(arg);
+ break;
+ case sparc_leon:
+ leon_cpu_pre_online(arg);
+ break;
+ default:
+ BUG();
+ }
+}
+
+void __cpuinit sparc_start_secondary(void *arg)
+{
+ unsigned int cpu;
+
+ /*
+ * SMP booting is extremely fragile in some architectures. So run
+ * the cpu initialization code first before anything else.
+ */
+ arch_cpu_pre_starting(arg);
+
+ preempt_disable();
+ cpu = smp_processor_id();
+
+ /* Invoke the CPU_STARTING notifier callbacks */
+ notify_cpu_starting(cpu);
+
+ arch_cpu_pre_online(arg);
+
+ /* Set the CPU in the cpu_online_mask */
+ set_cpu_online(cpu, true);
+
+ /* Enable local interrupts now */
+ local_irq_enable();
+
+ wmb();
+ cpu_idle();
+
+ /* We should never reach here! */
+ BUG();
+}
+
+void __cpuinit smp_callin(void)
+{
+ sparc_start_secondary(NULL);
+}
+
void smp_bogo(struct seq_file *m)
{
int i;
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index ddaea31..c9eb82f 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -50,10 +50,9 @@ static inline void show_leds(int cpuid)
"i" (ASI_M_CTL));
}

-void __cpuinit smp4d_callin(void)
+void __cpuinit sun4d_cpu_pre_starting(void *arg)
{
int cpuid = hard_smp_processor_id();
- unsigned long flags;

/* Show we are alive */
cpu_leds[cpuid] = 0x6;
@@ -61,26 +60,20 @@ void __cpuinit smp4d_callin(void)

/* Enable level15 interrupt, disable level14 interrupt for now */
cc_set_imsk((cc_get_imsk() & ~0x8000) | 0x4000);
+}

- local_ops->cache_all();
- local_ops->tlb_all();
+void __cpuinit sun4d_cpu_pre_online(void *arg)
+{
+ unsigned long flags;
+ int cpuid;

- notify_cpu_starting(cpuid);
- /*
- * Unblock the master CPU _only_ when the scheduler state
+ cpuid = hard_smp_processor_id();
+
+ /* Unblock the master CPU _only_ when the scheduler state
* of all secondary CPUs will be up-to-date, so after
* the SMP initialization the master will be just allowed
* to call the scheduler code.
*/
- /* Get our local ticker going. */
- register_percpu_ce(cpuid);
-
- calibrate_delay();
- smp_store_cpu_info(cpuid);
- local_ops->cache_all();
- local_ops->tlb_all();
-
- /* Allow master to continue. */
sun4d_swap((unsigned long *)&cpu_callin_map[cpuid], 1);
local_ops->cache_all();
local_ops->tlb_all();
@@ -106,16 +99,12 @@ void __cpuinit smp4d_callin(void)
local_ops->cache_all();
local_ops->tlb_all();

- local_irq_enable(); /* We don't allow PIL 14 yet */
-
while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
barrier();

spin_lock_irqsave(&sun4d_imsk_lock, flags);
cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
- set_cpu_online(cpuid, true);
-
}

/*
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 128af73..8a65f15 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -34,30 +34,19 @@ swap_ulong(volatile unsigned long *ptr, unsigned long val)
return val;
}

-void __cpuinit smp4m_callin(void)
+void __cpuinit sun4m_cpu_pre_starting(void *arg)
{
- int cpuid = hard_smp_processor_id();
-
- local_ops->cache_all();
- local_ops->tlb_all();
-
- notify_cpu_starting(cpuid);
-
- register_percpu_ce(cpuid);
-
- calibrate_delay();
- smp_store_cpu_info(cpuid);
+}

- local_ops->cache_all();
- local_ops->tlb_all();
+void __cpuinit sun4m_cpu_pre_online(void *arg)
+{
+ int cpuid = hard_smp_processor_id();

- /*
- * Unblock the master CPU _only_ when the scheduler state
- * of all secondary CPUs will be up-to-date, so after
- * the SMP initialization the master will be just allowed
- * to call the scheduler code.
+ /* Allow master to continue. The master will then give us the
+ * go-ahead by setting the smp_commenced_mask and will wait without
+ * timeouts until our setup is completed fully (signified by
+ * our bit being set in the cpu_online_mask).
*/
- /* Allow master to continue. */
swap_ulong(&cpu_callin_map[cpuid], 1);

/* XXX: What's up with all the flushes? */
@@ -75,10 +64,6 @@ void __cpuinit smp4m_callin(void)

while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
-
- local_irq_enable();
-
- set_cpu_online(cpuid, true);
}

/*
diff --git a/arch/sparc/kernel/trampoline_32.S b/arch/sparc/kernel/trampoline_32.S
index af27aca..6cdb08c 100644
--- a/arch/sparc/kernel/trampoline_32.S
+++ b/arch/sparc/kernel/trampoline_32.S
@@ -79,18 +79,15 @@ cpu3_startup:
nop

/* Start this processor. */
- call smp4m_callin
+ call smp_callin
nop

- b,a smp_do_cpu_idle
+ b,a smp_panic

.text
.align 4

-smp_do_cpu_idle:
- call cpu_idle
- mov 0, %o0
-
+smp_panic:
call cpu_panic
nop

@@ -144,10 +141,10 @@ sun4d_cpu_startup:
nop

/* Start this processor. */
- call smp4d_callin
+ call smp_callin
nop

- b,a smp_do_cpu_idle
+ b,a smp_panic

__CPUINIT
.align 4
@@ -201,7 +198,7 @@ leon_smp_cpu_startup:
nop

/* Start this processor. */
- call leon_callin
+ call smp_callin
nop

- b,a smp_do_cpu_idle
+ b,a smp_panic
--
1.6.0.6

2012-06-04 01:04:55

by David Miller

[permalink] [raw]
Subject: Re: [PATCH] sparc32: refactor smp boot

From: Sam Ravnborg <[email protected]>
Date: Sun, 3 Jun 2012 23:17:52 +0200

> From 531d9c538fc60c15363890768ea416897853d6af Mon Sep 17 00:00:00 2001
> From: Sam Ravnborg <[email protected]>
> Date: Sun, 3 Jun 2012 23:08:18 +0200
> Subject: [PATCH] sparc32: refactor smp boot
>
> Introduce a common smp_callin() function to call
> from trampoline_32.S.
> Add platform specific functions to handle the
> platform details.
>
> This is in preparation for a patch that will
> unify the smp boot stuff for all architectures.
> sparc32 was significantly different to warrant
> this patch in preparation.
>
> Signed-off-by: Sam Ravnborg <[email protected]>

Acked-by: David S. Miller <[email protected]>

2012-06-04 02:17:52

by Yong Zhang

[permalink] [raw]
Subject: Re: [PATCH 10/27] mips, smpboot: Use generic SMP booting infrastructure

On Sun, Jun 03, 2012 at 05:18:16PM +0530, Srivatsa S. Bhat wrote:
> Yes, I noticed that while writing the patch. But then, I thought of cleaning
> up the hundreds of callin/callout/commenced maps in various architectures and
> bringing them out into core code in a later series, and clean this up at that time..
> I didn't want to do too many invasive changes all at one-shot.

OK.

> But I guess for this particular case of mips, I can get rid of the wait for
> cpu_callin_map in this patchset itself. I'll update this patch with that change.

:-)

> So what is the status of those patches? Has anyone picked them up?
>
> I could rebase this patch on top of yours, or better yet, if your
> patches haven't been picked up yet, I could include them in this
> patchset itself to avoid too many dependencies on external patches.
>
> What do you say?

Ralf has pushed it to http://git.linux-mips.org/?p=ralf/linux.git;a=summary

I guess you can see it soon.

Thanks,
Yong

2012-06-04 02:19:14

by Yong Zhang

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Sun, Jun 03, 2012 at 04:52:23PM +0530, Srivatsa S. Bhat wrote:
> In short, to avoid breaking build on x86.
> We wanted to follow the x86 convention of having static inline functions in
> arch/x86/include/asm/smp.h and use the smp_ops structure to route the calls
> to x86 or xen as appropriate (see patch 4 in this series).
>
> But __weak definitions interfere with that and break the build. Hence, we
> followed what Linus suggested doing in a similar context:
> https://lkml.org/lkml/2008/7/26/187

Good to know that, thanks for your explanation.

Yong

2012-06-04 06:49:24

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH] sparc32: refactor smp boot

On 06/04/2012 02:47 AM, Sam Ravnborg wrote:

> From 531d9c538fc60c15363890768ea416897853d6af Mon Sep 17 00:00:00 2001
> From: Sam Ravnborg <[email protected]>
> Date: Sun, 3 Jun 2012 23:08:18 +0200
> Subject: [PATCH] sparc32: refactor smp boot
>
> Introduce a common smp_callin() function to call
> from trampoline_32.S.
> Add platform specific functions to handle the
> platform details.
>
> This is in preparation for a patch that will
> unify the smp boot stuff for all architectures.
> sparc32 was significantly different to warrant
> this patch in preparation.
>
> Signed-off-by: Sam Ravnborg <[email protected]>
> ---
>
> Hi Srivatsa.
>
> This should address all your comemnts (thanks!).
> I assume you will include this patch in your serie.
>


Of course, thanks for your work!

As soon as the Xen problem gets sorted out (patch 5), I will send out
a v2 including your patch and addressing all the comments received
so far.

> diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c

> index ddaea31..c9eb82f 100644
> --- a/arch/sparc/kernel/sun4d_smp.c
> +++ b/arch/sparc/kernel/sun4d_smp.c
> @@ -50,10 +50,9 @@ static inline void show_leds(int cpuid)
> "i" (ASI_M_CTL));
> }
>
> -void __cpuinit smp4d_callin(void)
> +void __cpuinit sun4d_cpu_pre_starting(void *arg)
> {
> int cpuid = hard_smp_processor_id();
> - unsigned long flags;
>
> /* Show we are alive */
> cpu_leds[cpuid] = 0x6;
> @@ -61,26 +60,20 @@ void __cpuinit smp4d_callin(void)
>
> /* Enable level15 interrupt, disable level14 interrupt for now */
> cc_set_imsk((cc_get_imsk() & ~0x8000) | 0x4000);
> +}
>
> - local_ops->cache_all();
> - local_ops->tlb_all();
> +void __cpuinit sun4d_cpu_pre_online(void *arg)
> +{
> + unsigned long flags;
> + int cpuid;
>
> - notify_cpu_starting(cpuid);
> - /*
> - * Unblock the master CPU _only_ when the scheduler state
> + cpuid = hard_smp_processor_id();
> +
> + /* Unblock the master CPU _only_ when the scheduler state
> * of all secondary CPUs will be up-to-date, so after
> * the SMP initialization the master will be just allowed
> * to call the scheduler code.
> */


Looks like you forgot to update this comment.
It's ok, I'll update your patch and include it in the series.

Regards,
Srivatsa S. Bhat

> - /* Get our local ticker going. */
> - register_percpu_ce(cpuid);
> -
> - calibrate_delay();
> - smp_store_cpu_info(cpuid);
> - local_ops->cache_all();
> - local_ops->tlb_all();
> -
> - /* Allow master to continue. */
> sun4d_swap((unsigned long *)&cpu_callin_map[cpuid], 1);
> local_ops->cache_all();
> local_ops->tlb_all();
> @@ -106,16 +99,12 @@ void __cpuinit smp4d_callin(void)
> local_ops->cache_all();
> local_ops->tlb_all();
>
> - local_irq_enable(); /* We don't allow PIL 14 yet */
> -
> while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
> barrier();
>
> spin_lock_irqsave(&sun4d_imsk_lock, flags);
> cc_set_imsk(cc_get_imsk() & ~0x4000); /* Allow PIL 14 as well */
> spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
> - set_cpu_online(cpuid, true);
> -
> }
>

2012-06-04 10:33:13

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Sat, 2 Jun 2012, Srivatsa S. Bhat wrote:

> On 06/01/2012 10:21 PM, Sam Ravnborg wrote:
>
> >> +/* Implement the following functions in your architecture, as appropriate. */
> >> +
> >> +/**
> >> + * __cpu_pre_starting()
> >> + *
> >> + * Implement whatever you need to do before the CPU_STARTING notifiers are
> >> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
> >> + * coming up. So that cpu better be prepared! IOW, implement all the early
> >> + * boot/init code for the cpu here. And do NOT enable interrupts.
> >> + */
> >> +#ifndef __cpu_pre_starting
> >> +void __weak __cpu_pre_starting(void *arg) {}
> >> +#endif

This wants to be a prototype w/o the __weak prefix and the #ifndef
magic and the weak default implementation should be in kernel/smpboot.c

> > __What __is __the __purpose __of __all __these __underscaores __used
> > __as __function __prefix? __It __does __not __help __readability.
>
> >
>
>
> We had used "__" as the function prefix to emphasize that these functions are
> implemented/overridden in the depths of architecture-specific code.
>
> But now that you mention it, I see that we don't really have something like an
> arch-independent variant without the "__" prefix. So adding the "__" prefix
> might not be really necessary, since there is nothing to distinguish name-wise.
>
> However, I do want to emphasize that this isn't generic code. So how about
> an "arch_" prefix instead? Something like:
> arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online()?

Yes, please.

Otherwise, thanks for that work! From the first glance, it's not
colliding much with the changes I have in the pipeline, but I will
have a closer look.

Thanks,

tglx

2012-06-04 13:08:59

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/04/2012 04:02 PM, Thomas Gleixner wrote:

> On Sat, 2 Jun 2012, Srivatsa S. Bhat wrote:
>
>> On 06/01/2012 10:21 PM, Sam Ravnborg wrote:
>>
>>>> +/* Implement the following functions in your architecture, as appropriate. */
>>>> +
>>>> +/**
>>>> + * __cpu_pre_starting()
>>>> + *
>>>> + * Implement whatever you need to do before the CPU_STARTING notifiers are
>>>> + * invoked. Note that the CPU_STARTING callbacks run *on* the cpu that is
>>>> + * coming up. So that cpu better be prepared! IOW, implement all the early
>>>> + * boot/init code for the cpu here. And do NOT enable interrupts.
>>>> + */
>>>> +#ifndef __cpu_pre_starting
>>>> +void __weak __cpu_pre_starting(void *arg) {}
>>>> +#endif
>
> This wants to be a prototype w/o the __weak prefix and the #ifndef
> magic and the weak default implementation should be in kernel/smpboot.c
>


I can add the prototype w/o the __weak prefix and the #ifndef magic in
include/linux/smpboot.h (which I will, and include it in v2).

However, I can't get rid of the #ifndef magic in kernel/smpboot.c because
it will cause build failures on x86.

I addressed this same issue in another email:
https://lkml.org/lkml/2012/6/3/33

>>> __What __is __the __purpose __of __all __these __underscores __used
>>> __as __function __prefix? __It __does __not __help __readability.
>>
>>>
>>
>>
>> We had used "__" as the function prefix to emphasize that these functions are
>> implemented/overridden in the depths of architecture-specific code.
>>
>> But now that you mention it, I see that we don't really have something like an
>> arch-independent variant without the "__" prefix. So adding the "__" prefix
>> might not be really necessary, since there is nothing to distinguish name-wise.
>>
>> However, I do want to emphasize that this isn't generic code. So how about
>> an "arch_" prefix instead? Something like:
>> arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online()?
>
> Yes, please.


Sure, queued up for v2. (Atm, figuring out how to deal with xen (patch 5). Once
that gets done, will post a v2.)

>
> Otherwise, thanks for that work!


We are glad that it helps :-)

> From the first glance, it's not
> colliding much with the changes I have in the pipeline, but I will
> have a closer look.
>


Great! Thanks a lot for your time, Thomas!

Regards,
Srivatsa S. Bhat

2012-06-04 13:19:11

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Mon, 4 Jun 2012, Srivatsa S. Bhat wrote:
> On 06/04/2012 04:02 PM, Thomas Gleixner wrote:
> I can add the prototype w/o the __weak prefix and the #ifndef magic in
> include/linux/smpboot.h (which I will, and include it in v2).
>
> However, I can't get rid of the #ifndef magic in kernel/smpboot.c because
> it will cause build failures on x86.
>
> I addressed this same issue in another email:
> https://lkml.org/lkml/2012/6/3/33

> In short, to avoid breaking build on x86.
> We wanted to follow the x86 convention of having static inline functions in
> arch/x86/include/asm/smp.h and use the smp_ops structure to route the calls
> to x86 or xen as appropriate (see patch 4 in this series).

Oh no. That's not really a good argument.

There is no reason why this _must_ be an inline function on x86. It
can be a proper function in arch/x86/kernel/smpboot.c as well.

The alternative solution is to generalize the smp_ops approach and get
rid of the weak implementations completely.

Thanks,

tglx

2012-06-04 14:30:17

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCH 07/27] x86, smpboot: Use generic SMP booting infrastructure

On Fri, 1 Jun 2012, Srivatsa S. Bhat wrote:
> From: Nikunj A. Dadhania <[email protected]>
>
> Convert x86 to use the generic framework to boot secondary CPUs.
>
> Notes:
> 1. x86 manipulates the cpu_online_mask under vector_lock. So, while
> converting over to the generic smp booting code, override arch_vector_lock()
> and arch_vector_unlock() to lock_vector_lock() and unlock_vector_lock()
> respectively.
>
> 2. In smp_callin(), we allow the master to continue as soon as the physical
> booting of the secondary processor is done. That is, we don't wait till the
> CPU_STARTING notifications are sent.
>
> Implications:
> - This does not alter the order in which the notifications are sent (i.e.,
> still CPU_STARTING is followed by CPU_ONLINE) because the master waits till
> the new cpu is set in the cpu_online_mask before returning to generic code.
>
> - This approach is better because of 2 reasons:
> a. It makes more sense: the master has a timeout for waiting on the
> cpu_callin_map - which means we should report back as soon as possible.
> The whole idea of having a timeout is to estimate the maximum time that
> could be taken for physical booting. This approach separates out the
> physical booting vs running CPU hotplug callbacks and reports back to
> the master as soon as physical booting is done.

How do you deal with the problem that the master does not come back in
time? There is a timeout on the booting side as well. I haven't found
out why this timeout exists at all, but we need to take care of that
and there is a patch on LKML which removes the panic as this can
happen on virt. I really wonder whether the hardware for which this
timeout stuff was introduced still exists or whether we can simply get
rid of it completely.

Also the whole callin/callout mask business wants to be in the generic
code. It can be replaced completely by cpu_state, at least that's what
I was aiming for. There is no need for several variables tracking the
same thing in different ways.

Thanks,

tglx

2012-06-04 16:55:10

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/04/2012 06:48 PM, Thomas Gleixner wrote:

> On Mon, 4 Jun 2012, Srivatsa S. Bhat wrote:
>> On 06/04/2012 04:02 PM, Thomas Gleixner wrote:
>> I can add the prototype w/o the __weak prefix and the #ifndef magic in
>> include/linux/smpboot.h (which I will, and include it in v2).
>>
>> However, I can't get rid of the #ifndef magic in kernel/smpboot.c because
>> it will cause build failures on x86.
>>
>> I addressed this same issue in another email:
>> https://lkml.org/lkml/2012/6/3/33
>
>> In short, to avoid breaking build on x86.
>> We wanted to follow the x86 convention of having static inline functions in
>> arch/x86/include/asm/smp.h and use the smp_ops structure to route the calls
>> to x86 or xen as appropriate (see patch 4 in this series).
>
> Oh no. That's not really a good argument.
>
> There is no reason why this _must_ be an inline function on x86. It
> can be a proper function in arch/x86/kernel/smpboot.c as well.
>


Ok, I'll adopt this approach.

> The alternative solution is to generalize the smp_ops approach and get
> rid of the weak implementations completely.
>


Ok, but this would mean changes to all architectures, which could be avoided.

One reason why x86 uses the smp_ops structure is that it selects between the
native x86 and Xen implementation routines at run-time. Such a requirement
doesn't exist on other architectures, from what I saw (except sparc32).

So I'll stick with the former approach of adding a proper function for x86, and
retaining the default weak implementations in kernel/smpboot.c.

Regards,
Srivatsa S. Bhat

2012-06-04 17:01:32

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 07/27] x86, smpboot: Use generic SMP booting infrastructure

On 06/04/2012 07:59 PM, Thomas Gleixner wrote:

> On Fri, 1 Jun 2012, Srivatsa S. Bhat wrote:
>> From: Nikunj A. Dadhania <[email protected]>
>>
>> Convert x86 to use the generic framework to boot secondary CPUs.
>>
>> Notes:
>> 1. x86 manipulates the cpu_online_mask under vector_lock. So, while
>> converting over to the generic smp booting code, override arch_vector_lock()
>> and arch_vector_unlock() to lock_vector_lock() and unlock_vector_lock()
>> respectively.
>>
>> 2. In smp_callin(), we allow the master to continue as soon as the physical
>> booting of the secondary processor is done. That is, we don't wait till the
>> CPU_STARTING notifications are sent.
>>
>> Implications:
>> - This does not alter the order in which the notifications are sent (i.e.,
>> still CPU_STARTING is followed by CPU_ONLINE) because the master waits till
>> the new cpu is set in the cpu_online_mask before returning to generic code.
>>
>> - This approach is better because of 2 reasons:
>> a. It makes more sense: the master has a timeout for waiting on the
>> cpu_callin_map - which means we should report back as soon as possible.
>> The whole idea of having a timeout is to estimate the maximum time that
>> could be taken for physical booting. This approach separates out the
>> physical booting vs running CPU hotplug callbacks and reports back to
>> the master as soon as physical booting is done.
>
> How do you deal with the problem that the master does not come back in
> time?


Sorry, I didn't quite get your point... This doesn't completely solve the timeout
problem, but makes the situation a little bit better, that's all.

> There is a timeout on the booting side as well.


Yes, the wait for the cpu_callout_mask.

> I haven't found
> out why this timeout exists at all, but we need to take care of that
> and there is a patch on LKML which removes the panic as this can
> happen on virt.


Oh! Ok..

> I really wonder whether the hardware for which this
> timeout stuff was introduced still exists or whether we can simply get
> rid of it completely.
>


Sounds good :-)


> Also the whole callin/callout mask business wants to be in the generic
> code. It can be replaced completely by cpu_state, at least that's what
> I was aiming for. There is no need for several variables tracking the
> same thing in different ways.
>


Exactly! I was thinking of consolidating the callin/callout/commenced/whatever
masks in various architectures in a future patchset. I haven't touched them
in this one. And using cpu_state for that is a very good point. I'll keep
that in mind, thanks!



Regards,
Srivatsa S. Bhat

2012-06-05 05:12:09

by Yong Zhang

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On Mon, Jun 04, 2012 at 06:37:39PM +0530, Srivatsa S. Bhat wrote:
> On 06/04/2012 04:02 PM, Thomas Gleixner wrote:
> >> However, I do want to emphasize that this isn't generic code. So how about
> >> an "arch_" prefix instead? Something like:
> >> arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online()?
> >
> > Yes, please.
>
>
> Sure, queued up for v2. (Atm, figuring out how to deal with xen (patch 5). Once
> that gets done, will post a v2.)

Since there is a dependency on the ipi_call_lock cleanup patches
http://marc.info/?l=linux-kernel&m=133827595625509&w=2

please feel free to include those patches in yours.

And it would be better if Thomas could pick them up :)

Thanks,
Yong

>
> >
> > Otherwise, thanks for that work!
>
>
> We are glad that it helps :-)
>
> > From the first glance, it's not
> > colliding much with the changes I have in the pipeline, but I will
> > have a closer look.
> >
>
>
> Great! Thanks a lot for your time, Thomas!
>
> Regards,
> Srivatsa S. Bhat
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

--
Only stand for myself

2012-06-05 06:53:05

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [PATCH 01/27] smpboot: Provide a generic method to boot secondary processors

On 06/05/2012 10:41 AM, Yong Zhang wrote:

> On Mon, Jun 04, 2012 at 06:37:39PM +0530, Srivatsa S. Bhat wrote:
>> On 06/04/2012 04:02 PM, Thomas Gleixner wrote:
>>>> However, I do want to emphasize that this isn't generic code. So how about
>>>> an "arch_" prefix instead? Something like:
>>>> arch_cpu_pre_starting(), arch_cpu_pre_online() and arch_cpu_post_online()?
>>>
>>> Yes, please.
>>
>>
>> Sure, queued up for v2. (Atm, figuring out how to deal with xen (patch 5). Once
>> that gets done, will post a v2.)
>
> Since there is a dependency on the ipi_call_lock cleanup patches
> http://marc.info/?l=linux-kernel&m=133827595625509&w=2
>
> Please feel free to include those patches in yours.
>


Sure, if that would help Thomas pick them up easily.


> And it would be better if Thomas could pick them up :)
>


Yes :)

Regards,
Srivatsa S. Bhat

>
>>
>>>
>>> Otherwise, thanks for that work!
>>
>>
>> We are glad that it helps :-)
>>
>>> From the first glance, it's not
>>> colliding much with the changes I have in the pipeline, but I will
>>> have a closer look.
>>>
>>
>>
>> Great! Thanks a lot for your time, Thomas!
>>
>> Regards,
>> Srivatsa S. Bhat
>>

2012-06-05 16:59:08

by Konrad Rzeszutek Wilk

[permalink] [raw]
Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

On Sat, Jun 02, 2012 at 11:36:00PM +0530, Srivatsa S. Bhat wrote:
> On 06/01/2012 09:06 PM, Jan Beulich wrote:
>
> >>>> On 01.06.12 at 17:13, "Srivatsa S. Bhat" <[email protected]>
> > wrote:
> >> On 06/01/2012 06:29 PM, Jan Beulich wrote:
> >>
> >>>>>> On 01.06.12 at 11:11, "Srivatsa S. Bhat" <[email protected]>
> >>> wrote:
> >>>> xen_play_dead calls cpu_bringup() which looks weird, because xen_play_dead()
> >>>> is invoked in the cpu down path, whereas cpu_bringup() (as the name
> >>>> suggests) is useful in the cpu bringup path.
> >>>
> >>> This might not be correct - the code as it is without this change is
> >>> safe even when the vCPU gets onlined back later by an external
> >>> entity (e.g. the Xen tool stack), and it would in that case resume
> >>> at the return point of the VCPUOP_down hypercall. That might
> >>> be a heritage from the original XenoLinux tree though, and be
> >>> meaningless in pv-ops context - Jeremy, Konrad?
> >>>
> >>> Possibly it was bogus/unused even in that original tree - Keir?
> >>>
> >>
> >>
> >> Thanks for your comments Jan!
> >>
> >> In case this change is wrong, the other method I had in mind was to call
> >> cpu_bringup_and_idle() in xen_play_dead(). (Even ARM does something similar,
> >> in the sense that it runs the cpu bringup code including cpu_idle(), in the
> >> cpu offline path, namely the cpu_die() function). Would that approach work
> >> for xen as well? If yes, then we wouldn't have any issues to convert xen to
> >> generic code.
> >
> > No, that wouldn't work either afaict - the function is expected
> > to return.
> >
>
>
> Ok.. So, I would love to hear a confirmation about whether this patch (which
> removes cpu_bringup() in xen_play_dead()) will break things or it is good as is.

I think it will break - are these patches available on some git tree to test them out?

>
> If its not correct, then we can probably make __cpu_post_online() return an int,
> with the meaning:
>
> 0 => success, go ahead and call cpu_idle()
> non-zero => stop here, thanks for your services so far.. now leave the rest to me.
>
> So all other archs will return 0, Xen will return non-zero, and it will handle
> when to call cpu_idle() and when not to do so.

The call-chain (this is taken from 41bd956de3dfdc3a43708fe2e0c8096c69064a1e):

cpu_bringup_and_idle:
\- cpu_bringup
| \-[preempt_disable]
|
|- cpu_idle
\- play_dead [assuming the user offlined the VCPU]
| \
| +- (xen_play_dead)
| \- HYPERVISOR_VCPU_off [so VCPU is dead, once user
| | onlines it starts from here]
| \- cpu_bringup [preempt_disable]
|
+- preempt_enable_no_reschedule()
+- schedule()
\- preempt_enable()

Which I think is a bit different from your use-case?

>
> Might sound a bit ugly, but I don't see much other option. Suggestions are
> appreciated!
>
> Regards,
> Srivatsa S. Bhat

2012-06-05 17:37:27

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

On 06/05/2012 10:19 PM, Konrad Rzeszutek Wilk wrote:

> On Sat, Jun 02, 2012 at 11:36:00PM +0530, Srivatsa S. Bhat wrote:
>> On 06/01/2012 09:06 PM, Jan Beulich wrote:
>>
>>>>>> On 01.06.12 at 17:13, "Srivatsa S. Bhat" <[email protected]>
>>> wrote:
>>>> On 06/01/2012 06:29 PM, Jan Beulich wrote:
>>>>
>>>>>>>> On 01.06.12 at 11:11, "Srivatsa S. Bhat" <[email protected]>
>>>>> wrote:
>>>>>> xen_play_dead calls cpu_bringup() which looks weird, because xen_play_dead()
>>>>>> is invoked in the cpu down path, whereas cpu_bringup() (as the name
>>>>>> suggests) is useful in the cpu bringup path.
>>>>>
>>>>> This might not be correct - the code as it is without this change is
>>>>> safe even when the vCPU gets onlined back later by an external
>>>>> entity (e.g. the Xen tool stack), and it would in that case resume
>>>>> at the return point of the VCPUOP_down hypercall. That might
>>>>> be a heritage from the original XenoLinux tree though, and be
>>>>> meaningless in pv-ops context - Jeremy, Konrad?
>>>>>
>>>>> Possibly it was bogus/unused even in that original tree - Keir?
>>>>>
>>>>
>>>>
>>>> Thanks for your comments Jan!
>>>>
>>>> In case this change is wrong, the other method I had in mind was to call
>>>> cpu_bringup_and_idle() in xen_play_dead(). (Even ARM does something similar,
>>>> in the sense that it runs the cpu bringup code including cpu_idle(), in the
>>>> cpu offline path, namely the cpu_die() function). Would that approach work
>>>> for xen as well? If yes, then we wouldn't have any issues to convert xen to
>>>> generic code.
>>>
>>> No, that wouldn't work either afaict - the function is expected
>>> to return.
>>>
>>
>>
>> Ok.. So, I would love to hear a confirmation about whether this patch (which
>> removes cpu_bringup() in xen_play_dead()) will break things or it is good as is.
>
> I think it will break - are these patches available on some git tree to test them out?
>


Oh, sorry, I haven't hosted them on any git tree.

>>
>> If its not correct, then we can probably make __cpu_post_online() return an int,
>> with the meaning:
>>
>> 0 => success, go ahead and call cpu_idle()
>> non-zero => stop here, thanks for your services so far.. now leave the rest to me.
>>
>> So all other archs will return 0, Xen will return non-zero, and it will handle
>> when to call cpu_idle() and when not to do so.
>
> The call-chain (this is taken from 41bd956de3dfdc3a43708fe2e0c8096c69064a1e):
>
> cpu_bringup_and_idle:
> \- cpu_bringup
> | \-[preempt_disable]
> |
> |- cpu_idle
> \- play_dead [assuming the user offlined the VCPU]
> | \
> | +- (xen_play_dead)
> | \- HYPERVISOR_VCPU_off [so VCPU is dead, once user
> | | onlines it starts from here]
> | \- cpu_bringup [preempt_disable]
> |
> +- preempt_enable_no_reschedule()
> +- schedule()
> \- preempt_enable()
>
> Which I think is a bit different from your use-case?
>


Yes, when this patch is applied, the call to cpu_bringup() after HYPERVISOR_VCPU_off
gets removed. So it will look like this:

cpu_bringup_and_idle:
\- cpu_bringup
| \-[preempt_disable]
|
|- cpu_idle
\- play_dead [assuming the user offlined the VCPU]
| \
| +- (xen_play_dead)
| \- HYPERVISOR_VCPU_off [so VCPU is dead, once user
| onlines it starts from here]
|
|
+- preempt_enable_no_reschedule()
+- schedule()
\- preempt_enable()


And hence we wouldn't have the preempt count imbalance, so there is no need
for the extra preempt_enable() in xen_play_dead().

So, basically our problem is this:
The generic smp booting code calls cpu_idle() after setting the cpu in
cpu_online_mask etc, because this call to cpu_idle() is used in every
architecture. But just for xen, that too only in the cpu down path, this
call is omitted - which makes it difficult to convert xen to the generic
smp booting framework.

So I proposed 3 solutions, of which we can choose whichever that doesn't
break stuff and whichever looks the least ugly:

1. Omit the call to cpu_bringup() after HYPERVISOR_VCPU_off (which this
patch does).

2. Or, invoke cpu_bringup_and_idle() after HYPERVISOR_VCPU_off (Jan said
this might not work since we are expected to return). ARM actually does
something like this (it does the complete bringup including idle in the
cpu down path).

3. Or, just for the sake of Xen, convert the __cpu_post_online() function to
return an int - Xen will return non-zero and do the next steps itself,
deciding whether or not to call cpu_idle(). All other archs will just
return 0, and allow the generic smp booting code to continue on to
cpu_idle().

Please let me know your thoughts.

Regards,
Srivatsa S. Bhat

2012-06-05 17:41:38

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

On Sat, 2 Jun 2012, Srivatsa S. Bhat wrote:
> Ok.. So, I would love to hear a confirmation about whether this patch (which
> removes cpu_bringup() in xen_play_dead()) will break things or it is good as is.
>
> If its not correct, then we can probably make __cpu_post_online() return an int,
> with the meaning:
>
> 0 => success, go ahead and call cpu_idle()
> non-zero => stop here, thanks for your services so far.. now leave the rest to me.
>
> So all other archs will return 0, Xen will return non-zero, and it will handle
> when to call cpu_idle() and when not to do so.
>
> Might sound a bit ugly, but I don't see much other option. Suggestions are
> appreciated!

Yes, it's butt ugly.

You are tripping over the main misconception of the current hotplug
code: It's asymmetric.

So people added warts and workarounds like the xen one. What you are
proposing is another wart and workaround.

The real way to avoid it is to have the symmetric state machine in
place first and then convert everything to that instead of introducing
an intermediate state which resembles the existing state.

One of the main things we need to do to make it symmetric is to kill
the play_dead() thing in the idle loop and make idle a function which
returns on cpu_should_die().

Give me a day or two and I get you a working version of that. (Up is
functional, just down refuses to play along)

Thanks,

tglx

2012-06-05 17:49:26

by Srivatsa S. Bhat

[permalink] [raw]
Subject: Re: [Xen-devel] [PATCH 05/27] xen, cpu hotplug: Don't call cpu_bringup() in xen_play_dead()

On 06/05/2012 11:10 PM, Thomas Gleixner wrote:

> On Sat, 2 Jun 2012, Srivatsa S. Bhat wrote:
>> Ok.. So, I would love to hear a confirmation about whether this patch (which
>> removes cpu_bringup() in xen_play_dead()) will break things or it is good as is.
>>
>> If its not correct, then we can probably make __cpu_post_online() return an int,
>> with the meaning:
>>
>> 0 => success, go ahead and call cpu_idle()
>> non-zero => stop here, thanks for your services so far.. now leave the rest to me.
>>
>> So all other archs will return 0, Xen will return non-zero, and it will handle
>> when to call cpu_idle() and when not to do so.
>>
>> Might sound a bit ugly, but I don't see much other option. Suggestions are
>> appreciated!
>
> Yes, it's butt ugly.
>
> You are tripping over the main misconception of the current hotplug
> code: It's asymmetric.
>
> So people added warts and workarounds like the xen one. What you are
> proposing is another wart and workaround.
>
> The real way to avoid it is to have the symmetric state machine in
> place first and then convert everything to that instead of introducing
> an intermediate state which resembles the existing state.
>
> One of the main things we need to do to make it symmetric is to kill
> the play_dead() thing in the idle loop and make idle a function which
> returns on cpu_should_die().
>
> Give me a day or two and I get you a working version of that. (Up is
> functional, just down refuses to play along)
>


Oh great! So, then I'll wait for your patches and then adapt this patchset
to your model then. Let me know if I can help out with something..

Regards,
Srivatsa S. Bhat