When a cpu goes up some architectures (e.g. x86) have to walk the irq
space to set up the vector space for the cpu. While this needs extra
protection at the architecture level we can avoid a few race
conditions by preventing the concurrent allocation/free of irq
descriptors and the associated data.
When a cpu goes down it moves the interrupts which are targeted to
this cpu away by reassigning the affinities. While this happens
interrupts can be allocated and freed, which opens a can of race
conditions in the code which reassigns the affinities because
interrupt descriptors might be freed underneath.
Example:

CPU1                                    CPU2
cpu_up/down
  irq_desc = irq_to_desc(irq);
                                        remove_from_radix_tree(desc);
  raw_spin_lock(&desc->lock);
                                        free(desc);
We could protect the irq descriptors with RCU, but that would require
a full tree change of all accesses to interrupt descriptors. But
fortunately this kind of race condition is rather limited to a few
things like cpu hotplug. The normal setup/teardown is very well
serialized. So the simpler and obvious solution is:
Prevent allocation and freeing of interrupt descriptors across cpu
hotplug.
Signed-off-by: Thomas Gleixner <[email protected]>
---
include/linux/irqdesc.h | 7 ++++++-
kernel/cpu.c | 21 ++++++++++++++++++++-
kernel/irq/internals.h | 4 ----
3 files changed, 26 insertions(+), 6 deletions(-)
Index: tip/include/linux/irqdesc.h
===================================================================
--- tip.orig/include/linux/irqdesc.h
+++ tip/include/linux/irqdesc.h
@@ -90,7 +90,12 @@ struct irq_desc {
const char *name;
} ____cacheline_internodealigned_in_smp;
-#ifndef CONFIG_SPARSE_IRQ
+#ifdef CONFIG_SPARSE_IRQ
+extern void irq_lock_sparse(void);
+extern void irq_unlock_sparse(void);
+#else
+static inline void irq_lock_sparse(void) { }
+static inline void irq_unlock_sparse(void) { }
extern struct irq_desc irq_desc[NR_IRQS];
#endif
Index: tip/kernel/cpu.c
===================================================================
--- tip.orig/kernel/cpu.c
+++ tip/kernel/cpu.c
@@ -392,13 +392,19 @@ static int __ref _cpu_down(unsigned int
smpboot_park_threads(cpu);
/*
- * So now all preempt/rcu users must observe !cpu_active().
+ * Prevent irq alloc/free while the dying cpu reorganizes the
+ * interrupt affinities.
*/
+ irq_lock_sparse();
+ /*
+ * So now all preempt/rcu users must observe !cpu_active().
+ */
err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
if (err) {
/* CPU didn't die: tell everyone. Can't complain. */
cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
+ irq_unlock_sparse();
goto out_release;
}
BUG_ON(cpu_online(cpu));
@@ -415,6 +421,9 @@ static int __ref _cpu_down(unsigned int
smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */
per_cpu(cpu_dead_idle, cpu) = false;
+ /* Interrupts are moved away from the dying cpu, reenable alloc/free */
+ irq_unlock_sparse();
+
hotplug_cpu__broadcast_tick_pull(cpu);
/* This actually kills the CPU. */
__cpu_die(cpu);
@@ -517,8 +526,18 @@ static int _cpu_up(unsigned int cpu, int
goto out_notify;
}
+ /*
+ * Some architectures have to walk the irq descriptors to
+ * setup the vector space for the cpu which comes online.
+ * Prevent irq alloc/free across the bringup.
+ */
+ irq_lock_sparse();
+
/* Arch-specific enabling code. */
ret = __cpu_up(cpu, idle);
+
+ irq_unlock_sparse();
+
if (ret != 0)
goto out_notify;
BUG_ON(!cpu_online(cpu));
Index: tip/kernel/irq/internals.h
===================================================================
--- tip.orig/kernel/irq/internals.h
+++ tip/kernel/irq/internals.h
@@ -76,12 +76,8 @@ extern void unmask_threaded_irq(struct i
#ifdef CONFIG_SPARSE_IRQ
static inline void irq_mark_irq(unsigned int irq) { }
-extern void irq_lock_sparse(void);
-extern void irq_unlock_sparse(void);
#else
extern void irq_mark_irq(unsigned int irq);
-static inline void irq_lock_sparse(void) { }
-static inline void irq_unlock_sparse(void) { }
#endif
extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
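For reference, irq_lock_sparse() and irq_unlock_sparse() are the existing
serialization helpers in kernel/irq/irqdesc.c which the patch above merely
exposes to kernel/cpu.c; roughly (a simplified sketch, not the complete file):

static DEFINE_MUTEX(sparse_irq_lock);

void irq_lock_sparse(void)
{
        mutex_lock(&sparse_irq_lock);
}

void irq_unlock_sparse(void)
{
        mutex_unlock(&sparse_irq_lock);
}

/*
 * The descriptor alloc/free paths (__irq_alloc_descs(), irq_free_descs())
 * already serialize on the same mutex, so holding it across the hotplug
 * affinity walk keeps the radix tree and the descriptors stable.
 */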
Commit-ID: fc862aa8288be8ace91013375ff0e3c48815c662
Gitweb: http://git.kernel.org/tip/fc862aa8288be8ace91013375ff0e3c48815c662
Author: Thomas Gleixner <[email protected]>
AuthorDate: Sun, 5 Jul 2015 17:12:30 +0000
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 7 Jul 2015 11:33:44 +0200
hotplug: Prevent alloc/free of irq descriptors during cpu up/down
When a cpu goes up some architectures (e.g. x86) have to walk the irq
space to set up the vector space for the cpu. While this needs extra
protection at the architecture level we can avoid a few race
conditions by preventing the concurrent allocation/free of irq
descriptors and the associated data.
When a cpu goes down it moves the interrupts which are targeted to
this cpu away by reassigning the affinities. While this happens
interrupts can be allocated and freed, which opens a can of race
conditions in the code which reassigns the affinities because
interrupt descriptors might be freed underneath.
Example:

CPU1                                    CPU2
cpu_up/down
  irq_desc = irq_to_desc(irq);
                                        remove_from_radix_tree(desc);
  raw_spin_lock(&desc->lock);
                                        free(desc);
We could protect the irq descriptors with RCU, but that would require
a full tree change of all accesses to interrupt descriptors. But
fortunately this kind of race condition is rather limited to a few
things like cpu hotplug. The normal setup/teardown is very well
serialized. So the simpler and obvious solution is:
Prevent allocation and freeing of interrupt descriptors across cpu
hotplug.
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: xiao jin <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Yanmin Zhang <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
---
include/linux/irqdesc.h | 7 ++++++-
kernel/cpu.c | 21 ++++++++++++++++++++-
kernel/irq/internals.h | 4 ----
3 files changed, 26 insertions(+), 6 deletions(-)
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 624a668..fcea4e4 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -87,7 +87,12 @@ struct irq_desc {
const char *name;
} ____cacheline_internodealigned_in_smp;
-#ifndef CONFIG_SPARSE_IRQ
+#ifdef CONFIG_SPARSE_IRQ
+extern void irq_lock_sparse(void);
+extern void irq_unlock_sparse(void);
+#else
+static inline void irq_lock_sparse(void) { }
+static inline void irq_unlock_sparse(void) { }
extern struct irq_desc irq_desc[NR_IRQS];
#endif
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 9c9c9fa..fa6dc67 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -392,13 +392,19 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
smpboot_park_threads(cpu);
/*
- * So now all preempt/rcu users must observe !cpu_active().
+ * Prevent irq alloc/free while the dying cpu reorganizes the
+ * interrupt affinities.
*/
+ irq_lock_sparse();
+ /*
+ * So now all preempt/rcu users must observe !cpu_active().
+ */
err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
if (err) {
/* CPU didn't die: tell everyone. Can't complain. */
cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
+ irq_unlock_sparse();
goto out_release;
}
BUG_ON(cpu_online(cpu));
@@ -415,6 +421,9 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */
per_cpu(cpu_dead_idle, cpu) = false;
+ /* Interrupts are moved away from the dying cpu, reenable alloc/free */
+ irq_unlock_sparse();
+
hotplug_cpu__broadcast_tick_pull(cpu);
/* This actually kills the CPU. */
__cpu_die(cpu);
@@ -517,8 +526,18 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
goto out_notify;
}
+ /*
+ * Some architectures have to walk the irq descriptors to
+ * setup the vector space for the cpu which comes online.
+ * Prevent irq alloc/free across the bringup.
+ */
+ irq_lock_sparse();
+
/* Arch-specific enabling code. */
ret = __cpu_up(cpu, idle);
+
+ irq_unlock_sparse();
+
if (ret != 0)
goto out_notify;
BUG_ON(!cpu_online(cpu));
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 4834ee8..61008b8 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -76,12 +76,8 @@ extern void unmask_threaded_irq(struct irq_desc *desc);
#ifdef CONFIG_SPARSE_IRQ
static inline void irq_mark_irq(unsigned int irq) { }
-extern void irq_lock_sparse(void);
-extern void irq_unlock_sparse(void);
#else
extern void irq_mark_irq(unsigned int irq);
-static inline void irq_lock_sparse(void) { }
-static inline void irq_unlock_sparse(void) { }
#endif
extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
Commit-ID: bdcbafe3402cb337752c4c8bce3445ee4c5559a5
Gitweb: http://git.kernel.org/tip/bdcbafe3402cb337752c4c8bce3445ee4c5559a5
Author: Thomas Gleixner <[email protected]>
AuthorDate: Sun, 5 Jul 2015 17:12:30 +0000
Committer: Thomas Gleixner <[email protected]>
CommitDate: Tue, 7 Jul 2015 22:03:22 +0200
hotplug: Prevent alloc/free of irq descriptors during cpu up/down
When a cpu goes up some architectures (e.g. x86) have to walk the irq
space to set up the vector space for the cpu. While this needs extra
protection at the architecture level we can avoid a few race
conditions by preventing the concurrent allocation/free of irq
descriptors and the associated data.
When a cpu goes down it moves the interrupts which are targeted to
this cpu away by reassigning the affinities. While this happens
interrupts can be allocated and freed, which opens a can of race
conditions in the code which reassigns the affinities because
interrupt descriptors might be freed underneath.
Example:

CPU1                                    CPU2
cpu_up/down
  irq_desc = irq_to_desc(irq);
                                        remove_from_radix_tree(desc);
  raw_spin_lock(&desc->lock);
                                        free(desc);
We could protect the irq descriptors with RCU, but that would require
a full tree change of all accesses to interrupt descriptors. But
fortunately this kind of race condition is rather limited to a few
things like cpu hotplug. The normal setup/teardown is very well
serialized. So the simpler and obvious solution is:
Prevent allocation and freeing of interrupt descriptors across cpu
hotplug.
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: xiao jin <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Yanmin Zhang <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
---
kernel/cpu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index fa6dc67..6a37454 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -21,6 +21,7 @@
#include <linux/suspend.h>
#include <linux/lockdep.h>
#include <linux/tick.h>
+#include <linux/irq.h>
#include <trace/events/power.h>
#include "smpboot.h"
Commit-ID: a899418167264c7bac574b1a0f1b2c26c5b0995a
Gitweb: http://git.kernel.org/tip/a899418167264c7bac574b1a0f1b2c26c5b0995a
Author: Thomas Gleixner <[email protected]>
AuthorDate: Sun, 5 Jul 2015 17:12:30 +0000
Committer: Thomas Gleixner <[email protected]>
CommitDate: Wed, 8 Jul 2015 11:32:25 +0200
hotplug: Prevent alloc/free of irq descriptors during cpu up/down
When a cpu goes up some architectures (e.g. x86) have to walk the irq
space to set up the vector space for the cpu. While this needs extra
protection at the architecture level we can avoid a few race
conditions by preventing the concurrent allocation/free of irq
descriptors and the associated data.
When a cpu goes down it moves the interrupts which are targeted to
this cpu away by reassigning the affinities. While this happens
interrupts can be allocated and freed, which opens a can of race
conditions in the code which reassigns the affinities because
interrupt descriptors might be freed underneath.
Example:

CPU1                                    CPU2
cpu_up/down
  irq_desc = irq_to_desc(irq);
                                        remove_from_radix_tree(desc);
  raw_spin_lock(&desc->lock);
                                        free(desc);
We could protect the irq descriptors with RCU, but that would require
a full tree change of all accesses to interrupt descriptors. But
fortunately this kind of race condition is rather limited to a few
things like cpu hotplug. The normal setup/teardown is very well
serialized. So the simpler and obvious solution is:
Prevent allocation and freeing of interrupt descriptors across cpu
hotplug.
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: xiao jin <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Yanmin Zhang <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
---
include/linux/irqdesc.h | 7 ++++++-
kernel/cpu.c | 22 +++++++++++++++++++++-
kernel/irq/internals.h | 4 ----
3 files changed, 27 insertions(+), 6 deletions(-)
diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index 624a668..fcea4e4 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -87,7 +87,12 @@ struct irq_desc {
const char *name;
} ____cacheline_internodealigned_in_smp;
-#ifndef CONFIG_SPARSE_IRQ
+#ifdef CONFIG_SPARSE_IRQ
+extern void irq_lock_sparse(void);
+extern void irq_unlock_sparse(void);
+#else
+static inline void irq_lock_sparse(void) { }
+static inline void irq_unlock_sparse(void) { }
extern struct irq_desc irq_desc[NR_IRQS];
#endif
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 9c9c9fa..6a37454 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -21,6 +21,7 @@
#include <linux/suspend.h>
#include <linux/lockdep.h>
#include <linux/tick.h>
+#include <linux/irq.h>
#include <trace/events/power.h>
#include "smpboot.h"
@@ -392,13 +393,19 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
smpboot_park_threads(cpu);
/*
- * So now all preempt/rcu users must observe !cpu_active().
+ * Prevent irq alloc/free while the dying cpu reorganizes the
+ * interrupt affinities.
*/
+ irq_lock_sparse();
+ /*
+ * So now all preempt/rcu users must observe !cpu_active().
+ */
err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
if (err) {
/* CPU didn't die: tell everyone. Can't complain. */
cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
+ irq_unlock_sparse();
goto out_release;
}
BUG_ON(cpu_online(cpu));
@@ -415,6 +422,9 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */
per_cpu(cpu_dead_idle, cpu) = false;
+ /* Interrupts are moved away from the dying cpu, reenable alloc/free */
+ irq_unlock_sparse();
+
hotplug_cpu__broadcast_tick_pull(cpu);
/* This actually kills the CPU. */
__cpu_die(cpu);
@@ -517,8 +527,18 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
goto out_notify;
}
+ /*
+ * Some architectures have to walk the irq descriptors to
+ * setup the vector space for the cpu which comes online.
+ * Prevent irq alloc/free across the bringup.
+ */
+ irq_lock_sparse();
+
/* Arch-specific enabling code. */
ret = __cpu_up(cpu, idle);
+
+ irq_unlock_sparse();
+
if (ret != 0)
goto out_notify;
BUG_ON(!cpu_online(cpu));
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 4834ee8..61008b8 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -76,12 +76,8 @@ extern void unmask_threaded_irq(struct irq_desc *desc);
#ifdef CONFIG_SPARSE_IRQ
static inline void irq_mark_irq(unsigned int irq) { }
-extern void irq_lock_sparse(void);
-extern void irq_unlock_sparse(void);
#else
extern void irq_mark_irq(unsigned int irq);
-static inline void irq_lock_sparse(void) { }
-static inline void irq_unlock_sparse(void) { }
#endif
extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
On 07/05/2015 01:12 PM, Thomas Gleixner wrote:
> When a cpu goes up some architectures (e.g. x86) have to walk the irq
> space to set up the vector space for the cpu. While this needs extra
> protection at the architecture level we can avoid a few race
> conditions by preventing the concurrent allocation/free of irq
> descriptors and the associated data.
>
> When a cpu goes down it moves the interrupts which are targeted to
> this cpu away by reassigning the affinities. While this happens
> interrupts can be allocated and freed, which opens a can of race
> conditions in the code which reassignes the affinities because
> interrupt descriptors might be freed underneath.
>
> Example:
>
> CPU1 CPU2
> cpu_up/down
> irq_desc = irq_to_desc(irq);
> remove_from_radix_tree(desc);
> raw_spin_lock(&desc->lock);
> free(desc);
>
> We could protect the irq descriptors with RCU, but that would require
> a full tree change of all accesses to interrupt descriptors. But
> fortunately these kind of race conditions are rather limited to a few
> things like cpu hotplug. The normal setup/teardown is very well
> serialized. So the simpler and obvious solution is:
>
> Prevent allocation and freeing of interrupt descriptors accross cpu
> hotplug.
This breaks Xen guests that allocate interrupt descriptors in .cpu_up().
Any chance this locking can be moved into arch code? Otherwise we will
need to have something like arch_post_cpu_up() after the lock is released.
(The patch doesn't appear to have any side effects for the down path
since Xen guests deallocate descriptors in __cpu_die()).
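A hypothetical sketch of the arch_post_cpu_up() idea mentioned above; no such
hook exists upstream, this only illustrates the shape:

#include <linux/cpu.h>

/* kernel/cpu.c: weak default, a no-op for architectures that don't need it */
int __weak arch_post_cpu_up(unsigned int cpu)
{
        return 0;
}

/*
 * _cpu_up() would invoke it once the sparse irq lock has been dropped:
 *
 *      irq_lock_sparse();
 *      ret = __cpu_up(cpu, idle);              // no irq alloc/free in here
 *      irq_unlock_sparse();
 *      if (!ret)
 *              ret = arch_post_cpu_up(cpu);    // Xen could allocate irqs here
 */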
-boris
>
> Signed-off-by: Thomas Gleixner<[email protected]>
> ---
> include/linux/irqdesc.h | 7 ++++++-
> kernel/cpu.c | 21 ++++++++++++++++++++-
> kernel/irq/internals.h | 4 ----
> 3 files changed, 26 insertions(+), 6 deletions(-)
>
> Index: tip/include/linux/irqdesc.h
> ===================================================================
> --- tip.orig/include/linux/irqdesc.h
> +++ tip/include/linux/irqdesc.h
> @@ -90,7 +90,12 @@ struct irq_desc {
> const char *name;
> } ____cacheline_internodealigned_in_smp;
>
> -#ifndef CONFIG_SPARSE_IRQ
> +#ifdef CONFIG_SPARSE_IRQ
> +extern void irq_lock_sparse(void);
> +extern void irq_unlock_sparse(void);
> +#else
> +static inline void irq_lock_sparse(void) { }
> +static inline void irq_unlock_sparse(void) { }
> extern struct irq_desc irq_desc[NR_IRQS];
> #endif
>
> Index: tip/kernel/cpu.c
> ===================================================================
> --- tip.orig/kernel/cpu.c
> +++ tip/kernel/cpu.c
> @@ -392,13 +392,19 @@ static int __ref _cpu_down(unsigned int
> smpboot_park_threads(cpu);
>
> /*
> - * So now all preempt/rcu users must observe !cpu_active().
> + * Prevent irq alloc/free while the dying cpu reorganizes the
> + * interrupt affinities.
> */
> + irq_lock_sparse();
>
> + /*
> + * So now all preempt/rcu users must observe !cpu_active().
> + */
> err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
> if (err) {
> /* CPU didn't die: tell everyone. Can't complain. */
> cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
> + irq_unlock_sparse();
> goto out_release;
> }
> BUG_ON(cpu_online(cpu));
> @@ -415,6 +421,9 @@ static int __ref _cpu_down(unsigned int
> smp_mb(); /* Read from cpu_dead_idle before __cpu_die(). */
> per_cpu(cpu_dead_idle, cpu) = false;
>
> + /* Interrupts are moved away from the dying cpu, reenable alloc/free */
> + irq_unlock_sparse();
> +
> hotplug_cpu__broadcast_tick_pull(cpu);
> /* This actually kills the CPU. */
> __cpu_die(cpu);
> @@ -517,8 +526,18 @@ static int _cpu_up(unsigned int cpu, int
> goto out_notify;
> }
>
> + /*
> + * Some architectures have to walk the irq descriptors to
> + * setup the vector space for the cpu which comes online.
> + * Prevent irq alloc/free across the bringup.
> + */
> + irq_lock_sparse();
> +
> /* Arch-specific enabling code. */
> ret = __cpu_up(cpu, idle);
> +
> + irq_unlock_sparse();
> +
> if (ret != 0)
> goto out_notify;
> BUG_ON(!cpu_online(cpu));
> Index: tip/kernel/irq/internals.h
> ===================================================================
> --- tip.orig/kernel/irq/internals.h
> +++ tip/kernel/irq/internals.h
> @@ -76,12 +76,8 @@ extern void unmask_threaded_irq(struct i
>
> #ifdef CONFIG_SPARSE_IRQ
> static inline void irq_mark_irq(unsigned int irq) { }
> -extern void irq_lock_sparse(void);
> -extern void irq_unlock_sparse(void);
> #else
> extern void irq_mark_irq(unsigned int irq);
> -static inline void irq_lock_sparse(void) { }
> -static inline void irq_unlock_sparse(void) { }
> #endif
>
> extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
>
>
> --
On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> > Prevent allocation and freeing of interrupt descriptors accross cpu
> > hotplug.
>
>
> This breaks Xen guests that allocate interrupt descriptors in .cpu_up().
And where exactly does XEN allocate those descriptors?
> Any chance this locking can be moved into arch code?
No.
> (The patch doesn't appear to have any side effects for the down path since Xen
> guests deallocate descriptors in __cpu_die()).
Exact place please.
Thanks,
tglx
On 07/14/2015 11:44 AM, Thomas Gleixner wrote:
> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>>> Prevent allocation and freeing of interrupt descriptors accross cpu
>>> hotplug.
>>
>> This breaks Xen guests that allocate interrupt descriptors in .cpu_up().
> And where exactly does XEN allocate those descriptors?
xen_cpu_up()
xen_setup_timer()
bind_virq_to_irqhandler()
bind_virq_to_irq()
xen_allocate_irq_dynamic()
xen_allocate_irqs_dynamic()
irq_alloc_descs()
There is also a similar pass via xen_cpu_up() -> xen_smp_intr_init()
>
>> Any chance this locking can be moved into arch code?
> No.
>
>> (The patch doesn't appear to have any side effects for the down path since Xen
>> guests deallocate descriptors in __cpu_die()).
>
> Exact place please.
Whose place? Where descriptors are deallocated?
__cpu_die()
xen_cpu_die()
xen_teardown_timer()
unbind_from_irqhandler()
unbind_from_irq()
__unbind_from_irq()
xen_free_irq()
irq_free_descs()
free_desc()
-boris
On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> On 07/14/2015 11:44 AM, Thomas Gleixner wrote:
> > On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> > > > Prevent allocation and freeing of interrupt descriptors accross cpu
> > > > hotplug.
> > >
> > > This breaks Xen guests that allocate interrupt descriptors in .cpu_up().
> > And where exactly does XEN allocate those descriptors?
>
> xen_cpu_up()
> xen_setup_timer()
> bind_virq_to_irqhandler()
> bind_virq_to_irq()
> xen_allocate_irq_dynamic()
> xen_allocate_irqs_dynamic()
> irq_alloc_descs()
>
>
> There is also a similar pass via xen_cpu_up() -> xen_smp_intr_init()
Sigh.
>
> >
> > > Any chance this locking can be moved into arch code?
> > No.
The issue here is that all architectures need that protection, and only
Xen does irq allocations in cpu_up().
So moving that protection into architecture code is not really an
option.
> > > Otherwise we will need to have something like arch_post_cpu_up()
> > > after the lock is released.
I'm not sure that this will work. You probably want to do this in the
cpu prepare stage, i.e. before calling __cpu_up().
I have to walk the dogs now. Will look into it later tonight.
> > > (The patch doesn't appear to have any side effects for the down path since
> > > Xen
> > > guests deallocate descriptors in __cpu_die()).
> > Exact place please.
>
> Whose place? Where descriptors are deallocated?
>
> __cpu_die()
> xen_cpu_die()
> xen_teardown_timer()
> unbind_from_irqhandler()
> unbind_from_irq()
> __unbind_from_irq()
> xen_free_irq()
> irq_free_descs()
> free_desc()
Right, that's outside the lock held region.
Thanks,
tglx
On 07/14/2015 01:32 PM, Thomas Gleixner wrote:
> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>> On 07/14/2015 11:44 AM, Thomas Gleixner wrote:
>>> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>>>>> Prevent allocation and freeing of interrupt descriptors accross cpu
>>>>> hotplug.
>>>> This breaks Xen guests that allocate interrupt descriptors in .cpu_up().
>>> And where exactly does XEN allocate those descriptors?
>> xen_cpu_up()
>> xen_setup_timer()
>> bind_virq_to_irqhandler()
>> bind_virq_to_irq()
>> xen_allocate_irq_dynamic()
>> xen_allocate_irqs_dynamic()
>> irq_alloc_descs()
>>
>>
>> There is also a similar pass via xen_cpu_up() -> xen_smp_intr_init()
> Sigh.
>
>>>
>>>> Any chance this locking can be moved into arch code?
>>> No.
> The issue here is that all architectures need that protection and just
> Xen does irq allocations in cpu_up.
>
> So moving that protection into architecture code is not really an
> option.
>
>>>> Otherwise we will need to have something like arch_post_cpu_up()
>>>> after the lock is released.
> I'm not sure, that this will work. You probably want to do this in the
> cpu prepare stage, i.e. before calling __cpu_up().
For PV guests (the ones that use xen_cpu_up()) it will work either
before or after __cpu_up(). At least my (somewhat limited) testing
didn't show any problems so far.
However, HVM CPUs use xen_hvm_cpu_up(), and if you read the comments there
you will see that xen_smp_intr_init() needs to be called before
native_cpu_up() but xen_init_lock_cpu() (which eventually calls
irq_alloc_descs()) needs to be called after.
I think I can split xen_init_lock_cpu() so that the part that needs to
be called after will avoid going into irq core code. And then the rest
will go into arch_cpu_prepare().
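Very roughly, the split described above might look like the sketch below;
apart from xen_init_lock_cpu() itself, the helper names are placeholders, not
existing Xen functions:

/* Hypothetical sketch of the proposed split; xen_lock_cpu_prepare() and the
 * *_lock_kicker_irq() helpers are invented names for illustration only. */
int xen_alloc_lock_kicker_irq(unsigned int cpu);  /* placeholder: irq_alloc_descs() path */
void xen_bind_lock_kicker_irq(unsigned int cpu);  /* placeholder: no irq core code */

/* Runs in the cpu prepare stage, before __cpu_up(), where irq alloc is safe. */
int xen_lock_cpu_prepare(unsigned int cpu)
{
        return xen_alloc_lock_kicker_irq(cpu);
}

/* Still runs after native_cpu_up(), as the HVM ordering requires, but now
 * only wires up the interrupt that was allocated in the prepare stage. */
void xen_init_lock_cpu(int cpu)
{
        xen_bind_lock_kicker_irq(cpu);
}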
-boris
On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> On 07/14/2015 01:32 PM, Thomas Gleixner wrote:
> > On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> > > On 07/14/2015 11:44 AM, Thomas Gleixner wrote:
> > > > On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> > > > > > Prevent allocation and freeing of interrupt descriptors accross cpu
> > > > > > hotplug.
> > > > > This breaks Xen guests that allocate interrupt descriptors in
> > > > > .cpu_up().
> > > > And where exactly does XEN allocate those descriptors?
> > > xen_cpu_up()
> > > xen_setup_timer()
> > > bind_virq_to_irqhandler()
> > > bind_virq_to_irq()
> > > xen_allocate_irq_dynamic()
> > > xen_allocate_irqs_dynamic()
> > > irq_alloc_descs()
> > >
> > >
> > > There is also a similar pass via xen_cpu_up() -> xen_smp_intr_init()
> > Sigh.
> >
> > > >
> > > > > Any chance this locking can be moved into arch code?
> > > > No.
> > The issue here is that all architectures need that protection and just
> > Xen does irq allocations in cpu_up.
> >
> > So moving that protection into architecture code is not really an
> > option.
> >
> > > > > Otherwise we will need to have something like arch_post_cpu_up()
> > > > > after the lock is released.
> > I'm not sure, that this will work. You probably want to do this in the
> > cpu prepare stage, i.e. before calling __cpu_up().
>
> For PV guests (the ones that use xen_cpu_up()) it will work either before or
> after __cpu_up(). At least my (somewhat limited) testing didn't show any
> problems so far.
>
> However, HVM CPUs use xen_hvm_cpu_up() and if you read comments there you will
> see that xen_smp_intr_init() needs to be called before native_cpu_up() but
> xen_init_lock_cpu() (which eventually calls irq_alloc_descs()) needs to be
> called after.
>
> I think I can split xen_init_lock_cpu() so that the part that needs to be
> called after will avoid going into irq core code. And then the rest will go
> into arch_cpu_prepare().
I think we should revisit this for 4.3. For 4.2 we can do the trivial
variant and move the locking into native_cpu_up() on x86 only. x86 was
the only arch on which such wreckage has been seen in the wild, but we
should have that protection for all archs in the long run.
Patch below should fix the issue.
Thanks,
tglx
---
commit d4a969314077914a623f3e2c5120cd2ef31aba30
Author: Thomas Gleixner <[email protected]>
Date: Tue Jul 14 22:03:57 2015 +0200
genirq: Revert sparse irq locking around __cpu_up() and move it to x86 for now
Boris reported that the sparse_irq protection around __cpu_up() in the
generic code causes a regression on Xen. Xen allocates interrupts, among
other things, in the xen_cpu_up() function, so it deadlocks on the
sparse_irq_lock.
There is no simple fix for this and we really should have the
protection for all architectures, but for now the only solution is to
move it to x86 where actual wreckage due to the lack of protection has
been observed.
Reported-by: Boris Ostrovsky <[email protected]>
Fixes: a89941816726 'hotplug: Prevent alloc/free of irq descriptors during cpu up/down'
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: xiao jin <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Yanmin Zhang <[email protected]>
Cc: xen-devel <[email protected]>
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index d3010aa79daf..b1f3ed9c7a9e 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -992,8 +992,17 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
common_cpu_up(cpu, tidle);
+ /*
+ * We have to walk the irq descriptors to setup the vector
+ * space for the cpu which comes online. Prevent irq
+ * alloc/free across the bringup.
+ */
+ irq_lock_sparse();
+
err = do_boot_cpu(apicid, cpu, tidle);
+
if (err) {
+ irq_unlock_sparse();
pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
return -EIO;
}
@@ -1011,6 +1020,8 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
touch_nmi_watchdog();
}
+ irq_unlock_sparse();
+
return 0;
}
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6a374544d495..5644ec5582b9 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -527,18 +527,9 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
goto out_notify;
}
- /*
- * Some architectures have to walk the irq descriptors to
- * setup the vector space for the cpu which comes online.
- * Prevent irq alloc/free across the bringup.
- */
- irq_lock_sparse();
-
/* Arch-specific enabling code. */
ret = __cpu_up(cpu, idle);
- irq_unlock_sparse();
-
if (ret != 0)
goto out_notify;
BUG_ON(!cpu_online(cpu));
On 07/14/2015 04:15 PM, Thomas Gleixner wrote:
> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>> On 07/14/2015 01:32 PM, Thomas Gleixner wrote:
>>> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>>>> On 07/14/2015 11:44 AM, Thomas Gleixner wrote:
>>>>> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>>>>>>> Prevent allocation and freeing of interrupt descriptors accross cpu
>>>>>>> hotplug.
>>>>>> This breaks Xen guests that allocate interrupt descriptors in
>>>>>> .cpu_up().
>>>>> And where exactly does XEN allocate those descriptors?
>>>> xen_cpu_up()
>>>> xen_setup_timer()
>>>> bind_virq_to_irqhandler()
>>>> bind_virq_to_irq()
>>>> xen_allocate_irq_dynamic()
>>>> xen_allocate_irqs_dynamic()
>>>> irq_alloc_descs()
>>>>
>>>>
>>>> There is also a similar pass via xen_cpu_up() -> xen_smp_intr_init()
>>> Sigh.
>>>
>>>>>
>>>>>> Any chance this locking can be moved into arch code?
>>>>> No.
>>> The issue here is that all architectures need that protection and just
>>> Xen does irq allocations in cpu_up.
>>>
>>> So moving that protection into architecture code is not really an
>>> option.
>>>
>>>>>> Otherwise we will need to have something like arch_post_cpu_up()
>>>>>> after the lock is released.
>>> I'm not sure, that this will work. You probably want to do this in the
>>> cpu prepare stage, i.e. before calling __cpu_up().
>> For PV guests (the ones that use xen_cpu_up()) it will work either before or
>> after __cpu_up(). At least my (somewhat limited) testing didn't show any
>> problems so far.
>>
>> However, HVM CPUs use xen_hvm_cpu_up() and if you read comments there you will
>> see that xen_smp_intr_init() needs to be called before native_cpu_up() but
>> xen_init_lock_cpu() (which eventually calls irq_alloc_descs()) needs to be
>> called after.
>>
>> I think I can split xen_init_lock_cpu() so that the part that needs to be
>> called after will avoid going into irq core code. And then the rest will go
>> into arch_cpu_prepare().
> I think we should revisit this for 4.3. For 4.2 we can do the trivial
> variant and move the locking in native_cpu_up() and x86 only. x86 was
> the only arch on which such wreckage has been seen in the wild, but we
> should have that protection for all archs in the long run.
>
> Patch below should fix the issue.
Thanks! Most of my tests passed, I had a couple of failures but I will
need to see whether they are related to this patch.
-boris
Boris,
On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
> On 07/14/2015 04:15 PM, Thomas Gleixner wrote:
> > > > The issue here is that all architectures need that protection and just
> > > > Xen does irq allocations in cpu_up.
> > > >
> > > > So moving that protection into architecture code is not really an
> > > > option.
> > > >
> > > > > > > Otherwise we will need to have something like arch_post_cpu_up()
> > > > > > > after the lock is released.
> > > > I'm not sure, that this will work. You probably want to do this in the
> > > > cpu prepare stage, i.e. before calling __cpu_up().
> > > For PV guests (the ones that use xen_cpu_up()) it will work either before
> > > or
> > > after __cpu_up(). At least my (somewhat limited) testing didn't show any
> > > problems so far.
> > >
> > > However, HVM CPUs use xen_hvm_cpu_up() and if you read comments there you
> > > will
> > > see that xen_smp_intr_init() needs to be called before native_cpu_up() but
> > > xen_init_lock_cpu() (which eventually calls irq_alloc_descs()) needs to be
> > > called after.
> > >
> > > I think I can split xen_init_lock_cpu() so that the part that needs to be
> > > called after will avoid going into irq core code. And then the rest will
> > > go
> > > into arch_cpu_prepare().
> > I think we should revisit this for 4.3. For 4.2 we can do the trivial
> > variant and move the locking in native_cpu_up() and x86 only. x86 was
> > the only arch on which such wreckage has been seen in the wild, but we
> > should have that protection for all archs in the long run.
> >
> > Patch below should fix the issue.
>
> Thanks! Most of my tests passed, I had a couple of failures but I will need to
> see whether they are related to this patch.
Did you ever come around to address that irq allocation from within cpu_up()?
I really want to generalize the protection instead of carrying that x86 only
hack forever.
Thanks,
tglx
On 03/12/2016 04:19 AM, Thomas Gleixner wrote:
> Boris,
>
> On Tue, 14 Jul 2015, Boris Ostrovsky wrote:
>> On 07/14/2015 04:15 PM, Thomas Gleixner wrote:
>>>>> The issue here is that all architectures need that protection and just
>>>>> Xen does irq allocations in cpu_up.
>>>>>
>>>>> So moving that protection into architecture code is not really an
>>>>> option.
>>>>>
>>>>>>>> Otherwise we will need to have something like arch_post_cpu_up()
>>>>>>>> after the lock is released.
>>>>> I'm not sure, that this will work. You probably want to do this in the
>>>>> cpu prepare stage, i.e. before calling __cpu_up().
>>>> For PV guests (the ones that use xen_cpu_up()) it will work either before
>>>> or
>>>> after __cpu_up(). At least my (somewhat limited) testing didn't show any
>>>> problems so far.
>>>>
>>>> However, HVM CPUs use xen_hvm_cpu_up() and if you read comments there you
>>>> will
>>>> see that xen_smp_intr_init() needs to be called before native_cpu_up() but
>>>> xen_init_lock_cpu() (which eventually calls irq_alloc_descs()) needs to be
>>>> called after.
>>>>
>>>> I think I can split xen_init_lock_cpu() so that the part that needs to be
>>>> called after will avoid going into irq core code. And then the rest will
>>>> go
>>>> into arch_cpu_prepare().
>>> I think we should revisit this for 4.3. For 4.2 we can do the trivial
>>> variant and move the locking in native_cpu_up() and x86 only. x86 was
>>> the only arch on which such wreckage has been seen in the wild, but we
>>> should have that protection for all archs in the long run.
>>>
>>> Patch below should fix the issue.
>> Thanks! Most of my tests passed, I had a couple of failures but I will need to
>> see whether they are related to this patch.
> Did you ever come around to address that irq allocation from within cpu_up()?
>
> I really want to generalize the protection instead of carrying that x86 only
> hack forever.
Sorry, I completely forgot about this. Let me see how I can take the
allocations out from under the lock. I might just be able to put them in
CPU notifiers --- most into CPU_UP_PREPARE, but the spinlock interrupt may
need to go into CPU_ONLINE.
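For what it's worth, that notifier-based split could look roughly like the
sketch below, using the CPU notifier API of that era; the xen_* helpers named
here are placeholders, not existing functions:

#include <linux/cpu.h>
#include <linux/notifier.h>

int xen_alloc_percpu_irqs(unsigned int cpu);    /* placeholder: irq_alloc_descs() etc. */
int xen_setup_spinlock_irq(unsigned int cpu);   /* placeholder: lock kicker IPI setup */

static int xen_cpu_notify(struct notifier_block *nb, unsigned long action,
                          void *hcpu)
{
        unsigned int cpu = (unsigned long)hcpu;

        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_UP_PREPARE:
                /* Runs before __cpu_up(), i.e. before sparse_irq_lock is
                 * taken, so irq descriptors can be allocated here. */
                return notifier_from_errno(xen_alloc_percpu_irqs(cpu));
        case CPU_ONLINE:
                /* Runs after the lock is released and the cpu is up; the
                 * spinlock interrupt can be set up at this point. */
                return notifier_from_errno(xen_setup_spinlock_irq(cpu));
        }
        return NOTIFY_OK;
}

static struct notifier_block xen_cpu_nb = {
        .notifier_call = xen_cpu_notify,
};

/* Registered once during Xen SMP init: register_cpu_notifier(&xen_cpu_nb); */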
-boris