2010-11-12 14:51:09

by Don Zickus

Subject: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases

There are some paths that walk the die_chain with preemption on.
Make sure we are in an NMI call before we start doing anything.
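
A minimal sketch of the resulting pattern (illustrative only, not the
handler in the diff below; the function name is made up):

	#include <linux/kdebug.h>	/* struct die_args, DIE_NMI */
	#include <linux/kernel.h>	/* printk() */
	#include <linux/notifier.h>	/* NOTIFY_DONE, NOTIFY_STOP */
	#include <linux/smp.h>		/* smp_processor_id() */

	static int example_die_handler(struct notifier_block *self,
				       unsigned long cmd, void *__args)
	{
		struct die_args *args = __args;
		int cpu;

		if (cmd != DIE_NMI)
			return NOTIFY_DONE;	/* e.g. DIE_GPF may arrive preemptible */

		/* Safe only now: NMI context cannot be preempted or migrated. */
		cpu = smp_processor_id();
		printk(KERN_INFO "NMI on cpu %d, ip %lx\n", cpu, args->regs->ip);
		return NOTIFY_STOP;
	}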

Reported-by: Jan Kiszka <[email protected]>
Signed-off-by: Don Zickus <[email protected]>
---
arch/x86/kernel/apic/hw_nmi.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 5c4f952..ef4755d 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -49,7 +49,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
{
struct die_args *args = __args;
struct pt_regs *regs;
- int cpu = smp_processor_id();
+ int cpu;

switch (cmd) {
case DIE_NMI:
@@ -60,6 +60,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
}

regs = args->regs;
+ cpu = smp_processor_id();

if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
--
1.7.2.3


2010-11-12 14:51:18

by Don Zickus

Subject: [PATCH 2/3] x86, hw_nmi: Move backtrace_mask declaration under ARCH_HAS_NMI_WATCHDOG.

From: Rakib Mullick <[email protected]>

backtrace_mask is only used by code inside the ARCH_HAS_NMI_WATCHDOG
context, so move its declaration under that guard. This fixes the
following warning:

arch/x86/kernel/apic/hw_nmi.c:21: warning: ‘backtrace_mask’ defined but not used

Signed-off-by: Rakib Mullick <[email protected]>
Signed-off-by: Don Zickus <[email protected]>
---
arch/x86/kernel/apic/hw_nmi.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index ef4755d..f349647 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -17,15 +17,16 @@
#include <linux/nmi.h>
#include <linux/module.h>

-/* For reliability, we're prepared to waste bits here. */
-static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
-
u64 hw_nmi_get_sample_period(void)
{
return (u64)(cpu_khz) * 1000 * 60;
}

#ifdef ARCH_HAS_NMI_WATCHDOG
+
+/* For reliability, we're prepared to waste bits here. */
+static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
+
void arch_trigger_all_cpu_backtrace(void)
{
int i;
--
1.7.2.3

2010-11-12 14:51:27

by Don Zickus

Subject: [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time

From: Dongdong Deng <[email protected]>

The spin_lock_debug/rcu_cpu_stall detectors use
trigger_all_cpu_backtrace() to dump cpu backtraces.
It is therefore possible for trigger_all_cpu_backtrace()
to be called at the same time on different CPUs, which
triggers an 'unknown reason NMI' warning. The following case
illustrates the problem:

         CPU1                     CPU2           ...          CPU N
  trigger_all_cpu_backtrace()
   set "backtrace_mask" to cpu mask
              |
  generate NMI interrupts   generate NMI interrupts          ...
               \                     |                       /
                \                    |                      /

The "backtrace_mask" will be cleaned by the first NMI interrupt
at nmi_watchdog_tick(), then the following NMI interrupts generated
by other cpus's arch_trigger_all_cpu_backtrace() will be taken as
unknown reason NMI interrupts.
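
On the receiving side each cpu clears only its own bit, so a later NMI
finds the mask already empty (simplified sketch of the existing handler
logic in hw_nmi.c):

	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
		arch_spin_lock(&lock);
		printk(KERN_WARNING "NMI backtrace for cpu %d\n", cpu);
		show_regs(regs);
		arch_spin_unlock(&lock);
		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
		return NOTIFY_STOP;
	}
	return NOTIFY_DONE;	/* bit already cleared -> "unknown reason NMI" */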

This patch uses a test_and_set to avoid the problem: if a
trigger_all_cpu_backtrace() is already in progress,
arch_trigger_all_cpu_backtrace() returns early instead of
dumping a duplicate cpu backtrace.

Signed-off-by: Dongdong Deng <[email protected]>
Reviewed-by: Bruce Ashfield <[email protected]>
CC: Thomas Gleixner <[email protected]>
CC: Ingo Molnar <[email protected]>
CC: "H. Peter Anvin" <[email protected]>
CC: [email protected]
CC: [email protected]
Signed-off-by: Don Zickus <[email protected]>
---
arch/x86/kernel/apic/hw_nmi.c | 24 ++++++++++++++++++++++++
1 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index f349647..d892896 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -27,9 +27,27 @@ u64 hw_nmi_get_sample_period(void)
/* For reliability, we're prepared to waste bits here. */
static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;

+/* "in progress" flag of arch_trigger_all_cpu_backtrace */
+static unsigned long backtrace_flag;
+
void arch_trigger_all_cpu_backtrace(void)
{
int i;
+ unsigned long flags;
+
+ /*
+ * Have to disable irq here, as the
+ * arch_trigger_all_cpu_backtrace() could be
+ * triggered by "spin_lock()" with irqs on.
+ */
+ local_irq_save(flags);
+
+ if (test_and_set_bit(0, &backtrace_flag))
+ /*
+ * If there is already a trigger_all_cpu_backtrace() in progress
+ * (backtrace_flag == 1), don't output double cpu dump infos.
+ */
+ goto out_restore_irq;

cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);

@@ -42,6 +60,12 @@ void arch_trigger_all_cpu_backtrace(void)
break;
mdelay(1);
}
+
+ clear_bit(0, &backtrace_flag);
+ smp_mb__after_clear_bit();
+
+out_restore_irq:
+ local_irq_restore(flags);
}

static int __kprobes
--
1.7.2.3

2010-11-18 08:14:18

by Ingo Molnar

Subject: Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases


* Don Zickus <[email protected]> wrote:

> There are some paths that walk the die_chain with preemption on.

What are those codepaths? At minimum it's worth documenting them.

Thanks,

Ingo

2010-11-18 14:22:28

by Don Zickus

Subject: Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases

On Thu, Nov 18, 2010 at 09:14:07AM +0100, Ingo Molnar wrote:
>
> * Don Zickus <[email protected]> wrote:
>
> > There are some paths that walk the die_chain with preemption on.
>
> What are those codepaths? At minimum it's worth documenting them.

Well, the one that caused the bug was do_general_protection, which walks
the die_chain with DIE_GPF.
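
That path looks roughly like this (simplified sketch of the call site in
arch/x86/kernel/traps.c):

	dotraplinkage void
	do_general_protection(struct pt_regs *regs, long error_code)
	{
		/* ... kernel-mode fault, preemption may still be enabled ... */
		if (notify_die(DIE_GPF, "general protection fault", regs,
			       error_code, 13, SIGSEGV) == NOTIFY_STOP)
			return;
		/* ... */
	}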

I can document them, though it might be time-consuming to audit them and
hope they don't change. I guess my bigger question is: is it expected
that anyone who calls the die_chain has preemption disabled? If not,
does it matter if we document it?

Cheers,
Don

2010-11-18 14:49:34

by Ingo Molnar

Subject: Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases


* Don Zickus <[email protected]> wrote:

> On Thu, Nov 18, 2010 at 09:14:07AM +0100, Ingo Molnar wrote:
> >
> > * Don Zickus <[email protected]> wrote:
> >
> > > There are some paths that walk the die_chain with preemption on.
> >
> > What are those codepaths? At minimum it's worth documenting them.
>
> Well, the one that caused the bug was do_general_protection, which walks
> the die_chain with DIE_GPF.
>
> I can document them, though it might be time-consuming to audit them and
> hope they don't change.

Listing one example is enough.

> [...] I guess my bigger question is: is it expected that anyone who calls
> the die_chain has preemption disabled? If not, does it matter if we
> document it?

Yes, it might be a bug to call those handlers with preemption on (or even with irqs
on). But if the code is fine as-is then documenting a single example would be nice.

Thanks,

Ingo

2010-11-18 15:36:15

by Don Zickus

Subject: Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases

On Thu, Nov 18, 2010 at 03:49:21PM +0100, Ingo Molnar wrote:
>
> * Don Zickus <[email protected]> wrote:
>
> > On Thu, Nov 18, 2010 at 09:14:07AM +0100, Ingo Molnar wrote:
> > >
> > > * Don Zickus <[email protected]> wrote:
> > >
> > > > There are some paths that walk the die_chain with preemption on.
> > >
> > > What are those codepaths? At minimum it's worth documenting them.
> >
> > Well, the one that caused the bug was do_general_protection, which walks
> > the die_chain with DIE_GPF.
> >
> > I can document them, though it might be time-consuming to audit them and
> > hope they don't change.
>
> Listing one example is enough.
>
> > [...] I guess my bigger question is: is it expected that anyone who calls
> > the die_chain has preemption disabled? If not, does it matter if we
> > document it?
>
> Yes, it might be a bug to call those handlers with preemption on (or even with irqs
> on). But if the code is fine as-is then documenting a single example would be nice.
>

Is this better?

Cheers,
Don

------------------------------------->
From: Don Zickus <[email protected]>
Date: Mon, 1 Nov 2010 13:34:33 -0400
Subject: [PATCH 1/6] x86: only call smp_processor_id in non-preempt cases

There are some paths that walk the die_chain with preemption on.
Make sure we are in an NMI call before we start doing anything.

This was triggered by do_general_protection calling notify_die with
DIE_GPF.

Reported-by: Jan Kiszka <[email protected]>
Signed-off-by: Don Zickus <[email protected]>
---
arch/x86/kernel/apic/hw_nmi.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 5c4f952..ef4755d 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -49,7 +49,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
{
struct die_args *args = __args;
struct pt_regs *regs;
- int cpu = smp_processor_id();
+ int cpu;

switch (cmd) {
case DIE_NMI:
@@ -60,6 +60,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
}

regs = args->regs;
+ cpu = smp_processor_id();

if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
--
1.7.3.2

2010-11-18 15:57:33

by Frederic Weisbecker

Subject: Re: [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time

On Fri, Nov 12, 2010 at 09:50:55AM -0500, Don Zickus wrote:
> From: Dongdong Deng <[email protected]>
>
> The spin_lock_debug/rcu_cpu_stall detectors use
> trigger_all_cpu_backtrace() to dump cpu backtraces.
> It is therefore possible for trigger_all_cpu_backtrace()
> to be called at the same time on different CPUs, which
> triggers an 'unknown reason NMI' warning. The following case
> illustrates the problem:
>
>          CPU1                     CPU2           ...          CPU N
>   trigger_all_cpu_backtrace()
>    set "backtrace_mask" to cpu mask
>               |
>   generate NMI interrupts   generate NMI interrupts          ...
>                \                     |                       /
>                 \                    |                      /
>
> The "backtrace_mask" will be cleaned by the first NMI interrupt
> at nmi_watchdog_tick(), then the following NMI interrupts generated
> by other cpus's arch_trigger_all_cpu_backtrace() will be taken as
> unknown reason NMI interrupts.
>
> This patch uses a test_and_set to avoid the problem: if a
> trigger_all_cpu_backtrace() is already in progress,
> arch_trigger_all_cpu_backtrace() returns early instead of
> dumping a duplicate cpu backtrace.
>
> Signed-off-by: Dongdong Deng <[email protected]>
> Reviewed-by: Bruce Ashfield <[email protected]>
> CC: Thomas Gleixner <[email protected]>
> CC: Ingo Molnar <[email protected]>
> CC: "H. Peter Anvin" <[email protected]>
> CC: [email protected]
> CC: [email protected]
> Signed-off-by: Don Zickus <[email protected]>
> ---
> arch/x86/kernel/apic/hw_nmi.c | 24 ++++++++++++++++++++++++
> 1 files changed, 24 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> index f349647..d892896 100644
> --- a/arch/x86/kernel/apic/hw_nmi.c
> +++ b/arch/x86/kernel/apic/hw_nmi.c
> @@ -27,9 +27,27 @@ u64 hw_nmi_get_sample_period(void)
> /* For reliability, we're prepared to waste bits here. */
> static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
>
> +/* "in progress" flag of arch_trigger_all_cpu_backtrace */
> +static unsigned long backtrace_flag;
> +
> void arch_trigger_all_cpu_backtrace(void)
> {
> int i;
> + unsigned long flags;
> +
> + /*
> + * Have to disable irq here, as the
> + * arch_trigger_all_cpu_backtrace() could be
> + * triggered by "spin_lock()" with irqs on.
> + */
> + local_irq_save(flags);



I'm not sure I understand why you disable irqs here. It looks
safe with the test_and_set_bit already.



> +
> + if (test_and_set_bit(0, &backtrace_flag))
> + /*
> + * If there is already a trigger_all_cpu_backtrace() in progress
> + * (backtrace_flag == 1), don't output double cpu dump infos.
> + */
> + goto out_restore_irq;
>
> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>
> @@ -42,6 +60,12 @@ void arch_trigger_all_cpu_backtrace(void)
> break;
> mdelay(1);
> }
> +
> + clear_bit(0, &backtrace_flag);
> + smp_mb__after_clear_bit();
> +
> +out_restore_irq:
> + local_irq_restore(flags);
> }
>
> static int __kprobes
> --
> 1.7.2.3
>

2010-11-19 02:59:54

by DDD

Subject: Re: [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time

Frederic Weisbecker wrote:
> On Fri, Nov 12, 2010 at 09:50:55AM -0500, Don Zickus wrote:
>> From: Dongdong Deng <[email protected]>
>>
>> The spin_lock_debug/rcu_cpu_stall detectors use
>> trigger_all_cpu_backtrace() to dump cpu backtraces.
>> It is therefore possible for trigger_all_cpu_backtrace()
>> to be called at the same time on different CPUs, which
>> triggers an 'unknown reason NMI' warning. The following case
>> illustrates the problem:
>>
>>          CPU1                     CPU2           ...          CPU N
>>   trigger_all_cpu_backtrace()
>>    set "backtrace_mask" to cpu mask
>>               |
>>   generate NMI interrupts   generate NMI interrupts          ...
>>                \                     |                       /
>>                 \                    |                      /
>>
>> The "backtrace_mask" will be cleaned by the first NMI interrupt
>> at nmi_watchdog_tick(), then the following NMI interrupts generated
>> by other cpus's arch_trigger_all_cpu_backtrace() will be taken as
>> unknown reason NMI interrupts.
>>
>> This patch uses a test_and_set to avoid the problem: if a
>> trigger_all_cpu_backtrace() is already in progress,
>> arch_trigger_all_cpu_backtrace() returns early instead of
>> dumping a duplicate cpu backtrace.
>>
>> Signed-off-by: Dongdong Deng <[email protected]>
>> Reviewed-by: Bruce Ashfield <[email protected]>
>> CC: Thomas Gleixner <[email protected]>
>> CC: Ingo Molnar <[email protected]>
>> CC: "H. Peter Anvin" <[email protected]>
>> CC: [email protected]
>> CC: [email protected]
>> Signed-off-by: Don Zickus <[email protected]>
>> ---
>> arch/x86/kernel/apic/hw_nmi.c | 24 ++++++++++++++++++++++++
>> 1 files changed, 24 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
>> index f349647..d892896 100644
>> --- a/arch/x86/kernel/apic/hw_nmi.c
>> +++ b/arch/x86/kernel/apic/hw_nmi.c
>> @@ -27,9 +27,27 @@ u64 hw_nmi_get_sample_period(void)
>> /* For reliability, we're prepared to waste bits here. */
>> static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
>>
>> +/* "in progress" flag of arch_trigger_all_cpu_backtrace */
>> +static unsigned long backtrace_flag;
>> +
>> void arch_trigger_all_cpu_backtrace(void)
>> {
>> int i;
>> + unsigned long flags;
>> +
>> + /*
>> + * Have to disable irq here, as the
>> + * arch_trigger_all_cpu_backtrace() could be
>> + * triggered by "spin_lock()" with irqs on.
>> + */
>> + local_irq_save(flags);
>
>
>
> I'm not sure I understand why you disable irqs here. It looks
> safe with the test_and_set_bit already.

Hi Frederic,

Yep, now that we use test_and_set_bit to replace the spin_lock, the
irq-disabling ops can obviously be removed here.

I will redo this patch, and send it to Don.

Thanks,
Dongdong
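
Something like this (untested sketch, just dropping the
local_irq_save()/local_irq_restore() pair):

	void arch_trigger_all_cpu_backtrace(void)
	{
		int i;

		if (test_and_set_bit(0, &backtrace_flag))
			/*
			 * A trigger_all_cpu_backtrace() is already in
			 * progress, don't output double cpu dump infos.
			 */
			return;

		cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);

		printk(KERN_INFO "sending NMI to all CPUs:\n");
		apic->send_IPI_all(NMI_VECTOR);

		/* Wait for up to 10 seconds for all CPUs to do the backtrace */
		for (i = 0; i < 10 * 1000; i++) {
			if (cpumask_empty(to_cpumask(backtrace_mask)))
				break;
			mdelay(1);
		}

		clear_bit(0, &backtrace_flag);
		smp_mb__after_clear_bit();
	}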

>
>
>
>> +
>> + if (test_and_set_bit(0, &backtrace_flag))
>> + /*
>> + * If there is already a trigger_all_cpu_backtrace() in progress
>> + * (backtrace_flag == 1), don't output double cpu dump infos.
>> + */
>> + goto out_restore_irq;
>>
>> cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>>
>> @@ -42,6 +60,12 @@ void arch_trigger_all_cpu_backtrace(void)
>> break;
>> mdelay(1);
>> }
>> +
>> + clear_bit(0, &backtrace_flag);
>> + smp_mb__after_clear_bit();
>> +
>> +out_restore_irq:
>> + local_irq_restore(flags);
>> }
>>
>> static int __kprobes
>> --
>> 1.7.2.3
>>