2015-04-29 15:06:01

by Ethan Zhao

Subject: [PATCH] x86/mce: fix mce_restart() race with CPU hotplug operation

While testing CPU hotplug and MCE with the two scripts below,

script 1:

for i in {1..30}; do while :; do ((a=$RANDOM%160)); echo 0 >>
/sys/devices/system/cpu/cpu${i}/online; echo 1 >>
/sys/devices/system/cpu/cpu${i}/online; done & done

script 2:

while :; do for i in $(ls
/sys/devices/system/machinecheck/machinecheck*/check_interval); do echo 1 >>
$i; done; done

we got the following panic call trace:

------------[ cut here ]------------
kernel BUG at kernel/timer.c:929!
invalid opcode: 0000 [#1] SMP
Modules linked in: fuse tun coretemp acpi_cpufreq mperf freq_table
intel_powerclamp
kvm_intel kvm crc32c_intel ghash_clmulni_intel aesni_intel xts aes_x86_64
lrw gf128mul ablk_helper cryptd iTCO_wdt iTCO_vendor_support ses microcode
pcspkr enclosure i2c_i801 i2c_core lpc_ich i7core_edac mfd_core edac_core
shpchp ext3 mbcache jbd sd_mod crc_t10dif ixgbe ptp igb pps_core ahci libahci
dca megaraid_sas hwmon ipv6 autofs4
CPU 101
Pid: 0, comm: swapper/101 Tainted: G W 3.8.13
#2 Oracle Corporation Sun Fire X4800 M2 /
RIP: 0010:[<ffffffff8106bb92>] [<ffffffff8106bb92>] add_timer_on+0xe2/0xf0
RSP: 0000:ffff88303f843de8 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff88303f84c920 RCX: 000000011eb1d709
RDX: ffff88303f840000 RSI: 0000000000000065 RDI: ffff88303f84c920
RBP: ffff88303f843e18 R08: 000000011eb1d03a R09: ffff88303f843d68
R10: ffff88303f843d6c R11: 0000000000000006 R12: 00000000000007d0
R13: ffff883029710000 R14: 0000000000000065 R15: 0000000000000066
FS: 0000000000000000(0000) GS:ffff88303f840000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f7e1b7b7000 CR3: 0000000001886000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper/101 (pid: 0, threadinfo ffff881028f94000, task
ffff881028f92500)
Stack:
0000000001000c18 ffff88303f84c920 00000000000007d0 0000000000000100
ffffffff81030bb0 0000000000000066 ffff88303f843e38 ffffffff81030c44
ffff88303f88c920 ffff88303f88c920 ffff88303f843e70 ffffffff8106acbb
Call Trace:
<IRQ>
[<ffffffff81030bb0>] ? mce_cpu_restart+0x40/0x40
[<ffffffff81030c44>] mce_timer_fn+0x94/0x130
[<ffffffff8106acbb>] call_timer_fn+0x3b/0x110
[<ffffffff81030bb0>] ? mce_cpu_restart+0x40/0x40
[<ffffffff8106b8dd>] run_timer_softirq+0x1cd/0x2b0
[<ffffffff81063108>] __do_softirq+0xd8/0x210
[<ffffffff8144f7c0>] ? intel_pstate_timer_func+0x3a0/0x3a0
[<ffffffff8157e15c>] call_softirq+0x1c/0x30
[<ffffffff81017185>] do_softirq+0x65/0xa0
[<ffffffff810633b5>] irq_exit+0xa5/0xb0
[<ffffffff8157eede>] smp_apic_timer_interrupt+0x6e/0x9c
[<ffffffff8157da1d>] apic_timer_interrupt+0x6d/0x80
<EOI>
[<ffffffff81450131>] ? cpuidle_wrap_enter+0x41/0x80
[<ffffffff81450180>] cpuidle_enter_tk+0x10/0x20
[<ffffffff8144fef7>] cpuidle_idle_call+0xb7/0x1e0
[<ffffffff8101dd75>] cpu_idle+0xe5/0x140
[<ffffffff815610b4>] start_secondary+0x24e/0x250
Code: 90 00 4d 85 ff 74 22 49 8b 0f 0f 1f 80 00 00 00 00 49 8b 7f 08 49 83 c7
10 4c 89 e2 48 89 de ff d1 49 8b 0f 48 85 c9 75 e8 eb 97 <0f> 0b 66 66 66 2e
0f 1f 84 00 00 00 00 00 66 66 66 66 90 55 48
RIP [<ffffffff8106bb92>] add_timer_on+0xe2/0xf0
RSP <ffff88303f843de8>
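
For reference, kernel/timer.c:929 here is most likely the sanity check at
the top of add_timer_on(), sketched below from a 3.8-era tree (the exact
line number is not verified):

void add_timer_on(struct timer_list *timer, int cpu)
{
        struct tvec_base *base = per_cpu(tvec_bases, cpu);
        unsigned long flags;

        timer_stats_timer_set_start_info(timer);
        /*
         * Fires when the timer is already pending, e.g. re-armed by
         * mce_timer_fn() on one CPU while mce_restart() adds it again
         * from another.
         */
        BUG_ON(timer_pending(timer) || !timer->function);
        /* ... queue the timer on @cpu's timer wheel ... */
}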

This panic was caused by a race between mce_restart() and CPU hotplug:
mce_restart() iterates over cpu_online_mask while CPUs may be going offline
or coming back online. The iteration should be protected with the

get_online_cpus();
put_online_cpus();

pair, just as other subsystems do when they iterate over cpu_online_mask.

This bug affects at least the stable branches 3.8, 3.19 and 4.0 (others not
checked). The patch has been verified on the stable 4.0 branch.
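
A minimal sketch of that pattern, for illustration only (walk_online_cpus()
and do_something() are made-up names, not functions in this tree):

#include <linux/cpu.h>  /* get_online_cpus()/put_online_cpus() */

static void walk_online_cpus(void)
{
        int cpu;

        get_online_cpus();              /* hold CPU hotplug off while iterating */
        for_each_online_cpu(cpu)
                do_something(cpu);      /* hypothetical per-CPU work */
        put_online_cpus();              /* allow hotplug again */
}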

Reported-by: Tim Uglow <[email protected]>
Signed-off-by: Ethan Zhao <[email protected]>
---
arch/x86/kernel/cpu/mcheck/mce.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 3c036cb..fcc2794 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1338,8 +1338,10 @@ static void mce_timer_delete_all(void)
{
int cpu;

+ get_online_cpus();
for_each_online_cpu(cpu)
del_timer_sync(&per_cpu(mce_timer, cpu));
+ put_online_cpus();
}

static void mce_do_trigger(struct work_struct *work)
@@ -2085,7 +2087,9 @@ static void mce_cpu_restart(void *data)
static void mce_restart(void)
{
mce_timer_delete_all();
+ get_online_cpus();
on_each_cpu(mce_cpu_restart, NULL, 1);
+ put_online_cpus();
}

/* Toggle features for corrected errors */
--
1.8.3.1


2015-04-30 16:29:54

by Borislav Petkov

Subject: Re: [PATCH] x86/mce: fix mce_restart() race with CPU hotplug operation

On Thu, Apr 30, 2015 at 12:04:53AM +0900, Ethan Zhao wrote:
> While testing CPU hotplug and MCE with the two scripts below,
>
> script 1:
>
> for i in {1..30}; do while :; do ((a=$RANDOM%160)); echo 0 >>
> /sys/devices/system/cpu/cpu${i}/online; echo 1 >>
> /sys/devices/system/cpu/cpu${i}/online; done & done
>
> script 2:
>
> while :; do for i in $(ls
> /sys/devices/system/machinecheck/machinecheck*/check_interval); do echo 1 >>
> $i; done; done

For the record, it is a public secret that CPU hotplug is broken. IOW,
you're wasting your time with those senseless pounder tests but ok.

...

> ---
> arch/x86/kernel/cpu/mcheck/mce.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
> index 3c036cb..fcc2794 100644
> --- a/arch/x86/kernel/cpu/mcheck/mce.c
> +++ b/arch/x86/kernel/cpu/mcheck/mce.c
> @@ -1338,8 +1338,10 @@ static void mce_timer_delete_all(void)
> {
> int cpu;
>
> + get_online_cpus();
> for_each_online_cpu(cpu)
> del_timer_sync(&per_cpu(mce_timer, cpu));
> + put_online_cpus();
> }
>
> static void mce_do_trigger(struct work_struct *work)
> @@ -2085,7 +2087,9 @@ static void mce_cpu_restart(void *data)
> static void mce_restart(void)
> {
> mce_timer_delete_all();
> + get_online_cpus();
> on_each_cpu(mce_cpu_restart, NULL, 1);
> + put_online_cpus();

With your patch applied I get on 4.1-rc1+:

---
[ 41.364909] kvm: disabling virtualization on CPU1
[ 41.371083] smpboot: CPU 1 is now offline
[ 41.381190] x86: Booting SMP configuration:
[ 41.385405] smpboot: Booting Node 0 Processor 1 APIC 0x2
[ 41.402901] kvm: enabling virtualization on CPU1
[ 41.440944] kvm: disabling virtualization on CPU1
[ 41.447010] smpboot: CPU 1 is now offline
[ 41.486082] kvm: disabling virtualization on CPU6
[ 41.491827] smpboot: CPU 6 is now offline
[ 41.497521] smpboot: Booting Node 0 Processor 6 APIC 0x5
[ 41.514983] kvm: enabling virtualization on CPU6
[ 41.561643] kvm: disabling virtualization on CPU6
[ 41.566848] smpboot: CPU 6 is now offline
[ 41.572606] smpboot: Booting Node 0 Processor 6 APIC 0x5
[ 41.590049] kvm: enabling virtualization on CPU6
[ 41.636817] kvm: disabling virtualization on CPU6
[ 41.642575] smpboot: CPU 6 is now offline
[ 41.676812] kvm: disabling virtualization on CPU7
[ 41.682429] smpboot: CPU 7 is now offline
[ 41.687974] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 41.705416] kvm: enabling virtualization on CPU7
[ 41.752739] kvm: disabling virtualization on CPU7
[ 41.758455] smpboot: CPU 7 is now offline
[ 41.764089] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 41.781561] kvm: enabling virtualization on CPU7
[ 41.831610] kvm: disabling virtualization on CPU7
[ 41.837280] smpboot: CPU 7 is now offline

[ 41.843341] ======================================================
[ 41.849561] [ INFO: possible circular locking dependency detected ]
[ 41.855883] 4.1.0-rc1+ #2 Not tainted
[ 41.859564] -------------------------------------------------------
[ 41.865871] script2.sh/2071 is trying to acquire lock:
[ 41.871044] (cpu_hotplug.lock){++++++}, at: [<ffffffff81066c02>] get_online_cpus+0x32/0x80
[ 41.879521]
but task is already holding lock:
[ 41.885392] (s_active#121){++++.+}, at: [<ffffffff812944de>] kernfs_fop_write+0x6e/0x1a0
[ 41.893695]
which lock already depends on the new lock.

[ 41.901925]
the existing dependency chain (in reverse order) is:
[ 41.909465]
-> #2 (s_active#121){++++.+}:
[ 41.913739] [<ffffffff810c6e11>] lock_acquire+0xd1/0x2b0
[ 41.919718] [<ffffffff81292088>] __kernfs_remove+0x228/0x300
[ 41.926046] [<ffffffff81293269>] kernfs_remove_by_name_ns+0x49/0xb0
[ 41.932976] [<ffffffff81294de5>] sysfs_remove_file_ns+0x15/0x20
[ 41.939552] [<ffffffff81600939>] device_remove_file+0x19/0x20
[ 41.945968] [<ffffffff810308b4>] mce_device_remove+0x54/0xd0
[ 41.952284] [<ffffffff81030b89>] mce_cpu_callback+0x69/0x120
[ 41.958608] [<ffffffff8108fb46>] notifier_call_chain+0x66/0x90
[ 41.965124] [<ffffffff8108fb7e>] __raw_notifier_call_chain+0xe/0x10
[ 41.972053] [<ffffffff81066d33>] cpu_notify+0x23/0x50
[ 41.977761] [<ffffffff81066dfe>] cpu_notify_nofail+0xe/0x20
[ 41.983986] [<ffffffff818961d6>] _cpu_down+0x1b6/0x2d0
[ 41.989787] [<ffffffff81896326>] cpu_down+0x36/0x50
[ 41.995324] [<ffffffff816087d4>] cpu_subsys_offline+0x14/0x20
[ 42.001734] [<ffffffff816030c5>] device_offline+0x95/0xc0
[ 42.007797] [<ffffffff816031cd>] online_store+0x3d/0x90
[ 42.013673] [<ffffffff81600608>] dev_attr_store+0x18/0x30
[ 42.019735] [<ffffffff81294ca9>] sysfs_kf_write+0x49/0x60
[ 42.025796] [<ffffffff812945b0>] kernfs_fop_write+0x140/0x1a0
[ 42.032211] [<ffffffff8120d878>] __vfs_write+0x28/0xf0
[ 42.038013] [<ffffffff8120df89>] vfs_write+0xa9/0x1b0
[ 42.043715] [<ffffffff8120ed79>] SyS_write+0x49/0xb0
[ 42.049334] [<ffffffff818aacf2>] system_call_fastpath+0x16/0x7a
[ 42.055926]
-> #1 (cpu_hotplug.lock#2){+.+.+.}:
[ 42.060705] [<ffffffff810c6e11>] lock_acquire+0xd1/0x2b0
[ 42.066674] [<ffffffff818a5771>] mutex_lock_nested+0x61/0x460
[ 42.073087] [<ffffffff81066e8b>] cpu_hotplug_begin+0x7b/0xd0
[ 42.079400] [<ffffffff81066f4b>] _cpu_up+0x2b/0x160
[ 42.084927] [<ffffffff8106710a>] cpu_up+0x8a/0xb0
[ 42.090285] [<ffffffff81fac845>] smp_init+0x86/0x88
[ 42.095818] [<ffffffff81f8e113>] kernel_init_freeable+0x160/0x27d
[ 42.102577] [<ffffffff8189540e>] kernel_init+0xe/0xf0
[ 42.108455] [<ffffffff818ab162>] ret_from_fork+0x42/0x70
[ 42.114586]
-> #0 (cpu_hotplug.lock){++++++}:
[ 42.119497] [<ffffffff810c60c6>] __lock_acquire+0x1db6/0x2040
[ 42.126062] [<ffffffff810c6e11>] lock_acquire+0xd1/0x2b0
[ 42.132182] [<ffffffff81066c28>] get_online_cpus+0x58/0x80
[ 42.138476] [<ffffffff8102fc23>] mce_timer_delete_all+0x23/0x70
[ 42.145215] [<ffffffff8102fd0e>] mce_restart+0xe/0x30
[ 42.151071] [<ffffffff8102fddb>] store_int_with_restart+0x1b/0x30
[ 42.157975] [<ffffffff81600608>] dev_attr_store+0x18/0x30
[ 42.164179] [<ffffffff81294ca9>] sysfs_kf_write+0x49/0x60
[ 42.170383] [<ffffffff812945b0>] kernfs_fop_write+0x140/0x1a0
[ 42.176938] [<ffffffff8120d878>] __vfs_write+0x28/0xf0
[ 42.182879] [<ffffffff8120df89>] vfs_write+0xa9/0x1b0
[ 42.188733] [<ffffffff8120ed79>] SyS_write+0x49/0xb0
[ 42.194498] [<ffffffff818aacf2>] system_call_fastpath+0x16/0x7a
[ 42.201230]
other info that might help us debug this:

[ 42.210268] Chain exists of:
cpu_hotplug.lock --> cpu_hotplug.lock#2 --> s_active#121

[ 42.219214] Possible unsafe locking scenario:

[ 42.225875] CPU0 CPU1
[ 42.230578] ---- ----
[ 42.235283] lock(s_active#121);
[ 42.238791] lock(cpu_hotplug.lock#2);
[ 42.245362] lock(s_active#121);
[ 42.251401] lock(cpu_hotplug.lock);
[ 42.255250]
*** DEADLOCK ***

[ 42.262008] 3 locks held by script2.sh/2071:
[ 42.266446] #0: (sb_writers#6){.+.+.+}, at: [<ffffffff8120e043>] vfs_write+0x163/0x1b0
[ 42.274792] #1: (&of->mutex){+.+.+.}, at: [<ffffffff812944d6>] kernfs_fop_write+0x66/0x1a0
[ 42.283485] #2: (s_active#121){++++.+}, at: [<ffffffff812944de>] kernfs_fop_write+0x6e/0x1a0
[ 42.292353]
stack backtrace:
[ 42.297084] CPU: 3 PID: 2071 Comm: script2.sh Not tainted 4.1.0-rc1+ #2
[ 42.303880] Hardware name: Dell Inc. Precision T3600/0PTTT9, BIOS A13 05/11/2014
[ 42.311474] ffffffff82847b60 ffff88043934fb68 ffffffff818a08e0 0000000000000011
[ 42.319092] ffffffff828479b0 ffff88043934fbb8 ffffffff810c2a9d 000000000000aae0
[ 42.326712] ffff88043934fc28 0000000000000002 ffff8800a82cc0b8 0000000000000002
[ 42.334332] Call Trace:
[ 42.336939] [<ffffffff818a08e0>] dump_stack+0x4f/0x7b
[ 42.342251] [<ffffffff810c2a9d>] print_circular_bug+0x1cd/0x230
[ 42.348436] [<ffffffff810c60c6>] __lock_acquire+0x1db6/0x2040
[ 42.354455] [<ffffffff810c34d9>] ? check_irq_usage+0x89/0xc0
[ 42.360385] [<ffffffff810c6e11>] lock_acquire+0xd1/0x2b0
[ 42.365958] [<ffffffff81066c02>] ? get_online_cpus+0x32/0x80
[ 42.371880] [<ffffffff81066c28>] get_online_cpus+0x58/0x80
[ 42.377626] [<ffffffff81066c02>] ? get_online_cpus+0x32/0x80
[ 42.383556] [<ffffffff813f94ee>] ? simple_strtol+0xe/0x20
[ 42.389215] [<ffffffff8102fc23>] mce_timer_delete_all+0x23/0x70
[ 42.395401] [<ffffffff8102fd0e>] mce_restart+0xe/0x30
[ 42.400721] [<ffffffff8102fddb>] store_int_with_restart+0x1b/0x30
[ 42.407080] [<ffffffff81600608>] dev_attr_store+0x18/0x30
[ 42.412739] [<ffffffff81294ca9>] sysfs_kf_write+0x49/0x60
[ 42.418396] [<ffffffff812945b0>] kernfs_fop_write+0x140/0x1a0
[ 42.424405] [<ffffffff8120d878>] __vfs_write+0x28/0xf0
[ 42.429803] [<ffffffff8120e043>] ? vfs_write+0x163/0x1b0
[ 42.435385] [<ffffffff81188b94>] ? context_tracking_exit+0x64/0x1c0
[ 42.441922] [<ffffffff8120df89>] vfs_write+0xa9/0x1b0
[ 42.447234] [<ffffffff8120ed79>] SyS_write+0x49/0xb0
[ 42.452459] [<ffffffff818aacf2>] system_call_fastpath+0x16/0x7a
[ 42.459043] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 42.476980] kvm: enabling virtualization on CPU7
[ 42.525076] kvm: disabling virtualization on CPU7
[ 42.530297] smpboot: CPU 7 is now offline
[ 42.535071] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 42.552746] kvm: enabling virtualization on CPU7
[ 42.600174] kvm: disabling virtualization on CPU7
[ 42.605374] smpboot: CPU 7 is now offline
[ 42.610194] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 42.627827] kvm: enabling virtualization on CPU7
[ 42.681186] kvm: disabling virtualization on CPU7
[ 42.687093] smpboot: CPU 7 is now offline
[ 42.692417] smpboot: Booting Node 0 Processor 7 APIC 0x7
[ 42.710046] kvm: enabling virtualization on CPU7
[ 42.751031] kvm: disabling virtualization on CPU7
[ 42.756130] smpboot: CPU 7 is now offline
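
To make the reported cycle concrete, here is a minimal userspace sketch of
the same ABBA inversion (the mutex names are borrowed from the lockdep
report; the real kernel locks are not plain mutexes, this only models the
lock ordering):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t s_active    = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpu_hotplug = PTHREAD_MUTEX_INITIALIZER;

/* ~ kernfs_fop_write() -> store_int_with_restart() -> mce_restart() */
static void *sysfs_write_path(void *arg)
{
        pthread_mutex_lock(&s_active);          /* kernfs active reference */
        sleep(1);                               /* widen the race window */
        pthread_mutex_lock(&cpu_hotplug);       /* get_online_cpus() */
        puts("sysfs path got both locks");
        pthread_mutex_unlock(&cpu_hotplug);
        pthread_mutex_unlock(&s_active);
        return NULL;
}

/* ~ cpu_down() -> mce_cpu_callback() -> mce_device_remove() */
static void *hotplug_path(void *arg)
{
        pthread_mutex_lock(&cpu_hotplug);       /* cpu_hotplug_begin() */
        sleep(1);
        pthread_mutex_lock(&s_active);          /* sysfs_remove_file_ns() */
        puts("hotplug path got both locks");
        pthread_mutex_unlock(&s_active);
        pthread_mutex_unlock(&cpu_hotplug);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, sysfs_write_path, NULL);
        pthread_create(&b, NULL, hotplug_path, NULL);
        pthread_join(a, NULL);  /* both threads wedge; this never returns */
        pthread_join(b, NULL);
        return 0;
}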

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.
--

2015-05-01 05:54:43

by ethan zhao

Subject: Re: [PATCH] x86/mce: fix mce_restart() race with CPU hotplug operation

On Fri, May 1, 2015 at 12:29 AM, Borislav Petkov <[email protected]> wrote:
> On Thu, Apr 30, 2015 at 12:04:53AM +0900, Ethan Zhao wrote:
>> ...
>
> With your patch applied I get on 4.1-rc1+:

I didn't test it with 4.1-rc1+ yet.
Let us check it.

Thanks,
Ethan

2015-05-01 06:03:52

by ethan zhao

Subject: Re: [PATCH] x86/mce: fix mce_restart() race with CPU hotplug operation

On Fri, May 1, 2015 at 12:29 AM, Borislav Petkov <[email protected]> wrote:
> On Thu, Apr 30, 2015 at 12:04:53AM +0900, Ethan Zhao wrote:
>> ...
>
> For the record, it is a public secret that CPU hotplug is broken. IOW,
> you're wasting your time with those senseless pounder tests but ok.

:<, someone else is stress-testing CPU hotplug; it does seem fragile.
My job is to keep the system up, not to let it panic to the ground.

Thanks,
Ethan


2015-05-03 09:38:17

by Borislav Petkov

Subject: Re: [PATCH] x86/mce: fix mce_restart() race with CPU hotplug operation

On Fri, May 01, 2015 at 02:03:48PM +0800, Ethan Zhao wrote:
> > For the record, it is a public secret that CPU hotplug is broken. IOW,
> > you're wasting your time with those senseless pounder tests but ok.
>
> :<, someone else is stress-testing CPU hotplug; it does seem fragile.
> My job is to keep the system up, not to let it panic to the ground.

Please forward my message to that someone else then. The whole world
knows CPU hotplug is fragile and there are people working on improving
that. There's no need for you guys to pound on it, we know it is broken.
I'm sure you can find a lot of other things in the kernel to pound on.
Things which make a lot more sense than CPU hotplug.

Thanks.

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.
--

2015-05-04 00:35:19

by Ethan Zhao

Subject: Re: [PATCH] x86/mce: fix mce_restart() race with CPU hotplug operation

Borislav,
On 2015/5/3 17:37, Borislav Petkov wrote:
> On Fri, May 01, 2015 at 02:03:48PM +0800, Ethan Zhao wrote:
>>> For the record, it is a public secret that CPU hotplug is broken. IOW,
>>> you're wasting your time with those senseless pounder tests but ok.
>> :<, someone else is stress-testing CPU hotplug; it does seem fragile.
>> My job is to keep the system up, not to let it panic to the ground.
> Please forward my message to that someone else then. The whole world
Got it.

Thanks,
Ethan
> knows CPU hotplug is fragile and there are people working on improving
> that. There's no need for you guys to pound on it, we know it is broken.
> I'm sure you can find a lot of other things in the kernel to pound on.
> Things which make a lot more sense than CPU hotplug.
>
> Thanks.
>