Hi,
I see the following traceback (or similar tracebacks) once in a while
during my boot tests. In this specific case it is with mainline
(v5.3-rc1-195-g3ea54d9b0d65), but I have seen it with other branches
as well. This isn't a new problem; I have seen it for quite some time.
There is no specific action required to make it appear; just running
reboot loops is sufficient. The problem doesn't happen a lot;
non-scientifically I would say I see it maybe once every few hundred
boots.
No specific action requested; this is just informational.
A complete log is at:
https://kerneltests.org/builders/qemu-x86-master/builds/1285/steps/qemubuildcommand/logs/stdio
Guenter
---
[ 61.248329] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 61.268277] e1000e: EEE TX LPI TIMER: 00000000
[ 61.311435] reboot: Restarting system
[ 61.312321] reboot: machine restart
[ 61.342193] ------------[ cut here ]------------
[ 61.342660] sched: Unexpected reschedule of offline CPU#2!
ILLOPC: ce241f83: 0f 0b
[ 61.344323] WARNING: CPU: 1 PID: 15 at arch/x86/kernel/smp.c:126 native_smp_send_reschedule+0x33/0x40
[ 61.344836] Modules linked in:
[ 61.345694] CPU: 1 PID: 15 Comm: ksoftirqd/1 Not tainted 5.3.0-rc1+ #1
[ 61.345998] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
[ 61.346569] EIP: native_smp_send_reschedule+0x33/0x40
[ 61.347099] Code: cf 73 1c 8b 15 60 54 2b cf 8b 4a 18 ba fd 00 00 00 e8 05 65 c7 00 c9 c3 8d b4 26 00 00 00 00 50 68 04 ca 1a cf e8 fe e3 01 00 <0f> 0b 58 5a c9 c3 8d b4 26 00 00 00 00 55 89 e5 56 53 83 ec 0c 65
[ 61.347726] EAX: 0000002e EBX: 00000002 ECX: 00000000 EDX: cdd64140
[ 61.347977] ESI: 00000002 EDI: 00000000 EBP: cdd73c88 ESP: cdd73c80
[ 61.348234] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00000096
[ 61.348514] CR0: 80050033 CR2: b7ee7048 CR3: 0c28f000 CR4: 000006d0
[ 61.348866] Call Trace:
[ 61.349392] kick_ilb+0x90/0xa0
[ 61.349629] trigger_load_balance+0xf0/0x5c0
[ 61.349859] ? check_preempt_wakeup+0x1b0/0x1b0
[ 61.350057] scheduler_tick+0xa7/0xd0
[ 61.350266] update_process_times+0x4a/0x60
[ 61.350467] tick_sched_handle+0x3e/0x50
[ 61.350650] tick_sched_timer+0x37/0x90
[ 61.350847] __hrtimer_run_queues+0xf7/0x440
[ 61.351056] ? tick_sched_do_timer+0x70/0x70
[ 61.351281] hrtimer_interrupt+0x10e/0x260
[ 61.351541] smp_apic_timer_interrupt+0x68/0x210
[ 61.351750] apic_timer_interrupt+0x106/0x10c
[ 61.352040] EIP: _raw_spin_unlock_irqrestore+0x47/0x50
[ 61.352254] Code: 66 40 ff f6 c7 02 75 1b 53 9d e8 c4 67 49 ff 64 ff 0d 84 27 50 cf 5b 5e 5d c3 8d b4 26 00 00 00 00 66 90 e8 ab 69 49 ff 53 9d <eb> e3 8d b4 26 00 00 00 00 55 64 ff 05 84 27 50 cf 89 e5 53 89 c3
[ 61.352810] EAX: cdd64140 EBX: 00000282 ECX: 00000003 EDX: 00000002
[ 61.353041] ESI: cdc01940 EDI: 00000001 EBP: cdd73e08 ESP: cdd73e00
[ 61.353273] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00000282
[ 61.353705] ? _raw_spin_unlock_irqrestore+0x47/0x50
[ 61.362142] free_debug_processing+0x199/0x220
[ 61.362413] __slab_free+0x220/0x3b0
[ 61.362599] ? irq_kobj_release+0x1c/0x20
[ 61.362845] ? kfree+0x1ad/0x270
[ 61.363002] ? kfree+0x1ad/0x270
[ 61.363162] kfree+0x264/0x270
[ 61.363305] ? kfree+0x264/0x270
[ 61.363458] ? irq_kobj_release+0x1c/0x20
[ 61.363624] ? irq_kobj_release+0x1c/0x20
[ 61.363824] irq_kobj_release+0x1c/0x20
[ 61.364018] kobject_put+0x58/0xc0
[ 61.364211] ? hwirq_show+0x50/0x50
[ 61.364439] delayed_free_desc+0xb/0x10
[ 61.364621] rcu_core+0x288/0xb50
[ 61.364805] ? __do_softirq+0x7e/0x3bb
[ 61.365042] rcu_core_si+0x8/0x10
[ 61.365209] __do_softirq+0xa9/0x3bb
[ 61.365445] run_ksoftirqd+0x25/0x50
[ 61.365615] smpboot_thread_fn+0xef/0x1d0
[ 61.365834] kthread+0xf2/0x110
[ 61.365986] ? sort_range+0x20/0x20
[ 61.366156] ? kthread_create_on_node+0x20/0x20
[ 61.366360] ret_from_fork+0x2e/0x38
[ 61.366818] irq event stamp: 1267
[ 61.367115] hardirqs last enabled at (1266): [<ceeb37f5>] _raw_spin_unlock_irqrestore+0x45/0x50
[ 61.367448] hardirqs last disabled at (1267): [<ce20178a>] trace_hardirqs_off_thunk+0xc/0x12
[ 61.367769] softirqs last enabled at (1232): [<ceeb7a45>] __do_softirq+0x2c5/0x3bb
[ 61.368057] softirqs last disabled at (1237): [<ce267605>] run_ksoftirqd+0x25/0x50
[ 61.368389] ---[ end trace 3465d631a21844b8 ]---
On Sat, Jul 27, 2019 at 09:44:50AM -0700, Guenter Roeck wrote:
> [ 61.348866] Call Trace:
> [ 61.349392] kick_ilb+0x90/0xa0
> [ 61.349629] trigger_load_balance+0xf0/0x5c0
> [ 61.349859] ? check_preempt_wakeup+0x1b0/0x1b0
> [ 61.350057] scheduler_tick+0xa7/0xd0
kick_ilb() iterates nohz.idle_cpus_mask to find itself an idle_cpu().
nohz.idle_cpus_mask is set from nohz_balance_enter_idle() and cleared from
nohz_balance_exit_idle(). nohz_balance_enter_idle() is called from
__tick_nohz_idle_stop_tick() when entering nohz idle; this includes the
cpu_is_offline() clause of the idle loop.
However, when offline, cpu_active() should also be false, and this
function should no-op.
Then we have nohz_balance_exit_idle() from sched_cpu_dying(), which
should explicitly clear the CPU from the mask when going offline.
So I'm not immediately seeing how we can select an offline CPU to kick.
On Mon, 29 Jul 2019, Peter Zijlstra wrote:
> On Sat, Jul 27, 2019 at 09:44:50AM -0700, Guenter Roeck wrote:
> > [ 61.348866] Call Trace:
> > [ 61.349392] kick_ilb+0x90/0xa0
> > [ 61.349629] trigger_load_balance+0xf0/0x5c0
> > [ 61.349859] ? check_preempt_wakeup+0x1b0/0x1b0
> > [ 61.350057] scheduler_tick+0xa7/0xd0
>
> kick_ilb() iterates nohz.idle_cpus_mask to find itself an idle_cpu().
>
> nohz.idle_cpus_mask is set from nohz_balance_enter_idle() and cleared from
> nohz_balance_exit_idle(). nohz_balance_enter_idle() is called from
> __tick_nohz_idle_stop_tick() when entering nohz idle; this includes the
> cpu_is_offline() clause of the idle loop.
>
> However, when offline, cpu_active() should also be false, and this
> function should no-op.
Ha. That reboot mess is not clearing cpu active as it's not going through
the regular cpu hotplug path. It's using the reboot IPI, which 'stops' the
cpus dead in their tracks after clearing cpu online....
Thanks,
tglx
On Mon, Jul 29, 2019 at 11:58:24AM +0200, Thomas Gleixner wrote:
> Ha. That reboot mess is not clearing cpu active as it's not going through
> the regular cpu hotplug path. It's using reboot IPI which 'stops' the cpus
> dead in their tracks after clearing cpu online....
$string-of-cock-compliant-curses
What a trainwreck...
So if it doesn't play by the normal rules, how does it expect to work?
So what do we do? 'Fix' reboot or extend the rules?
On Mon, 29 Jul 2019, Peter Zijlstra wrote:
> On Mon, Jul 29, 2019 at 11:58:24AM +0200, Thomas Gleixner wrote:
> >
> > Ha. That reboot mess is not clearing cpu active as it's not going through
> > the regular cpu hotplug path. It's using reboot IPI which 'stops' the cpus
> > dead in their tracks after clearing cpu online....
>
> $string-of-cock-compliant-curses
>
> What a trainwreck...
>
> So if it doesn't play by the normal rules, how does it expect to work?
>
> So what do we do? 'Fix' reboot or extend the rules?
Reboot has two modes:
- Regular reboot initiated from user space
- Panic reboot
For the regular reboot we can make it go through proper hotplug, for the
panic case not so much.
thanks,
tglx
On Mon, Jul 29, 2019 at 12:38:30PM +0200, Thomas Gleixner wrote:
> Reboot has two modes:
>
> - Regular reboot initiated from user space
>
> - Panic reboot
>
> For the regular reboot we can make it go through proper hotplug,
That seems sensible.
> for the panic case not so much.
It's panic, shit has already hit the fan; one or two more pieces shouldn't
be something anybody cares about.
On Mon, Jul 29, 2019 at 12:47:45PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 29, 2019 at 12:38:30PM +0200, Thomas Gleixner wrote:
> > Reboot has two modes:
> >
> > - Regular reboot initiated from user space
> >
> > - Panic reboot
> >
> > For the regular reboot we can make it go through proper hotplug,
>
> That seems sensible.
>
> > for the panic case not so much.
>
> It's panic, shit has already hit the fan; one or two more pieces shouldn't
> be something anybody cares about.
>
Some more digging shows that this happens a lot with Google GCE instances,
typically after a panic. The problem with that, if I understand correctly,
is that it may prevent coredumps from being written. So, while of course
the panic is what needs to be fixed, it is still quite annoying, and it
would help if this can be fixed for panic handling as well.
How about the patch suggested by Hillf Danton? Would that help for the
panic case?
Thanks,
Guenter
On Mon, 29 Jul 2019, Guenter Roeck wrote:
> Some more digging shows that this happens a lot with Google GCE instances,
> typically after a panic. The problem with that, if I understand correctly,
> is that it may prevent coredumps from being written. So, while of course
> the panic is what needs to be fixed, it is still quite annoying, and it
> would help if this can be fixed for panic handling as well.
>
> How about the patch suggested by Hillf Danton? Would that help for the
> panic case?
I have no idea what that patch looks like, but the quick hack is below.
Thanks,
tglx
8<---------------
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 75fea0d48c0e..625627b1457c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -601,6 +601,7 @@ void stop_this_cpu(void *dummy)
/*
* Remove this CPU:
*/
+ set_cpu_active(smp_processor_id(), false);
set_cpu_online(smp_processor_id(), false);
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
On Fri, Aug 16, 2019 at 12:22:22PM +0200, Thomas Gleixner wrote:
> I have no idea what that patch looks like, but the quick hack is below.
>
> Thanks,
>
> tglx
>
> 8<---------------
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 75fea0d48c0e..625627b1457c 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -601,6 +601,7 @@ void stop_this_cpu(void *dummy)
> /*
> * Remove this CPU:
> */
> + set_cpu_active(smp_processor_id(), false);
> set_cpu_online(smp_processor_id(), false);
> disable_local_APIC();
> mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
>
No luck. The problem is still seen with this patch applied on top of
the mainline kernel (commit a69e90512d9def6).
Guenter
---
[ 22.315834] e1000e: EEE TX LPI TIMER: 00000000
[ 22.323624] reboot: Restarting system
[ 22.324260] reboot: machine restart
[ 22.325885] ------------[ cut here ]------------
[ 22.330425] sched: Unexpected reschedule of offline CPU#3!
ILLOPC: ffffffffb524403f: 0f 0b
[ 22.330926] WARNING: CPU: 1 PID: 0 at arch/x86/kernel/smp.c:126 native_smp_send_reschedule+0x2f/0x40
[ 22.331238] Modules linked in:
[ 22.331427] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.3.0-rc4+ #1
[ 22.331626] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
[ 22.331971] RIP: 0010:native_smp_send_reschedule+0x2f/0x40
[ 22.332164] Code: 05 de 81 95 01 73 15 48 8b 05 bd fa 61 01 be fd 00 00 00 48 8b 40 30 e9 6f d0 fb 00 89 fe 48 c7 c7 88 da 74 b6 e8 7f 6c 02 00 <0f> 0b c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 55 53 48 83 ec
[ 22.332705] RSP: 0018:ffffa457800d0d68 EFLAGS: 00000086
[ 22.332884] RAX: 0000000000000000 RBX: ffff9a8cbb9ba000 RCX: 0000000000000103
[ 22.333109] RDX: 0000000080000103 RSI: 0000000000000000 RDI: 00000000ffffffff
[ 22.333327] RBP: ffffa457800d0e90 R08: 0000000000000000 R09: 0000000000000000
[ 22.333546] R10: 0000000000000000 R11: ffffa457800d0c10 R12: 000000000000a1b9
[ 22.333767] R13: ffff9a8cbae26030 R14: ffff9a8cbae25f80 R15: ffff9a8cbb83a000
[ 22.334045] FS: 0000000000000000(0000) GS:ffff9a8cbb880000(0000) knlGS:0000000000000000
[ 22.334321] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 22.334520] CR2: 00007fba66a35010 CR3: 0000000176cd6000 CR4: 00000000007406e0
[ 22.334794] PKRU: 55555554
[ 22.334915] Call Trace:
[ 22.335062] <IRQ>
[ 22.335148] check_preempt_curr+0x7f/0xc0
[ 22.335295] load_balance+0x589/0xc50
[ 22.335513] rebalance_domains+0x30d/0x410
[ 22.335684] _nohz_idle_balance+0x1bd/0x200
[ 22.335854] __do_softirq+0xe5/0x478
[ 22.336023] irq_exit+0xa9/0xc0
[ 22.336163] reschedule_interrupt+0xf/0x20
[ 22.336317] </IRQ>
[ 22.336409] RIP: 0010:default_idle+0x23/0x180
[ 22.336561] Code: ff 90 90 90 90 90 90 41 55 41 54 55 53 e8 45 75 7c ff 0f 1f 44 00 00 e8 0b aa 40 ff e9 07 00 00 00 0f 00 2d 31 94 4a 00 fb f4 <e8> 28 75 7c ff 89 c5 0f 1f 44 00 00 5b 5d 41 5c 41 5d c3 65 8b 05
[ 22.337102] RSP: 0018:ffffa4578006bec0 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff02
[ 22.337342] RAX: ffff9a8cbae23fc0 RBX: 0000000000000001 RCX: 0000000000000001
[ 22.337561] RDX: 0000000000000046 RSI: 0000000000000006 RDI: ffffffffb6852dd6
[ 22.337780] RBP: ffffffffb6b9c1f8 R08: 0000000000000001 R09: 0000000000000000
[ 22.337996] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 22.338229] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 22.338501] do_idle+0x1df/0x260
[ 22.338588] ? _raw_spin_unlock_irqrestore+0x4c/0x60
[ 22.338706] cpu_startup_entry+0x14/0x20
[ 22.338793] start_secondary+0x151/0x180
[ 22.338885] secondary_startup_64+0xa4/0xb0
[ 22.339060] irq event stamp: 61631
[ 22.339176] hardirqs last enabled at (61630): [<ffffffffb5f5c6dc>] _raw_spin_unlock_irqrestore+0x4c/0x60
[ 22.339373] hardirqs last disabled at (61631): [<ffffffffb5f5c46d>] _raw_spin_lock_irqsave+0xd/0x50
[ 22.339568] softirqs last enabled at (61626): [<ffffffffb5272bc8>] irq_enter+0x58/0x60
[ 22.339726] softirqs last disabled at (61627): [<ffffffffb5272c79>] irq_exit+0xa9/0xc0
[ 22.339897] ---[ end trace 8ad53445879058cc ]---
[ 22.340384] ACPI MEMORY or I/O RESET_REG.
On Fri, 16 Aug 2019, Guenter Roeck wrote:
> On Fri, Aug 16, 2019 at 12:22:22PM +0200, Thomas Gleixner wrote:
> > diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> > index 75fea0d48c0e..625627b1457c 100644
> > --- a/arch/x86/kernel/process.c
> > +++ b/arch/x86/kernel/process.c
> > @@ -601,6 +601,7 @@ void stop_this_cpu(void *dummy)
> > /*
> > * Remove this CPU:
> > */
> > + set_cpu_active(smp_processor_id(), false);
> > set_cpu_online(smp_processor_id(), false);
> > disable_local_APIC();
> > mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
> >
> No luck. The problem is still seen with this patch applied on top of
> the mainline kernel (commit a69e90512d9def6).
Yeah, that was a bit too naive....
We actually need to do the full cpuhotplug dance for a regular reboot. In
the panic case, there is nothing we can do about it. I'll have a look tomorrow.
Thanks,
tglx
Was this ever resolved, and if so, can someone please point me to the
patches? I started digging a bit but could not yet find how that thread
continued.
I am seeing a similar or maybe the same problem on 4.19.192 with the
ipipe patch from the Xenomai project applied.
regards,
Henning
[Henning, don't top-post ;)]
On 27.07.21 10:00, Henning Schild via Xenomai wrote:
> Was this ever resolved and if so can someone please point me to the
> patches? I started digging a bit but could not yet find how that
> continued.
>
> I am seeing similar or maybe the same problem on 4.19.192 with the
> ipipe patch from the xenomai project applied.
>
Before blaming the usual suspects, I have a general ordering question on
mainline below.
> regards,
> Henning
>
> On Sat, 17 Aug 2019 22:21:48 +0200, Thomas Gleixner <[email protected]> wrote:
>
>> On Fri, 16 Aug 2019, Guenter Roeck wrote:
>>>>
>>> No luck. The problem is still seen with this patch applied on top of
>>> the mainline kernel (commit a69e90512d9def6).
>>
>> Yeah, was a bit too naive ....
>>
>> We actually need to do the full cpuhotplug dance for a regular
>> reboot. In the panic case, there is nothing we can do about it. I'll
>> have a look tomorrow.
>>
What is supposed to prevent the following in mainline:
CPU 0                     CPU 1                      CPU 2

native_stop_other_cpus                               <INTERRUPT>
    send_IPI_allbutself                              ...
                          <INTERRUPT>
                          sysvec_reboot
                              stop_this_cpu
                                  set_cpu_online(false)
                                                     native_smp_send_reschedule(1)
                                                         if (cpu_is_offline(1)) ...
Jan
--
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux