2017-09-07 15:11:06

by Peter Zijlstra

[permalink] [raw]
Subject: [PATCH 3/4] sched: WARN when migrating to an offline CPU

Migrating tasks to offline CPUs is a pretty big fail, warn about it.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
kernel/sched/core.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1173,6 +1173,10 @@ void set_task_cpu(struct task_struct *p,
WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
lockdep_is_held(&task_rq(p)->lock)));
#endif
+ /*
+ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
+ */
+ WARN_ON_ONCE(!cpu_online(new_cpu));
#endif

trace_sched_migrate_task(p, new_cpu);



Subject: [tip:sched/urgent] sched/core: WARN() when migrating to an offline CPU

Commit-ID: 4ff9083b8a9a80bdf4ebbbec22cda4cbfb60f7aa
Gitweb: http://git.kernel.org/tip/4ff9083b8a9a80bdf4ebbbec22cda4cbfb60f7aa
Author: Peter Zijlstra <[email protected]>
AuthorDate: Thu, 7 Sep 2017 17:03:52 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Tue, 12 Sep 2017 17:41:04 +0200

sched/core: WARN() when migrating to an offline CPU

Migrating tasks to offline CPUs is a pretty big fail, warn about it.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/core.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 136a76d..18a6966 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1173,6 +1173,10 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
lockdep_is_held(&task_rq(p)->lock)));
#endif
+ /*
+ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
+ */
+ WARN_ON_ONCE(!cpu_online(new_cpu));
#endif

trace_sched_migrate_task(p, new_cpu);

2017-09-28 09:14:38

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 3/4] sched: WARN when migrating to an offline CPU

On Thu, Sep 7, 2017 at 8:03 AM, Peter Zijlstra <[email protected]> wrote:
> Migrating tasks to offline CPUs is a pretty big fail, warn about it.

Hey Peter,

This seems to get hit on the latest -next:

[2035565360.446794] Unregister pv shared memory for cpu 2
[2035565360.467930] numa_remove_cpu cpu 2 node 2: mask now 6
[2035565360.471431] ------------[ cut here ]------------
[2035565360.472548] WARNING: CPU: 2 PID: 24 at
kernel/sched/core.c:1178 set_task_cpu (kernel/sched/core.c:1157)
[2035565360.473840] Modules linked in:
[2035565360.474632] CPU: 2 PID: 24 Comm: migration/2 Not tainted
4.14.0-rc2-next-20170927+ #252
[2035565360.476128] task: ffff8801ab7dc040 task.stack: ffff8801ab7e0000
[2035565360.477108] RIP: 0010:set_task_cpu (??:?)
[2035565360.477850] RSP: 0018:ffff8801ab807b18 EFLAGS: 00010096
[2035565360.478734] RAX: 0000000000000002 RBX: ffff8801a8d94040 RCX:
1ffff1003577cb3c
[2035565360.479872] RDX: 0000000000000004 RSI: 0000000000000002 RDI:
ffffffff9177d940
[2035565360.480998] RBP: ffff8801ab807b58 R08: 000000000000b6b2 R09:
ffff8801abbe5a2c
[2035565360.482356] R10: 0000000000000001 R11: 0000000000000001 R12:
ffff8801a8d94048
[2035565360.483490] R13: ffff8801ab807c88 R14: 0000000000000002 R15:
ffff8801a8d940a4
[2035565360.484617] FS: 0000000000000000(0000)
GS:ffff8801ab800000(0000) knlGS:0000000000000000
[2035565360.485881] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[2035565360.490482] CR2: 00000000012dfae4 CR3: 00000001a30c5002 CR4:
00000000000606a0
[2035565360.491836] Call Trace:
[2035565360.492301] <IRQ>
[2035565360.492710] detach_task.isra.80 (kernel/sched/fair.c:6816)
[2035565360.493430] load_balance (./include/linux/list.h:78
kernel/sched/fair.c:6902 kernel/sched/fair.c:8201)
[2035565360.494094] ? find_busiest_group (kernel/sched/fair.c:8133)
[2035565360.494861] ? rebalance_domains (kernel/sched/fair.c:8785)
[2035565360.495571] rebalance_domains (kernel/sched/fair.c:8829)
[2035565360.496292] ? find_next_bit (lib/find_bit.c:65)
[2035565360.496961] ? pick_next_task_fair (kernel/sched/fair.c:8779)
[2035565360.497753] ? check_preemption_disabled (lib/smp_processor_id.c:52)
[2035565360.498574] run_rebalance_domains (kernel/sched/fair.c:9052)
[2035565360.499318] ? check_preemption_disabled (lib/smp_processor_id.c:52)
[2035565360.500388] __do_softirq (kernel/softirq.c:284
./include/linux/jump_label.h:141 ./include/trace/events/irq.h:141
kernel/softirq.c:285)
[2035565360.501039] ? hrtimer_interrupt (kernel/time/hrtimer.c:1370)
[2035565360.502014] irq_exit (kernel/softirq.c:364 kernel/softirq.c:405)
[2035565360.502598] smp_apic_timer_interrupt
(./arch/x86/include/asm/irq_regs.h:26
arch/x86/kernel/apic/apic.c:1043)
[2035565360.503605] apic_timer_interrupt (arch/x86/entry/entry_64.S:770)
[2035565360.504313] </IRQ>
[2035565360.504724] RIP: 0010:multi_cpu_stop (??:?)
[2035565360.505503] RSP: 0018:ffff8801ab7e7d88 EFLAGS: 00000292
ORIG_RAX: ffffffffffffff11
[2035565360.506721] RAX: 0000000000000003 RBX: 0000000000000004 RCX:
000000000000000a
[2035565360.507862] RDX: 0000000000000004 RSI: 0000000000000040 RDI:
0000000000000292
[2035565360.508997] RBP: ffff8801ab7e7de0 R08: ffff8801ab7dc040 R09:
0000000000000000
[2035565360.510566] R10: 0000000000000001 R11: 1ffff1003577cb55 R12:
ffff8801aae2f890
[2035565360.511701] R13: ffff8801aae2f8b4 R14: dffffc0000000000 R15:
0000000000000003
[2035565360.512843] ? cpu_stop_queue_work (kernel/stop_machine.c:176)
[2035565360.513591] cpu_stopper_thread (kernel/stop_machine.c:480)
[2035565360.514316] ? cpu_stop_create (kernel/stop_machine.c:458)
[2035565360.515218] smpboot_thread_fn (kernel/smpboot.c:164)
[2035565360.515918] ? sort_range (kernel/smpboot.c:107)
[2035565360.516528] ? schedule (./arch/x86/include/asm/bitops.h:324
(discriminator 1) ./include/linux/thread_info.h:79 (discriminator 1)
./include/linux/sched.h:1605 (discriminator 1)
kernel/sched/core.c:3435 (discriminator 1))
[2035565360.517119] ? __kthread_parkme (kernel/kthread.c:188)
[2035565360.517835] kthread (kernel/kthread.c:242)
[2035565360.518413] ? sort_range (kernel/smpboot.c:107)
[2035565360.519008] ? kthread_create_on_node (kernel/kthread.c:198)
[2035565360.519792] ret_from_fork (arch/x86/entry/entry_64.S:437)
[2035565360.520407] Code: 09 84 d2 74 05 e8 94 38 4b 00 f7 83 84 00 00
00 fd ff ff ff 0f 84 a0 f9 ff ff 0f ff e9 99 f9 ff ff e8 68 18 ff ff
e9 c7 fd ff ff <0f> ff e9 ea f9 ff ff e8 57 18 ff ff e9 f5 f9 ff ff e8
0d 84 0f
All code
========
0: 09 84 d2 74 05 e8 94 or %eax,-0x6b17fa8c(%rdx,%rdx,8)
7: 38 4b 00 cmp %cl,0x0(%rbx)
a: f7 83 84 00 00 00 fd testl $0xfffffffd,0x84(%rbx)
11: ff ff ff
14: 0f 84 a0 f9 ff ff je 0xfffffffffffff9ba
1a: 0f ff (bad)
1c: e9 99 f9 ff ff jmpq 0xfffffffffffff9ba
21: e8 68 18 ff ff callq 0xffffffffffff188e
26: e9 c7 fd ff ff jmpq 0xfffffffffffffdf2
2b:* 0f ff (bad) <-- trapping instruction
2d: e9 ea f9 ff ff jmpq 0xfffffffffffffa1c
32: e8 57 18 ff ff callq 0xffffffffffff188e
37: e9 f5 f9 ff ff jmpq 0xfffffffffffffa31
3c: e8 0d 84 0f 00 callq 0xf844e

Code starting with the faulting instruction
===========================================
0: 0f ff (bad)
2: e9 ea f9 ff ff jmpq 0xfffffffffffff9f1
7: e8 57 18 ff ff callq 0xffffffffffff1863
c: e9 f5 f9 ff ff jmpq 0xfffffffffffffa06
11: e8 0d 84 0f 00 callq 0xf8423
[2035565360.523585] ---[ end trace 812b7ff74114db6c ]---
[2035565360.524744] smpboot: CPU 2 is now offline


Thanks,
Sasha

2017-09-28 10:35:51

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 3/4] sched: WARN when migrating to an offline CPU

On Thu, Sep 28, 2017 at 02:14:15AM -0700, Sasha Levin wrote:
> On Thu, Sep 7, 2017 at 8:03 AM, Peter Zijlstra <[email protected]> wrote:
> > Migrating tasks to offline CPUs is a pretty big fail, warn about it.
>
> Hey Peter,
>
> This seems to get hit on the latest -next:
>
> [2035565360.446794] Unregister pv shared memory for cpu 2
> [2035565360.467930] numa_remove_cpu cpu 2 node 2: mask now 6
> [2035565360.471431] ------------[ cut here ]------------
> [2035565360.472548] WARNING: CPU: 2 PID: 24 at kernel/sched/core.c:1178 set_task_cpu (kernel/sched/core.c:1157)
> [2035565360.473840] Modules linked in:
> [2035565360.474632] CPU: 2 PID: 24 Comm: migration/2 Not tainted 4.14.0-rc2-next-20170927+ #252

Urgh, weird. That really shouldn't happen. Can you easily reproduce?

> [2035565360.491836] Call Trace:
> [2035565360.492301] <IRQ>
> [2035565360.492710] detach_task.isra.80 (kernel/sched/fair.c:6816)
> [2035565360.493430] load_balance (./include/linux/list.h:78 kernel/sched/fair.c:6902 kernel/sched/fair.c:8201)
> [2035565360.494094] ? find_busiest_group (kernel/sched/fair.c:8133)
> [2035565360.494861] ? rebalance_domains (kernel/sched/fair.c:8785)
> [2035565360.495571] rebalance_domains (kernel/sched/fair.c:8829)
> [2035565360.496292] ? find_next_bit (lib/find_bit.c:65)
> [2035565360.496961] ? pick_next_task_fair (kernel/sched/fair.c:8779)
> [2035565360.497753] ? check_preemption_disabled (lib/smp_processor_id.c:52)
> [2035565360.498574] run_rebalance_domains (kernel/sched/fair.c:9052)
> [2035565360.499318] ? check_preemption_disabled (lib/smp_processor_id.c:52)
> [2035565360.500388] __do_softirq (kernel/softirq.c:284 ./include/linux/jump_label.h:141 ./include/trace/events/irq.h:141 kernel/softirq.c:285)
> [2035565360.501039] ? hrtimer_interrupt (kernel/time/hrtimer.c:1370)
> [2035565360.502014] irq_exit (kernel/softirq.c:364 kernel/softirq.c:405)
> [2035565360.502598] smp_apic_timer_interrupt (./arch/x86/include/asm/irq_regs.h:26 arch/x86/kernel/apic/apic.c:1043)
> [2035565360.503605] apic_timer_interrupt (arch/x86/entry/entry_64.S:770)
> [2035565360.504313] </IRQ>

2017-09-28 11:03:10

by Sasha Levin

[permalink] [raw]
Subject: Re: [PATCH 3/4] sched: WARN when migrating to an offline CPU

On Thu, Sep 28, 2017 at 12:35:41PM +0200, Peter Zijlstra wrote:
>On Thu, Sep 28, 2017 at 02:14:15AM -0700, Sasha Levin wrote:
>> On Thu, Sep 7, 2017 at 8:03 AM, Peter Zijlstra <[email protected]> wrote:
>> > Migrating tasks to offline CPUs is a pretty big fail, warn about it.
>>
>> Hey Peter,
>>
>> This seems to get hit on the latest -next:
>>
>> [2035565360.446794] Unregister pv shared memory for cpu 2
>> [2035565360.467930] numa_remove_cpu cpu 2 node 2: mask now 6
>> [2035565360.471431] ------------[ cut here ]------------
>> [2035565360.472548] WARNING: CPU: 2 PID: 24 at kernel/sched/core.c:1178 set_task_cpu (kernel/sched/core.c:1157)
>> [2035565360.473840] Modules linked in:
>> [2035565360.474632] CPU: 2 PID: 24 Comm: migration/2 Not tainted 4.14.0-rc2-next-20170927+ #252
>
>Urgh, weird. That really shouldn't happen. Can you easily reproduce?

Looks like yes. Seems like it's enough to stress CPU hotplug + trinity.

If you have patches you want me try send them my way.

--

Thanks,
Sasha

2017-09-28 11:42:56

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 3/4] sched: WARN when migrating to an offline CPU

On Thu, Sep 28, 2017 at 11:03:10AM +0000, Levin, Alexander (Sasha Levin) wrote:
> On Thu, Sep 28, 2017 at 12:35:41PM +0200, Peter Zijlstra wrote:
> >On Thu, Sep 28, 2017 at 02:14:15AM -0700, Sasha Levin wrote:

> >> [2035565360.446794] Unregister pv shared memory for cpu 2
> >> [2035565360.467930] numa_remove_cpu cpu 2 node 2: mask now 6
> >> [2035565360.471431] ------------[ cut here ]------------
> >> [2035565360.472548] WARNING: CPU: 2 PID: 24 at kernel/sched/core.c:1178 set_task_cpu (kernel/sched/core.c:1157)
> >> [2035565360.473840] Modules linked in:
> >> [2035565360.474632] CPU: 2 PID: 24 Comm: migration/2 Not tainted 4.14.0-rc2-next-20170927+ #252
> >
> >Urgh, weird. That really shouldn't happen. Can you easily reproduce?
>
> Looks like yes. Seems like it's enough to stress CPU hotplug + trinity.

OK, I'll see if I can reproduce building kernels and hotplug stress.
Otherwise I'll try and cook up some debug patches for you.

2017-09-29 11:11:32

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 3/4] sched: WARN when migrating to an offline CPU

On Thu, Sep 28, 2017 at 01:42:49PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 28, 2017 at 11:03:10AM +0000, Levin, Alexander (Sasha Levin) wrote:
> > On Thu, Sep 28, 2017 at 12:35:41PM +0200, Peter Zijlstra wrote:
> > >On Thu, Sep 28, 2017 at 02:14:15AM -0700, Sasha Levin wrote:
>
> > >> [2035565360.446794] Unregister pv shared memory for cpu 2
> > >> [2035565360.467930] numa_remove_cpu cpu 2 node 2: mask now 6
> > >> [2035565360.471431] ------------[ cut here ]------------
> > >> [2035565360.472548] WARNING: CPU: 2 PID: 24 at > kernel/sched/core.c:1178 set_task_cpu (kernel/sched/core.c:1157)
> > >> [2035565360.473840] Modules linked in:
> > >> [2035565360.474632] CPU: 2 PID: 24 Comm: migration/2 Not tainted > 4.14.0-rc2-next-20170927+ #252
> > >
> > >Urgh, weird. That really shouldn't happen. Can you easily reproduce?
> >
> > Looks like yes. Seems like it's enough to stress CPU hotplug + trinity.
>
> OK, I'll see if I can reproduce building kernels and hotplug stress.
> Otherwise I'll try and cook up some debug patches for you.

I can't seem to trigger :-(

Can you please run with the below patch and:

# echo 1 > /proc/sys/kernel/traceoff_on_warning

---
kernel/sched/core.c | 3 +++
kernel/sched/fair.c | 10 ++++++++++
2 files changed, 13 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 18a6966567da..c613f7756981 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5592,6 +5592,7 @@ int sched_cpu_activate(unsigned int cpu)
struct rq_flags rf;

set_cpu_active(cpu, true);
+ trace_printk("active: %d mask: %*pbl\n", cpu, cpumask_pr_args(cpu_active_mask));

if (sched_smp_initialized) {
sched_domains_numa_masks_set(cpu);
@@ -5624,6 +5625,7 @@ int sched_cpu_deactivate(unsigned int cpu)
int ret;

set_cpu_active(cpu, false);
+ trace_printk("not-active: %d mask: %*pbl\n", cpu, cpumask_pr_args(cpu_active_mask));
/*
* We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
* users of this state to go away such that all new such users will
@@ -5632,6 +5634,7 @@ int sched_cpu_deactivate(unsigned int cpu)
* Do sync before park smpboot threads to take care the rcu boost case.
*/
synchronize_rcu_mult(call_rcu, call_rcu_sched);
+ trace_printk("rcu-sync: %d\n", cpu);

if (!sched_smp_initialized)
return 0;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 70ba32e08a23..cb8f43a59f33 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8150,8 +8150,11 @@ static int load_balance(int this_cpu, struct rq *this_rq,
.tasks = LIST_HEAD_INIT(env.tasks),
};

+
cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);

+ trace_printk("dst_cpu: %d cpus: %*pbl\n", this_cpu, cpumask_pr_args(cpus));
+
schedstat_inc(sd->lb_count[idle]);

redo:
@@ -8248,6 +8251,9 @@ static int load_balance(int this_cpu, struct rq *this_rq,

env.dst_rq = cpu_rq(env.new_dst_cpu);
env.dst_cpu = env.new_dst_cpu;
+
+ trace_printk("dst_cpu: %d\n", env.dst_cpu);
+
env.flags &= ~LBF_DST_PINNED;
env.loop = 0;
env.loop_break = sched_nr_migrate_break;
@@ -8465,6 +8471,7 @@ static int idle_balance(struct rq *this_rq, struct rq_flags *rf)

update_blocked_averages(this_cpu);
rcu_read_lock();
+ trace_printk("rcu-read-lock: %d\n", this_cpu);
for_each_domain(this_cpu, sd) {
int continue_balancing = 1;
u64 t0, domain_cost;
@@ -8500,6 +8507,7 @@ static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
if (pulled_task || this_rq->nr_running > 0)
break;
}
+ trace_printk("rcu-read-unlock: %d\n", this_cpu);
rcu_read_unlock();

raw_spin_lock(&this_rq->lock);
@@ -8790,6 +8798,7 @@ static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
update_blocked_averages(cpu);

rcu_read_lock();
+ trace_printk("rcu-read-lock: %d\n", cpu);
for_each_domain(cpu, sd) {
/*
* Decay the newidle max times here because this is a regular
@@ -8853,6 +8862,7 @@ static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
rq->max_idle_balance_cost =
max((u64)sysctl_sched_migration_cost, max_cost);
}
+ trace_printk("rcu-read-unlock: %d\n", cpu);
rcu_read_unlock();

/*