2012-10-17 03:49:26

by Dave Jones

Subject: lots of suspicious RCU traces

I just triggered this while fuzz testing. All traces happened
in quick succession.

Dave

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
2 locks on stack by trinity-main/9791:
#0: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8108afd3>] ptrace_notify+0x53/0x90
#1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81097e75>] __task_pid_nr_ns+0x5/0x1a0

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff81097fd5>] __task_pid_nr_ns+0x165/0x1a0
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff8108924f>] ptrace_do_notify+0x5f/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
2 locks on stack by trinity-main/9791:
#0: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8108afd3>] ptrace_notify+0x53/0x90
#1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81097e75>] __task_pid_nr_ns+0x5/0x1a0

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff81097fa1>] __task_pid_nr_ns+0x131/0x1a0
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff8108924f>] ptrace_do_notify+0x5f/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by migration/2/24:
#0: held: (&rq->lock){-.-.-.}, instance: ffff8801481d4a98, at: [<ffffffff810ba558>] active_load_balance_cpu_stop+0x328/0x490
#1: held: (&rq->lock/1){..-.-.}, instance: ffff8801483d4a98, at: [<ffffffff810ba569>] active_load_balance_cpu_stop+0x339/0x490
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ba324>] active_load_balance_cpu_stop+0xf4/0x490

stack backtrace:
Pid: 24, comm: migration/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810ba5dc>] active_load_balance_cpu_stop+0x3ac/0x490
[<ffffffff810ba324>] ? active_load_balance_cpu_stop+0xf4/0x490
[<ffffffff81107923>] ? cpu_stopper_thread+0xb3/0x1d0
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff816c1030>] ? _raw_spin_unlock_irq+0x30/0x80
[<ffffffff810b0de1>] ? get_parent_ip+0x11/0x50
[<ffffffff810b0de1>] ? get_parent_ip+0x11/0x50
[<ffffffff810ba230>] ? select_task_rq_fair+0xad0/0xad0
[<ffffffff8110794b>] cpu_stopper_thread+0xdb/0x1d0
[<ffffffff816c5a59>] ? sub_preempt_count+0x79/0xd0
[<ffffffff816bef69>] ? __schedule+0x499/0xa40
[<ffffffff81107870>] ? cpu_stop_queue_work+0x80/0x80
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/sched/fair.c:4538 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by migration/2/24:
#0: held: (&rq->lock){-.-.-.}, instance: ffff8801481d4a98, at: [<ffffffff810ba558>] active_load_balance_cpu_stop+0x328/0x490
#1: held: (&rq->lock/1){..-.-.}, instance: ffff8801483d4a98, at: [<ffffffff810ba569>] active_load_balance_cpu_stop+0x339/0x490
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ba324>] active_load_balance_cpu_stop+0xf4/0x490

stack backtrace:
Pid: 24, comm: migration/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810ba665>] active_load_balance_cpu_stop+0x435/0x490
[<ffffffff810ba324>] ? active_load_balance_cpu_stop+0xf4/0x490
[<ffffffff81107923>] ? cpu_stopper_thread+0xb3/0x1d0
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff816c1030>] ? _raw_spin_unlock_irq+0x30/0x80
[<ffffffff810b0de1>] ? get_parent_ip+0x11/0x50
[<ffffffff810b0de1>] ? get_parent_ip+0x11/0x50
[<ffffffff810ba230>] ? select_task_rq_fair+0xad0/0xad0
[<ffffffff8110794b>] cpu_stopper_thread+0xdb/0x1d0
[<ffffffff816c5a59>] ? sub_preempt_count+0x79/0xd0
[<ffffffff816bef69>] ? __schedule+0x499/0xa40
[<ffffffff81107870>] ? cpu_stop_queue_work+0x80/0x80
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
4 locks on stack by migration/2/24:
#0: held: (&rq->lock){-.-.-.}, instance: ffff8801481d4a98, at: [<ffffffff810ba558>] active_load_balance_cpu_stop+0x328/0x490
#1: held: (&rq->lock/1){..-.-.}, instance: ffff8801483d4a98, at: [<ffffffff810ba569>] active_load_balance_cpu_stop+0x339/0x490
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ba324>] active_load_balance_cpu_stop+0xf4/0x490
#3: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b6a53>] cpuacct_charge+0x13/0x230

stack backtrace:
Pid: 24, comm: migration/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810b6c0c>] cpuacct_charge+0x1cc/0x230
[<ffffffff810b6a53>] ? cpuacct_charge+0x13/0x230
[<ffffffff810bafa4>] update_curr+0x104/0x270
[<ffffffff810bc244>] enqueue_entity+0x44/0x720
[<ffffffff810bc962>] enqueue_task_fair+0x42/0xf0
[<ffffffff810af326>] enqueue_task+0x66/0x80
[<ffffffff810b01c3>] activate_task+0x23/0x30
[<ffffffff810b8aec>] move_task+0x3c/0x50
[<ffffffff810ba4b2>] active_load_balance_cpu_stop+0x282/0x490
[<ffffffff810ba324>] ? active_load_balance_cpu_stop+0xf4/0x490
[<ffffffff81107923>] ? cpu_stopper_thread+0xb3/0x1d0
[<ffffffff810ba230>] ? select_task_rq_fair+0xad0/0xad0
[<ffffffff8110794b>] cpu_stopper_thread+0xdb/0x1d0
[<ffffffff816c5a59>] ? sub_preempt_count+0x79/0xd0
[<ffffffff816bef69>] ? __schedule+0x499/0xa40
[<ffffffff81107870>] ? cpu_stop_queue_work+0x80/0x80
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/cgroup.h:566 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
4 locks on stack by migration/2/24:
#0: held: (&rq->lock){-.-.-.}, instance: ffff8801481d4a98, at: [<ffffffff810ba558>] active_load_balance_cpu_stop+0x328/0x490
#1: held: (&rq->lock/1){..-.-.}, instance: ffff8801483d4a98, at: [<ffffffff810ba569>] active_load_balance_cpu_stop+0x339/0x490
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ba324>] active_load_balance_cpu_stop+0xf4/0x490
#3: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b6a53>] cpuacct_charge+0x13/0x230

stack backtrace:
Pid: 24, comm: migration/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810b6bd5>] cpuacct_charge+0x195/0x230
[<ffffffff810b6a53>] ? cpuacct_charge+0x13/0x230
[<ffffffff810bafa4>] update_curr+0x104/0x270
[<ffffffff810bc244>] enqueue_entity+0x44/0x720
[<ffffffff810bc962>] enqueue_task_fair+0x42/0xf0
[<ffffffff810af326>] enqueue_task+0x66/0x80
[<ffffffff810b01c3>] activate_task+0x23/0x30
[<ffffffff810b8aec>] move_task+0x3c/0x50
[<ffffffff810ba4b2>] active_load_balance_cpu_stop+0x282/0x490
[<ffffffff810ba324>] ? active_load_balance_cpu_stop+0xf4/0x490
[<ffffffff81107923>] ? cpu_stopper_thread+0xb3/0x1d0
[<ffffffff810ba230>] ? select_task_rq_fair+0xad0/0xad0
[<ffffffff8110794b>] cpu_stopper_thread+0xdb/0x1d0
[<ffffffff816c5a59>] ? sub_preempt_count+0x79/0xd0
[<ffffffff816bef69>] ? __schedule+0x499/0xa40
[<ffffffff81107870>] ? cpu_stop_queue_work+0x80/0x80
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
4 locks on stack by migration/2/24:
#0: held: (&rq->lock){-.-.-.}, instance: ffff8801481d4a98, at: [<ffffffff810ba558>] active_load_balance_cpu_stop+0x328/0x490
#1: held: (&rq->lock/1){..-.-.}, instance: ffff8801483d4a98, at: [<ffffffff810ba569>] active_load_balance_cpu_stop+0x339/0x490
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ba324>] active_load_balance_cpu_stop+0xf4/0x490
#3: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b6a53>] cpuacct_charge+0x13/0x230

stack backtrace:
Pid: 24, comm: migration/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810b6b70>] cpuacct_charge+0x130/0x230
[<ffffffff810b6a53>] ? cpuacct_charge+0x13/0x230
[<ffffffff810bafa4>] update_curr+0x104/0x270
[<ffffffff810bc244>] enqueue_entity+0x44/0x720
[<ffffffff810bc962>] enqueue_task_fair+0x42/0xf0
[<ffffffff810af326>] enqueue_task+0x66/0x80
[<ffffffff810b01c3>] activate_task+0x23/0x30
[<ffffffff810b8aec>] move_task+0x3c/0x50
[<ffffffff810ba4b2>] active_load_balance_cpu_stop+0x282/0x490
[<ffffffff810ba324>] ? active_load_balance_cpu_stop+0xf4/0x490
[<ffffffff81107923>] ? cpu_stopper_thread+0xb3/0x1d0
[<ffffffff810ba230>] ? select_task_rq_fair+0xad0/0xad0
[<ffffffff8110794b>] cpu_stopper_thread+0xdb/0x1d0
[<ffffffff816c5a59>] ? sub_preempt_count+0x79/0xd0
[<ffffffff816bef69>] ? __schedule+0x499/0xa40
[<ffffffff81107870>] ? cpu_stop_queue_work+0x80/0x80
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by migration/2/24:
#0: held: (&rq->lock){-.-.-.}, instance: ffff8801481d4a98, at: [<ffffffff810ba558>] active_load_balance_cpu_stop+0x328/0x490
#1: held: (&rq->lock/1){..-.-.}, instance: ffff8801483d4a98, at: [<ffffffff810ba569>] active_load_balance_cpu_stop+0x339/0x490
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ba324>] active_load_balance_cpu_stop+0xf4/0x490

stack backtrace:
Pid: 24, comm: migration/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810ba615>] active_load_balance_cpu_stop+0x3e5/0x490
[<ffffffff810ba324>] ? active_load_balance_cpu_stop+0xf4/0x490
[<ffffffff81107923>] ? cpu_stopper_thread+0xb3/0x1d0
[<ffffffff810ba230>] ? select_task_rq_fair+0xad0/0xad0
[<ffffffff8110794b>] cpu_stopper_thread+0xdb/0x1d0
[<ffffffff816c5a59>] ? sub_preempt_count+0x79/0xd0
[<ffffffff816bef69>] ? __schedule+0x499/0xa40
[<ffffffff81107870>] ? cpu_stop_queue_work+0x80/0x80
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160


===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/rcutree.c:360 Illegal idle entry in RCU read-side critical section.!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
1 lock on stack by rcuc/2/22:
#0: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff816c5855>] __atomic_notifier_call_chain+0x5/0x140

stack backtrace:
Pid: 22, comm: rcuc/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
<IRQ> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff8112a2d3>] rcu_eqs_enter_common.isra.41+0x1b3/0x1f0
[<ffffffff8112d1dd>] rcu_irq_exit+0xbd/0x100
[<ffffffff8107ba2b>] irq_exit+0x8b/0xe0
[<ffffffff816cbc0b>] smp_apic_timer_interrupt+0x6b/0x98
[<ffffffff816cab32>] apic_timer_interrupt+0x72/0x80
<EOI> [<ffffffff810e4472>] ? mark_held_locks+0xb2/0x130
[<ffffffff810736d5>] ? vprintk_emit+0x1b5/0x600
[<ffffffff816b3b1b>] printk+0x61/0x63
[<ffffffff810dfa6f>] lockdep_rcu_suspicious+0x2f/0x130
[<ffffffff816c597c>] __atomic_notifier_call_chain+0x12c/0x140
[<ffffffff816c5855>] ? __atomic_notifier_call_chain+0x5/0x140
[<ffffffff816c59a6>] atomic_notifier_call_chain+0x16/0x20
[<ffffffff810d34da>] profile_handoff_task+0x1a/0x30
[<ffffffff8106dc18>] __put_task_struct+0x88/0x140
[<ffffffff81075c28>] delayed_put_task_struct+0x48/0x170
[<ffffffff8112acda>] rcu_do_batch.isra.22+0x15a/0x440
[<ffffffff8112c963>] rcu_cpu_kthread+0x143/0x670
[<ffffffff810a8464>] smpboot_thread_fn+0x1d4/0x300
[<ffffffff810a8290>] ? __smpboot_create_thread.part.2+0xc0/0xc0
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff816be985>] ? wait_for_common+0x135/0x180
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
1 lock on stack by rcuc/2/22:
#0: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff816c5855>] __atomic_notifier_call_chain+0x5/0x140

stack backtrace:
Pid: 22, comm: rcuc/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff816c597c>] __atomic_notifier_call_chain+0x12c/0x140
[<ffffffff816c5855>] ? __atomic_notifier_call_chain+0x5/0x140
[<ffffffff816c59a6>] atomic_notifier_call_chain+0x16/0x20
[<ffffffff810d34da>] profile_handoff_task+0x1a/0x30
[<ffffffff8106dc18>] __put_task_struct+0x88/0x140
[<ffffffff81075c28>] delayed_put_task_struct+0x48/0x170
[<ffffffff8112acda>] rcu_do_batch.isra.22+0x15a/0x440
[<ffffffff8112c963>] rcu_cpu_kthread+0x143/0x670
[<ffffffff810a8464>] smpboot_thread_fn+0x1d4/0x300
[<ffffffff810a8290>] ? __smpboot_create_thread.part.2+0xc0/0xc0
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff816be985>] ? wait_for_common+0x135/0x180
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
1 lock on stack by rcuc/2/22:
#0: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff816c5855>] __atomic_notifier_call_chain+0x5/0x140

stack backtrace:
Pid: 22, comm: rcuc/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff816c5948>] __atomic_notifier_call_chain+0xf8/0x140
[<ffffffff816c5855>] ? __atomic_notifier_call_chain+0x5/0x140
[<ffffffff816c59a6>] atomic_notifier_call_chain+0x16/0x20
[<ffffffff810d34da>] profile_handoff_task+0x1a/0x30
[<ffffffff8106dc18>] __put_task_struct+0x88/0x140
[<ffffffff81075c28>] delayed_put_task_struct+0x48/0x170
[<ffffffff8112acda>] rcu_do_batch.isra.22+0x15a/0x440
[<ffffffff8112c963>] rcu_cpu_kthread+0x143/0x670
[<ffffffff810a8464>] smpboot_thread_fn+0x1d4/0x300
[<ffffffff810a8290>] ? __smpboot_create_thread.part.2+0xc0/0xc0
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff816be985>] ? wait_for_common+0x135/0x180
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/sched/fair.c:4723 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
1 lock on stack by rcuc/2/22:
#0: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810bea25>] rebalance_domains+0x5/0x310

stack backtrace:
Pid: 22, comm: rcuc/2 Not tainted 3.7.0-rc1+ #31
Call Trace:
<IRQ> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810bed05>] rebalance_domains+0x2e5/0x310
[<ffffffff810bea25>] ? rebalance_domains+0x5/0x310
[<ffffffff810e4576>] ? trace_hardirqs_on_caller+0x86/0x1e0
[<ffffffff810bed7d>] run_rebalance_domains+0x4d/0x1c0
[<ffffffffa030d310>] ? dn_neigh_elist+0x40/0x160 [decnet]
[<ffffffff8107b520>] __do_softirq+0xf0/0x440
[<ffffffff816cb27c>] call_softirq+0x1c/0x30
<EOI> [<ffffffff8101f175>] do_softirq+0xd5/0x110
[<ffffffff8112c968>] ? rcu_cpu_kthread+0x148/0x670
[<ffffffff8107b3eb>] local_bh_enable+0x11b/0x160
[<ffffffff8112c968>] rcu_cpu_kthread+0x148/0x670
[<ffffffff810a8464>] smpboot_thread_fn+0x1d4/0x300
[<ffffffff810a8290>] ? __smpboot_create_thread.part.2+0xc0/0xc0
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff816be985>] ? wait_for_common+0x135/0x180
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/sched/fair.c:2715 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
5 locks on stack by kworker/2:0/9822:
#0: blocked: (events){.+.+.+}, instance: ffff880147814fd8, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
#1: blocked: ((&barr->work)){+.+...}, instance: ffff880083ad5d38, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
#2: held: (&x->wait#8){......}, instance: ffff88005a183e90, at: [<ffffffff810ab2b8>] complete+0x28/0x60
#3: held: (&p->pi_lock){-.-.-.}, instance: ffff88003606cf78, at: [<ffffffff810b37f1>] try_to_wake_up+0x31/0x320
#4: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b97c1>] select_task_rq_fair+0x61/0xad0

stack backtrace:
Pid: 9822, comm: kworker/2:0 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810b9b10>] select_task_rq_fair+0x3b0/0xad0
[<ffffffff810b97c1>] ? select_task_rq_fair+0x61/0xad0
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff810b37f1>] ? try_to_wake_up+0x31/0x320
[<ffffffff810b387e>] try_to_wake_up+0xbe/0x320
[<ffffffff810b3af2>] default_wake_function+0x12/0x20
[<ffffffff810a8b75>] __wake_up_common+0x55/0x90
[<ffffffff810ab2b8>] ? complete+0x28/0x60
[<ffffffff810ab2d7>] complete+0x47/0x60
[<ffffffff81092422>] wq_barrier_func+0x12/0x20
[<ffffffff81095a37>] process_one_work+0x207/0x780
[<ffffffff810959c7>] ? process_one_work+0x197/0x780
[<ffffffff81092410>] ? destroy_work_on_stack+0x20/0x20
[<ffffffff81095fdc>] process_scheduled_works+0x2c/0x40
[<ffffffff8109652a>] worker_thread+0x2fa/0x440
[<ffffffff81096230>] ? rescuer_thread+0x240/0x240
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/sched/core.c:1216 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
5 locks on stack by kworker/2:0/9822:
#0: blocked: (events){.+.+.+}, instance: ffff880147814fd8, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
#1: blocked: ((&barr->work)){+.+...}, instance: ffff880083ad5d38, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
#2: held: (&x->wait#8){......}, instance: ffff88005a183e90, at: [<ffffffff810ab2b8>] complete+0x28/0x60
#3: held: (&p->pi_lock){-.-.-.}, instance: ffff88003606cf78, at: [<ffffffff810b37f1>] try_to_wake_up+0x31/0x320
#4: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ae169>] ttwu_stat+0x49/0x280

stack backtrace:
Pid: 9822, comm: kworker/2:0 Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810ae21c>] ttwu_stat+0xfc/0x280
[<ffffffff810ae169>] ? ttwu_stat+0x49/0x280
[<ffffffff810b3950>] try_to_wake_up+0x190/0x320
[<ffffffff810b3af2>] default_wake_function+0x12/0x20
[<ffffffff810a8b75>] __wake_up_common+0x55/0x90
[<ffffffff810ab2b8>] ? complete+0x28/0x60
[<ffffffff810ab2d7>] complete+0x47/0x60
[<ffffffff81092422>] wq_barrier_func+0x12/0x20
[<ffffffff81095a37>] process_one_work+0x207/0x780
[<ffffffff810959c7>] ? process_one_work+0x197/0x780
[<ffffffff81092410>] ? destroy_work_on_stack+0x20/0x20
[<ffffffff81095fdc>] process_scheduled_works+0x2c/0x40
[<ffffffff8109652a>] worker_thread+0x2fa/0x440
[<ffffffff81096230>] ? rescuer_thread+0x240/0x240
[<ffffffff8109d0cd>] kthread+0xed/0x100
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
[<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
[<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
2 locks on stack by trinity-main/9791:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
#1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff81088de5>] do_notify_parent_cldstop+0x3c5/0x550
[<ffffffff81088a4e>] ? do_notify_parent_cldstop+0x2e/0x550
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8108906a>] ptrace_stop+0xfa/0x280
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/signal.c:1756 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by trinity-main/9791:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
#1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088aa3>] do_notify_parent_cldstop+0x83/0x550

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff81088ed8>] do_notify_parent_cldstop+0x4b8/0x550
[<ffffffff81088aa3>] ? do_notify_parent_cldstop+0x83/0x550
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8108906a>] ptrace_stop+0xfa/0x280
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by trinity-main/9791:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
#1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088aa3>] do_notify_parent_cldstop+0x83/0x550

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff81088d75>] do_notify_parent_cldstop+0x355/0x550
[<ffffffff81088aa3>] ? do_notify_parent_cldstop+0x83/0x550
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8108906a>] ptrace_stop+0xfa/0x280
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/signal.c:1756 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by trinity-main/9791:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
#1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088b37>] do_notify_parent_cldstop+0x117/0x550

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff81088f18>] do_notify_parent_cldstop+0x4f8/0x550
[<ffffffff81088b37>] ? do_notify_parent_cldstop+0x117/0x550
[<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
[<ffffffff8108906a>] ptrace_stop+0xfa/0x280
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/sched/fair.c:2660 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
5 locks on stack by trinity-main/9791:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
#1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff880107520820, at: [<ffffffff81088e16>] do_notify_parent_cldstop+0x3f6/0x550
#2: held: (&sig->wait_chldexit){......}, instance: ffff880035679528, at: [<ffffffff810ab327>] __wake_up_sync_key+0x37/0x80
#3: held: (&p->pi_lock){-.-.-.}, instance: ffff880035738658, at: [<ffffffff810b37f1>] try_to_wake_up+0x31/0x320
#4: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b97c1>] select_task_rq_fair+0x61/0xad0

stack backtrace:
Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810ba10c>] select_task_rq_fair+0x9ac/0xad0
[<ffffffff810b97c1>] ? select_task_rq_fair+0x61/0xad0
[<ffffffff810de049>] ? put_lock_stats.isra.25+0x29/0x40
[<ffffffff810b37f1>] ? try_to_wake_up+0x31/0x320
[<ffffffff810b387e>] try_to_wake_up+0xbe/0x320
[<ffffffff810b3af2>] default_wake_function+0x12/0x20
[<ffffffff8107586a>] child_wait_callback+0x5a/0x60
[<ffffffff810a8b75>] __wake_up_common+0x55/0x90
[<ffffffff810ab327>] ? __wake_up_sync_key+0x37/0x80
[<ffffffff810ab343>] __wake_up_sync_key+0x53/0x80
[<ffffffff81078b36>] __wake_up_parent+0x26/0x30
[<ffffffff81088e4e>] do_notify_parent_cldstop+0x42e/0x550
[<ffffffff81088b37>] ? do_notify_parent_cldstop+0x117/0x550
[<ffffffff8108906a>] ptrace_stop+0xfa/0x280
[<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
[<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
[<ffffffff8108afe4>] ptrace_notify+0x64/0x90
[<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
[<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
[<ffffffff816ca143>] ? int_very_careful+0x5/0xd
[<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
[<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by strace/1365:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81077402>] do_wait+0xb2/0x370
#1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8107652d>] wait_consider_task+0x11d/0xf40
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff8107655a>] wait_consider_task+0x14a/0xf40

stack backtrace:
Pid: 1365, comm: strace Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff8107696c>] wait_consider_task+0x55c/0xf40
[<ffffffff8107655a>] ? wait_consider_task+0x14a/0xf40
[<ffffffff810774d3>] do_wait+0x183/0x370
[<ffffffff81078d75>] sys_wait4+0x75/0xf0
[<ffffffff81075810>] ? task_stopped_code+0x60/0x60
[<ffffffff816ca088>] tracesys+0xe1/0xe6

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
kernel/exit.c:1350 suspicious rcu_dereference_check() usage!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by strace/1365:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81077402>] do_wait+0xb2/0x370
#1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8107652d>] wait_consider_task+0x11d/0xf40
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff8107655a>] wait_consider_task+0x14a/0xf40

stack backtrace:
Pid: 1365, comm: strace Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810765ec>] wait_consider_task+0x1dc/0xf40
[<ffffffff8107655a>] ? wait_consider_task+0x14a/0xf40
[<ffffffff810774d3>] do_wait+0x183/0x370
[<ffffffff81078d75>] sys_wait4+0x75/0xf0
[<ffffffff81075810>] ? task_stopped_code+0x60/0x60
[<ffffffff816ca088>] tracesys+0xe1/0xe6

===============================
[ INFO: suspicious RCU usage. ]
3.7.0-rc1+ #31 Not tainted
-------------------------------
include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
RCU used illegally from extended quiescent state!
3 locks on stack by strace/1365:
#0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81077402>] do_wait+0xb2/0x370
#1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8107652d>] wait_consider_task+0x11d/0xf40
#2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff8107655a>] wait_consider_task+0x14a/0xf40

stack backtrace:
Pid: 1365, comm: strace Not tainted 3.7.0-rc1+ #31
Call Trace:
[<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
[<ffffffff810769a5>] wait_consider_task+0x595/0xf40
[<ffffffff8107655a>] ? wait_consider_task+0x14a/0xf40
[<ffffffff810774d3>] do_wait+0x183/0x370
[<ffffffff81078d75>] sys_wait4+0x75/0xf0
[<ffffffff81075810>] ? task_stopped_code+0x60/0x60
[<ffffffff816ca088>] tracesys+0xe1/0xe6


2012-10-24 16:45:01

by Sergey Senozhatsky

Subject: Re: lots of suspicious RCU traces

On (10/16/12 23:49), Dave Jones wrote:
> I just triggered this while fuzz testing. All traces happened
> in quick succession.
>

I can easily reproduce this one:

[ 5714.696329] ===============================
[ 5714.696332] [ INFO: suspicious RCU usage. ]
[ 5714.696336] 3.7.0-rc2-dbg-01220-g698b6f8-dirty #1376 Not tainted
[ 5714.696338] -------------------------------
[ 5714.696341] include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!
[ 5714.696343]
other info that might help us debug this:

[ 5714.696347]
RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
[ 5714.696350] RCU used illegally from extended quiescent state!
[ 5714.696354] 2 locks held by coretemp-sensor/16214:
[ 5714.696356] #0: (&(&sighand->siglock)->rlock){-.-...}, at: [<ffffffff8105a543>] ptrace_notify+0x53/0x90
[ 5714.696373] #1: (rcu_read_lock){.+.+..}, at: [<ffffffff81066c55>] __task_pid_nr_ns+0x5/0x1b0
[ 5714.696385]
stack backtrace:
[ 5714.696389] Pid: 16214, comm: coretemp-sensor Not tainted 3.7.0-rc2-dbg-01220-g698b6f8-dirty #1376
[ 5714.696392] Call Trace:
[ 5714.696403] [<ffffffff810a8142>] lockdep_rcu_suspicious+0xe2/0x130
[ 5714.696408] [<ffffffff81066dc5>] __task_pid_nr_ns+0x175/0x1b0
[ 5714.696413] [<ffffffff81066c55>] ? __task_pid_nr_ns+0x5/0x1b0
[ 5714.696418] [<ffffffff8105867f>] ptrace_do_notify+0x5f/0xb0
[ 5714.696423] [<ffffffff8105a554>] ptrace_notify+0x64/0x90
[ 5714.696431] [<ffffffff8100fa7d>] syscall_trace_leave+0x8d/0x220
[ 5714.696439] [<ffffffff815dc7c1>] ? int_very_careful+0x5/0xd
[ 5714.696444] [<ffffffff810ac7c5>] ? trace_hardirqs_on_caller+0x105/0x190
[ 5714.696450] [<ffffffff815dc7fd>] int_check_syscall_exit_work+0x34/0x3d



A small question:

ptrace_notify() and the calls further down from it can end up in schedule(),
both indirectly and directly (the direct call is from ptrace_stop()). Should
rcu_user_enter(), in this case, be called before
tracehook_report_syscall_exit(regs, step) and the ptrace chain?
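
For the sake of discussion, here is a rough sketch of the ordering I have in
mind -- this is not the real arch/x86 syscall_trace_leave(), and the placement
of the rcu_user_*() calls is only my assumption; the helpers themselves do
exist in 3.7:

	/*
	 * Sketch only: keep RCU watching the CPU across the syscall-exit
	 * trace work, so that the rcu_read_lock() in __task_pid_nr_ns()
	 * and friends is legal again.
	 */
	void syscall_trace_leave(struct pt_regs *regs)
	{
		int step;

		/* assumption: leave the extended QS before doing anything RCU-ish */
		rcu_user_exit();

		step = test_thread_flag(TIF_SINGLESTEP);
		if (step || test_thread_flag(TIF_SYSCALL_TRACE))
			/* may take rcu_read_lock(), may schedule() via ptrace_stop() */
			tracehook_report_syscall_exit(regs, step);

		/* assumption: only now (re)enter the extended QS for user mode */
		rcu_user_enter();
	}

The alternative would be to not enter the extended QS at all until the whole
syscall-exit work loop is done; either way the ptrace/tracehook chain would no
longer run while RCU considers the CPU idle.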


-ss


> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 5 locks on stack by kworker/2:0/9822:
> #0: blocked: (events){.+.+.+}, instance: ffff880147814fd8, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
> #1: blocked: ((&barr->work)){+.+...}, instance: ffff880083ad5d38, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
> #2: held: (&x->wait#8){......}, instance: ffff88005a183e90, at: [<ffffffff810ab2b8>] complete+0x28/0x60
> #3: held: (&p->pi_lock){-.-.-.}, instance: ffff88003606cf78, at: [<ffffffff810b37f1>] try_to_wake_up+0x31/0x320
> #4: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b97c1>] select_task_rq_fair+0x61/0xad0
>
> stack backtrace:
> Pid: 9822, comm: kworker/2:0 Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff810b9b10>] select_task_rq_fair+0x3b0/0xad0
> [<ffffffff810b97c1>] ? select_task_rq_fair+0x61/0xad0
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff810b37f1>] ? try_to_wake_up+0x31/0x320
> [<ffffffff810b387e>] try_to_wake_up+0xbe/0x320
> [<ffffffff810b3af2>] default_wake_function+0x12/0x20
> [<ffffffff810a8b75>] __wake_up_common+0x55/0x90
> [<ffffffff810ab2b8>] ? complete+0x28/0x60
> [<ffffffff810ab2d7>] complete+0x47/0x60
> [<ffffffff81092422>] wq_barrier_func+0x12/0x20
> [<ffffffff81095a37>] process_one_work+0x207/0x780
> [<ffffffff810959c7>] ? process_one_work+0x197/0x780
> [<ffffffff81092410>] ? destroy_work_on_stack+0x20/0x20
> [<ffffffff81095fdc>] process_scheduled_works+0x2c/0x40
> [<ffffffff8109652a>] worker_thread+0x2fa/0x440
> [<ffffffff81096230>] ? rescuer_thread+0x240/0x240
> [<ffffffff8109d0cd>] kthread+0xed/0x100
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
> [<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
> [<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> kernel/sched/core.c:1216 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 5 locks on stack by kworker/2:0/9822:
> #0: blocked: (events){.+.+.+}, instance: ffff880147814fd8, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
> #1: blocked: ((&barr->work)){+.+...}, instance: ffff880083ad5d38, at: [<ffffffff810959c7>] process_one_work+0x197/0x780
> #2: held: (&x->wait#8){......}, instance: ffff88005a183e90, at: [<ffffffff810ab2b8>] complete+0x28/0x60
> #3: held: (&p->pi_lock){-.-.-.}, instance: ffff88003606cf78, at: [<ffffffff810b37f1>] try_to_wake_up+0x31/0x320
> #4: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810ae169>] ttwu_stat+0x49/0x280
>
> stack backtrace:
> Pid: 9822, comm: kworker/2:0 Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff810ae21c>] ttwu_stat+0xfc/0x280
> [<ffffffff810ae169>] ? ttwu_stat+0x49/0x280
> [<ffffffff810b3950>] try_to_wake_up+0x190/0x320
> [<ffffffff810b3af2>] default_wake_function+0x12/0x20
> [<ffffffff810a8b75>] __wake_up_common+0x55/0x90
> [<ffffffff810ab2b8>] ? complete+0x28/0x60
> [<ffffffff810ab2d7>] complete+0x47/0x60
> [<ffffffff81092422>] wq_barrier_func+0x12/0x20
> [<ffffffff81095a37>] process_one_work+0x207/0x780
> [<ffffffff810959c7>] ? process_one_work+0x197/0x780
> [<ffffffff81092410>] ? destroy_work_on_stack+0x20/0x20
> [<ffffffff81095fdc>] process_scheduled_works+0x2c/0x40
> [<ffffffff8109652a>] worker_thread+0x2fa/0x440
> [<ffffffff81096230>] ? rescuer_thread+0x240/0x240
> [<ffffffff8109d0cd>] kthread+0xed/0x100
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
> [<ffffffff816c9dac>] ret_from_fork+0x7c/0xb0
> [<ffffffff8109cfe0>] ? kthread_create_on_node+0x160/0x160
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 2 locks on stack by trinity-main/9791:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
> #1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
>
> stack backtrace:
> Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff81088de5>] do_notify_parent_cldstop+0x3c5/0x550
> [<ffffffff81088a4e>] ? do_notify_parent_cldstop+0x2e/0x550
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff8108906a>] ptrace_stop+0xfa/0x280
> [<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
> [<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
> [<ffffffff8108afe4>] ptrace_notify+0x64/0x90
> [<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
> [<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
> [<ffffffff816ca143>] ? int_very_careful+0x5/0xd
> [<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
> [<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> kernel/signal.c:1756 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 3 locks on stack by trinity-main/9791:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
> #1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
> #2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088aa3>] do_notify_parent_cldstop+0x83/0x550
>
> stack backtrace:
> Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff81088ed8>] do_notify_parent_cldstop+0x4b8/0x550
> [<ffffffff81088aa3>] ? do_notify_parent_cldstop+0x83/0x550
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff8108906a>] ptrace_stop+0xfa/0x280
> [<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
> [<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
> [<ffffffff8108afe4>] ptrace_notify+0x64/0x90
> [<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
> [<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
> [<ffffffff816ca143>] ? int_very_careful+0x5/0xd
> [<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
> [<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 3 locks on stack by trinity-main/9791:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
> #1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
> #2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088aa3>] do_notify_parent_cldstop+0x83/0x550
>
> stack backtrace:
> Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff81088d75>] do_notify_parent_cldstop+0x355/0x550
> [<ffffffff81088aa3>] ? do_notify_parent_cldstop+0x83/0x550
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff8108906a>] ptrace_stop+0xfa/0x280
> [<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
> [<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
> [<ffffffff8108afe4>] ptrace_notify+0x64/0x90
> [<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
> [<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
> [<ffffffff816ca143>] ? int_very_careful+0x5/0xd
> [<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
> [<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> kernel/signal.c:1756 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 3 locks on stack by trinity-main/9791:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
> #1: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088a4e>] do_notify_parent_cldstop+0x2e/0x550
> #2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff81088b37>] do_notify_parent_cldstop+0x117/0x550
>
> stack backtrace:
> Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff81088f18>] do_notify_parent_cldstop+0x4f8/0x550
> [<ffffffff81088b37>] ? do_notify_parent_cldstop+0x117/0x550
> [<ffffffff810de02e>] ? put_lock_stats.isra.25+0xe/0x40
> [<ffffffff8108906a>] ptrace_stop+0xfa/0x280
> [<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
> [<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
> [<ffffffff8108afe4>] ptrace_notify+0x64/0x90
> [<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
> [<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
> [<ffffffff816ca143>] ? int_very_careful+0x5/0xd
> [<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
> [<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> kernel/sched/fair.c:2660 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 5 locks on stack by trinity-main/9791:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81089031>] ptrace_stop+0xc1/0x280
> #1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff880107520820, at: [<ffffffff81088e16>] do_notify_parent_cldstop+0x3f6/0x550
> #2: held: (&sig->wait_chldexit){......}, instance: ffff880035679528, at: [<ffffffff810ab327>] __wake_up_sync_key+0x37/0x80
> #3: held: (&p->pi_lock){-.-.-.}, instance: ffff880035738658, at: [<ffffffff810b37f1>] try_to_wake_up+0x31/0x320
> #4: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff810b97c1>] select_task_rq_fair+0x61/0xad0
>
> stack backtrace:
> Pid: 9791, comm: trinity-main Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff810ba10c>] select_task_rq_fair+0x9ac/0xad0
> [<ffffffff810b97c1>] ? select_task_rq_fair+0x61/0xad0
> [<ffffffff810de049>] ? put_lock_stats.isra.25+0x29/0x40
> [<ffffffff810b37f1>] ? try_to_wake_up+0x31/0x320
> [<ffffffff810b387e>] try_to_wake_up+0xbe/0x320
> [<ffffffff810b3af2>] default_wake_function+0x12/0x20
> [<ffffffff8107586a>] child_wait_callback+0x5a/0x60
> [<ffffffff810a8b75>] __wake_up_common+0x55/0x90
> [<ffffffff810ab327>] ? __wake_up_sync_key+0x37/0x80
> [<ffffffff810ab343>] __wake_up_sync_key+0x53/0x80
> [<ffffffff81078b36>] __wake_up_parent+0x26/0x30
> [<ffffffff81088e4e>] do_notify_parent_cldstop+0x42e/0x550
> [<ffffffff81088b37>] ? do_notify_parent_cldstop+0x117/0x550
> [<ffffffff8108906a>] ptrace_stop+0xfa/0x280
> [<ffffffff81097e75>] ? __task_pid_nr_ns+0x5/0x1a0
> [<ffffffff81089287>] ptrace_do_notify+0x97/0xb0
> [<ffffffff8108afe4>] ptrace_notify+0x64/0x90
> [<ffffffff8102be46>] tracehook_report_syscall_exit+0x36/0xb0
> [<ffffffff810ab02c>] ? finish_task_switch+0x7c/0x120
> [<ffffffff816ca143>] ? int_very_careful+0x5/0xd
> [<ffffffff8102c269>] syscall_trace_leave+0x79/0x150
> [<ffffffff816ca17f>] int_check_syscall_exit_work+0x34/0x3d
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 3 locks on stack by strace/1365:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81077402>] do_wait+0xb2/0x370
> #1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8107652d>] wait_consider_task+0x11d/0xf40
> #2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff8107655a>] wait_consider_task+0x14a/0xf40
>
> stack backtrace:
> Pid: 1365, comm: strace Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff8107696c>] wait_consider_task+0x55c/0xf40
> [<ffffffff8107655a>] ? wait_consider_task+0x14a/0xf40
> [<ffffffff810774d3>] do_wait+0x183/0x370
> [<ffffffff81078d75>] sys_wait4+0x75/0xf0
> [<ffffffff81075810>] ? task_stopped_code+0x60/0x60
> [<ffffffff816ca088>] tracesys+0xe1/0xe6
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> kernel/exit.c:1350 suspicious rcu_dereference_check() usage!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 3 locks on stack by strace/1365:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81077402>] do_wait+0xb2/0x370
> #1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8107652d>] wait_consider_task+0x11d/0xf40
> #2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff8107655a>] wait_consider_task+0x14a/0xf40
>
> stack backtrace:
> Pid: 1365, comm: strace Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff810765ec>] wait_consider_task+0x1dc/0xf40
> [<ffffffff8107655a>] ? wait_consider_task+0x14a/0xf40
> [<ffffffff810774d3>] do_wait+0x183/0x370
> [<ffffffff81078d75>] sys_wait4+0x75/0xf0
> [<ffffffff81075810>] ? task_stopped_code+0x60/0x60
> [<ffffffff816ca088>] tracesys+0xe1/0xe6
>
> ===============================
> [ INFO: suspicious RCU usage. ]
> 3.7.0-rc1+ #31 Not tainted
> -------------------------------
> include/linux/rcupdate.h:759 rcu_read_unlock() used illegally while idle!
>
> other info that might help us debug this:
>
>
> RCU used illegally from idle CPU!
> rcu_scheduler_active = 1, debug_locks = 1
> RCU used illegally from extended quiescent state!
> 3 locks on stack by strace/1365:
> #0: held: (tasklist_lock){.+.+..}, instance: ffffffff81c05098, at: [<ffffffff81077402>] do_wait+0xb2/0x370
> #1: held: (&(&sighand->siglock)->rlock){-.-.-.}, instance: ffff8801435dce20, at: [<ffffffff8107652d>] wait_consider_task+0x11d/0xf40
> #2: blocked: (rcu_read_lock){.+.+..}, instance: ffffffff81c34e80, at: [<ffffffff8107655a>] wait_consider_task+0x14a/0xf40
>
> stack backtrace:
> Pid: 1365, comm: strace Not tainted 3.7.0-rc1+ #31
> Call Trace:
> [<ffffffff810dfb22>] lockdep_rcu_suspicious+0xe2/0x130
> [<ffffffff810769a5>] wait_consider_task+0x595/0xf40
> [<ffffffff8107655a>] ? wait_consider_task+0x14a/0xf40
> [<ffffffff810774d3>] do_wait+0x183/0x370
> [<ffffffff81078d75>] sys_wait4+0x75/0xf0
> [<ffffffff81075810>] ? task_stopped_code+0x60/0x60
> [<ffffffff816ca088>] tracesys+0xe1/0xe6
>

2012-10-24 18:05:03

by Oleg Nesterov

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On 10/24, Sergey Senozhatsky wrote:
>
> small question,
>
> ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> /* direct call from ptrace_stop()*/,
> should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> and ptrace chain?

Well, I don't really understand this magic... but why?

Until we return to user-mode this CPU should be in "in_user = false" state.

I am not sure I understand how it is guaranteed that rcu_user_exit() was
called... probably TIF_NOHZ should trigger the slow path and ensure that
syscall_trace_enter()->rcu_user_exit() will be called.

Oleg.

2012-10-24 18:23:37

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/24/12 20:06), Oleg Nesterov wrote:
> On 10/24, Sergey Senozhatsky wrote:
> >
> > small question,
> >
> > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> > /* direct call from ptrace_stop()*/,
> > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> > and ptrace chain?
>
> Well, I don't really understand this magic... but why?
>

My understanding is (I may be wrong) that we can schedule() from ptrace chain to
some arbitrary task, which will continue its execution from the point where RCU assumes
CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
(or similar) prior to schedule() call.

if so, does the same apply to in_user?

-ss

> Until we return to user-mode this CPU should be in "in_user = false" state.
>
> I am not sure I understand how it is guaranteed that rcu_user_exit() was
> called... probably TIF_NOHZ should trigger the slow path and ensure that
> syscall_trace_enter()->rcu_user_exit() will be called.
>
> Oleg.
>

2012-10-24 18:51:23

by Oleg Nesterov

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On 10/24, Sergey Senozhatsky wrote:
>
> On (10/24/12 20:06), Oleg Nesterov wrote:
> > On 10/24, Sergey Senozhatsky wrote:
> > >
> > > small question,
> > >
> > > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> > > /* direct call from ptrace_stop()*/,
> > > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> > > and ptrace chain?
> >
> > Well, I don't really understand this magic... but why?
> >
>
> My understanding is (I may be wrong)

Oh, I bet I have much more chances to be wrong ;)

> that we can schedule() from ptrace chain to

I don't understand how ptrace chain differs from, say, audit_syscall_exit().
There is nothing special in ptrace_stop() in this respect.

> some arbitrary task, which will continue its execution from the point where RCU assumes
> CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()

confused... of course it would be wrong if syscall_trace_leave() is
called when CPU is considered idle,

> if so, does the same apply to in_user?

Not sure we understand each other. But I believe that ->in_user should be
already false when syscall_trace_leave() is called.

Oleg.

2012-10-24 19:19:40

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/24/12 20:52), Oleg Nesterov wrote:
> On 10/24, Sergey Senozhatsky wrote:
> >
> > On (10/24/12 20:06), Oleg Nesterov wrote:
> > > On 10/24, Sergey Senozhatsky wrote:
> > > >
> > > > small question,
> > > >
> > > > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> > > > /* direct call from ptrace_stop()*/,
> > > > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> > > > and ptrace chain?
> > >
> > > Well, I don't really understand this magic... but why?
> > >
> >
> > My understanding is (I may be wrong)
>
> Oh, I bet I have much more chances to be wrong ;)
>
> > that we can schedule() from ptrace chain to
>
> I don't understand how ptrace chain differs from, say, audit_syscall_exit().
> There is nothing special in ptrace_stop() in this respect.
>

hm.

> > some arbitrary task, which will continue its execution from the point where RCU assumes
> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
>
> confused... of course it would be wrong if syscall_trace_leave() is
> called when CPU is considered idle,
>

sorry, I meant idle from RCU point of view:

int rcu_is_cpu_idle(void)
{
	return !rcu_dynticks_nesting;
}
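
A deliberately simplified, user-space model of how that state feeds the
"rcu_read_lock() used illegally while idle" checks in the traces above
(toy stand-ins only, not the kernel's real rcu_dynticks or lockdep
machinery) could look like this:

/* rcu_eqs_model.c -- toy model of the PROVE_RCU idle check, not kernel code */
#include <stdio.h>

static int rcu_dynticks_nesting = 1;	/* non-zero: RCU is watching this CPU */

static int rcu_is_cpu_idle(void)
{
	return !rcu_dynticks_nesting;	/* idle == extended quiescent state */
}

static void rcu_user_enter(void) { rcu_dynticks_nesting = 0; }	/* stop watching */
static void rcu_user_exit(void)  { rcu_dynticks_nesting = 1; }	/* watch again */

/* Read-side use from an extended quiescent state gets flagged. */
static void rcu_read_lock(void)
{
	if (rcu_is_cpu_idle())
		fprintf(stderr, "suspicious RCU usage: rcu_read_lock() used while idle!\n");
}

int main(void)
{
	rcu_user_enter();	/* CPU left in RCU user mode */
	rcu_read_lock();	/* later RCU use on this CPU -> warning, as in the traces */

	rcu_user_exit();	/* leave the quiescent state first... */
	rcu_read_lock();	/* ...and the same use is fine */
	return 0;
}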



> > if so, does the same apply to in_user?
>
> Not sure we understand each other. But I believe that ->in_user should be
> already false when syscall_trace_leave() is called.
>

oh, my apology. I was very wrong about this.

-ss

>
> Oleg.
>

2012-10-24 19:42:31

by Paul E. McKenney

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On Wed, Oct 24, 2012 at 12:17:16PM -0700, Sergey Senozhatsky wrote:
> On (10/24/12 20:52), Oleg Nesterov wrote:
> > On 10/24, Sergey Senozhatsky wrote:
> > >
> > > On (10/24/12 20:06), Oleg Nesterov wrote:
> > > > On 10/24, Sergey Senozhatsky wrote:
> > > > >
> > > > > small question,
> > > > >
> > > > > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> > > > > /* direct call from ptrace_stop()*/,
> > > > > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> > > > > and ptrace chain?
> > > >
> > > > Well, I don't really understand this magic... but why?
> > > >
> > >
> > > My understanding is (I may be wrong)
> >
> > Oh, I bet I have much more chances to be wrong ;)
> >
> > > that we can schedule() from ptrace chain to
> >
> > I don't understand how ptrace chain differs from, say, audit_syscall_exit().
> > There is nothing special in ptrace_stop() in this respect.
> >
>
> hm.
>
> > > some arbitrary task, which will continue its execution from the point where RCU assumes
> > > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> >
> > confused... of course it would be wrong if syscall_trace_leave() is
> > called when CPU is considered idle,
> >
>
> sorry, I meant idle from RCU point of view:
>
> int rcu_is_cpu_idle(void)
> {
> return !rcu_dynticks_nesting;
> }

Hmmm... This reproduces on UP builds, then?

> > > if so, does the same apply to in_user?
> >
> > Not sure we understand each other. But I believe that ->in_user should be
> > already false when syscall_trace_leave() is called.
>
> oh, my apology. I was very wrong about this.

Frederic, thoughts?

Thanx, Paul

2012-10-24 19:52:58

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/24/12 12:41), Paul E. McKenney wrote:
> On Wed, Oct 24, 2012 at 12:17:16PM -0700, Sergey Senozhatsky wrote:
> > On (10/24/12 20:52), Oleg Nesterov wrote:
> > > On 10/24, Sergey Senozhatsky wrote:
> > > >
> > > > On (10/24/12 20:06), Oleg Nesterov wrote:
> > > > > On 10/24, Sergey Senozhatsky wrote:
> > > > > >
> > > > > > small question,
> > > > > >
> > > > > > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> > > > > > /* direct call from ptrace_stop()*/,
> > > > > > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> > > > > > and ptrace chain?
> > > > >
> > > > > Well, I don't really understand this magic... but why?
> > > > >
> > > >
> > > > My understanding is (I may be wrong)
> > >
> > > Oh, I bet I have much more chances to be wrong ;)
> > >
> > > > that we can schedule() from ptrace chain to
> > >
> > > I don't understand how ptrace chain differs from, say, audit_syscall_exit().
> > > There is nothing special in ptrace_stop() in this respect.
> > >
> >
> > hm.
> >
> > > > some arbitrary task, which will continue its execution from the point where RCU assumes
> > > > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> > >
> > > confused... of course it would be wrong if syscall_trace_leave() is
> > > called when CPU is considered idle,
> > >
> >
> > sorry, I meant idle from RCU point of view:
> >
> > int rcu_is_cpu_idle(void)
> > {
> > return !rcu_dynticks_nesting;
> > }
>
> Hmmm... This reproduces on UP builds, then?
>

I'll compile UP build (will offlining of N-1 CPUs do the trick?).

-ss

> > > > if so, does the same apply to in_user?
> > >
> > > Not sure we understand each other. But I believe that ->in_user should be
> > > already false when syscall_trace_leave() is called.
> >
> > oh, my apology. I was very wrong about this.
>
> Frederic, thoughts?
>
> Thanx, Paul
>

2012-10-24 20:12:33

by Paul E. McKenney

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On Wed, Oct 24, 2012 at 12:50:33PM -0700, Sergey Senozhatsky wrote:
> On (10/24/12 12:41), Paul E. McKenney wrote:
> > On Wed, Oct 24, 2012 at 12:17:16PM -0700, Sergey Senozhatsky wrote:
> > > On (10/24/12 20:52), Oleg Nesterov wrote:
> > > > On 10/24, Sergey Senozhatsky wrote:
> > > > >
> > > > > On (10/24/12 20:06), Oleg Nesterov wrote:
> > > > > > On 10/24, Sergey Senozhatsky wrote:
> > > > > > >
> > > > > > > small question,
> > > > > > >
> > > > > > > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> > > > > > > /* direct call from ptrace_stop()*/,
> > > > > > > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> > > > > > > and ptrace chain?
> > > > > >
> > > > > > Well, I don't really understand this magic... but why?
> > > > > >
> > > > >
> > > > > My understanding is (I may be wrong)
> > > >
> > > > Oh, I bet I have much more chances to be wrong ;)
> > > >
> > > > > that we can schedule() from ptrace chain to
> > > >
> > > > I don't understand how ptrace chain differs from, say, audit_syscall_exit().
> > > > There is nothing special in ptrace_stop() in this respect.
> > > >
> > >
> > > hm.
> > >
> > > > > some arbitrary task, which will continue its execution from the point where RCU assumes
> > > > > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> > > >
> > > > confused... of course it would be wrong if syscall_trace_leave() is
> > > > called when CPU is considered idle,
> > > >
> > >
> > > sorry, I meant idle from RCU point of view:
> > >
> > > int rcu_is_cpu_idle(void)
> > > {
> > > return !rcu_dynticks_nesting;
> > > }
> >
> > Hmmm... This reproduces on UP builds, then?
>
> I'll compile UP build (will offlining of N-1 CPUs do the trick?).

Oh -- you quoted the Tiny RCU (CONFIG_SMP=n) variant of rcu_is_cpu_idle(),
so I just thought that you were reproducing on CONFIG_SMP=n. You would
have to actually rebuild the kernel to get the different version.

But never mind! RCU_USER_QS depends on CONFIG_SMP=y, so my question
was irrelevant.

Thanx, Paul

> -ss
>
> > > > > if so, does the same apply to in_user?
> > > >
> > > > Not sure we understand each other. But I believe that ->in_user should be
> > > > already false when syscall_trace_leave() is called.
> > >
> > > oh, my apology. I was very wrong about this.
> >
> > Frederic, thoughts?
> >
> > Thanx, Paul
> >
>

2012-10-24 20:18:08

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/24/12 13:11), Paul E. McKenney wrote:
> > > >
> > > > sorry, I meant idle from RCU point of view:
> > > >
> > > > int rcu_is_cpu_idle(void)
> > > > {
> > > > return !rcu_dynticks_nesting;
> > > > }
> > >
> > > Hmmm... This reproduces on UP builds, then?
> >
> > I'll compile UP build (will offlining of N-1 CPUs do the trick?).
>
> Oh -- you quoted the Tiny RCU (CONFIG_SMP=n) variant of rcu_is_cpu_idle(),

sorry, that was just an 'illustration'.

-ss

> so I just thought that you were reproducing on CONFIG_SMP=n. You would
> have to actually rebuild the kernel to get the different version.
>
> But never mind! RCU_USER_QS depends on CONFIG_SMP=y, so my question
> was irrelevant.
>
> Thanx, Paul
>

2012-10-24 22:32:18

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

First of all, thanks a lot for your report.

2012/10/24 Sergey Senozhatsky <[email protected]>:
> On (10/24/12 20:06), Oleg Nesterov wrote:
>> On 10/24, Sergey Senozhatsky wrote:
>> >
>> > small question,
>> >
>> > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
>> > /* direct call from ptrace_stop()*/,
>> > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
>> > and ptrace chain?
>>
>> Well, I don't really understand this magic... but why?
>>
>
> My understanding is (I may be wrong) that we can schedule() from ptrace chain to
> some arbitrary task, which will continue its execution from the point where RCU assumes
> CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> (or similar) prior to schedule() call.

Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
RCU idle mode. That's where the bug is. How do you manage to trigger
this bug?

>
> if so, does the same apply to in_user?

2012-10-25 05:53:20

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/25/12 00:32), Frederic Weisbecker wrote:
> First of all, thanks a lot for your report.
>
> 2012/10/24 Sergey Senozhatsky <[email protected]>:
> > On (10/24/12 20:06), Oleg Nesterov wrote:
> >> On 10/24, Sergey Senozhatsky wrote:
> >> >
> >> > small question,
> >> >
> >> > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
> >> > /* direct call from ptrace_stop()*/,
> >> > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
> >> > and ptrace chain?
> >>
> >> Well, I don't really understand this magic... but why?
> >>
> >
> > My understanding is (I may be wrong) that we can schedule() from ptrace chain to
> > some arbitrary task, which will continue its execution from the point where RCU assumes
> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> > (or similar) prior to schedule() call.
>
> Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
> RCU idle mode. That's where the bug is. How do you manage to trigger
> this bug?
>

strace -f <anything>


-ss

2012-10-25 05:57:55

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/25/12 00:32), Frederic Weisbecker wrote:
> > My understanding is (I may be wrong) that we can schedule() from ptrace chain to
> > some arbitrary task, which will continue its execution from the point where RCU assumes
> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> > (or similar) prior to schedule() call.
>
> Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
> RCU idle mode. That's where the bug is. How do you manage to trigger
> this bug?
>

just for note,
git bisect good v3.6

[ 199.897703] ===============================
[ 199.897706] [ INFO: suspicious RCU usage. ]
[ 199.897710] 3.6.0-dbg-06307-ga78562e-dirty #1379 Not tainted
[ 199.897713] -------------------------------
[ 199.897717] include/linux/rcupdate.h:738 rcu_read_lock() used illegally while idle!
[ 199.897719]
other info that might help us debug this:

[ 199.897724]
RCU used illegally from idle CPU!
rcu_scheduler_active = 1, debug_locks = 1
[ 199.897729] RCU used illegally from extended quiescent state!
[ 199.897732] 2 locks held by top/2396:
[ 199.897735] #0: (&rq->lock){-.-.-.}, at: [<ffffffff815c7ae9>] __schedule+0x119/0xb10
[ 199.897755] #1: (rcu_read_lock){.+.+..}, at: [<ffffffff81081f25>] cpuacct_charge+0x15/0x250
[ 199.897770]
stack backtrace:
[ 199.897775] Pid: 2396, comm: top Not tainted 3.6.0-dbg-06307-ga78562e-dirty #1379
[ 199.897779] Call Trace:
[ 199.897791] [<ffffffff810a7ed2>] lockdep_rcu_suspicious+0xe2/0x130
[ 199.897798] [<ffffffff810820cc>] cpuacct_charge+0x1bc/0x250
[ 199.897804] [<ffffffff81081f25>] ? cpuacct_charge+0x15/0x250
[ 199.897810] [<ffffffff815c7ae9>] ? __schedule+0x119/0xb10
[ 199.897818] [<ffffffff8108567c>] update_curr+0xec/0x230
[ 199.897825] [<ffffffff8108653c>] put_prev_task_fair+0x9c/0xf0
[ 199.897831] [<ffffffff815c7b7c>] __schedule+0x1ac/0xb10
[ 199.897841] [<ffffffff8131addd>] ? do_raw_spin_unlock+0x5d/0xb0
[ 199.897847] [<ffffffff810a6fdd>] ? trace_hardirqs_off+0xd/0x10
[ 199.897853] [<ffffffff815ca0c7>] ? _raw_spin_unlock_irqrestore+0x77/0x80
[ 199.897860] [<ffffffff8107f7ff>] ? try_to_wake_up+0x1ff/0x350
[ 199.897867] [<ffffffff810aac19>] ? __lock_acquire+0x3d9/0xb70
[ 199.897875] [<ffffffff8115c5b9>] ? kfree+0xa9/0x260
[ 199.897882] [<ffffffff810dc575>] ? __call_rcu+0x105/0x250
[ 199.897887] [<ffffffff8107c67a>] __cond_resched+0x2a/0x40
[ 199.897891] [<ffffffff815c855f>] _cond_resched+0x2f/0x40
[ 199.897898] [<ffffffff81182da8>] dput+0x128/0x1e0
[ 199.897902] [<ffffffff81169c78>] __fput+0x148/0x260
[ 199.897907] [<ffffffff8107981f>] ? finish_task_switch+0x3f/0x120
[ 199.897911] [<ffffffff81169e4e>] ____fput+0xe/0x10
[ 199.897917] [<ffffffff81067bbc>] task_work_run+0xbc/0xf0
[ 199.897923] [<ffffffff8105a4a9>] ptrace_notify+0x89/0x90
[ 199.897931] [<ffffffff8100fa1d>] syscall_trace_leave+0x8d/0x220
[ 199.897939] [<ffffffff815d39d1>] ? int_very_careful+0x5/0xd
[ 199.897944] [<ffffffff810ac555>] ? trace_hardirqs_on_caller+0x105/0x190
[ 199.897949] [<ffffffff815d3a0d>] int_check_syscall_exit_work+0x34/0x3d


-ss

2012-10-25 07:06:39

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

2012/10/25 Sergey Senozhatsky <[email protected]>:
> On (10/25/12 00:32), Frederic Weisbecker wrote:
>> First of all, thanks a lot for your report.
>>
>> 2012/10/24 Sergey Senozhatsky <[email protected]>:
>> > On (10/24/12 20:06), Oleg Nesterov wrote:
>> >> On 10/24, Sergey Senozhatsky wrote:
>> >> >
>> >> > small question,
>> >> >
>> >> > ptrace_notify() and forward calls are able to both indirectly and directly call schedule(),
>> >> > /* direct call from ptrace_stop()*/,
>> >> > should, in this case, rcu_user_enter() be called before tracehook_report_syscall_exit(regs, step)
>> >> > and ptrace chain?
>> >>
>> >> Well, I don't really understand this magic... but why?
>> >>
>> >
>> > My understanding is (I may be wrong) that we can schedule() from ptrace chain to
>> > some arbitrary task, which will continue its execution from the point where RCU assumes
>> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
>> > (or similar) prior to schedule() call.
>>
>> Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
>> RCU idle mode. That's where the bug is. How do you manage to trigger
>> this bug?
>>
>
> strace -f <anything>

I can't reproduce. Can you send me your config?

Thanks.

2012-10-25 07:44:14

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/25/12 09:06), Frederic Weisbecker wrote:
> >> > My understanding is (I may be wrong) that we can schedule() from ptrace chain to
> >> > some arbitrary task, which will continue its execution from the point where RCU assumes
> >> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> >> > (or similar) prior to schedule() call.
> >>
> >> Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
> >> RCU idle mode. That's where the bug is. How do you manage to trigger
> >> this bug?
> >>
> >
> > strace -f <anything>
>
> I can't reproduce. Can you send me your config?
>

sure, attached.


-ss


Attachments:
.config (79.51 kB)

2012-10-26 09:40:38

by Frederic Weisbecker

[permalink] [raw]
Subject: [PATCH] rcu: Fix unrecovered RCU user mode in syscall_trace_leave()

On x86-64 syscall exit, three non-exclusive events may happen,
looping in the following order:

1) Check if we need to reschedule for user preemption; if so, call
schedule_user()

2) Check if we have pending signals; if so, call do_notify_resume()

3) Check if we do syscall tracing; if so, call syscall_trace_leave()

However, syscall_trace_leave() has been written assuming it directly
follows the syscall and forgets about the possible 1st and 2nd steps
above.

Now schedule_user() and do_notify_resume() exit in RCU user mode,
because they are most likely to resume userspace immediately, and
this avoids an rcu_user_enter() call in the syscall fast path.

So by the time we call syscall_trace_leave(), we may well be in RCU
user mode. To fix this up, simply call rcu_user_exit() at the beginning
of this function.

This fixes some reported RCU uses in extended quiescent state.

Reported-by: Dave Jones <[email protected]>
Reported-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
---
arch/x86/kernel/ptrace.c | 7 +++++++
1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index b00b33a..eff5b8c 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -1511,6 +1511,13 @@ void syscall_trace_leave(struct pt_regs *regs)
 {
 	bool step;
 
+	/*
+	 * We may come here right after calling schedule_user()
+	 * or do_notify_resume(), in which case we can be in RCU
+	 * user mode.
+	 */
+	rcu_user_exit();
+
 	audit_syscall_exit(regs);
 
 	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
--
1.7.5.4
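
To make the ordering described in the changelog concrete, here is a rough,
self-contained user-space sketch of that exit-work loop; the function names
follow the changelog above, but the bodies are placeholders rather than the
real entry_64.S/ptrace.c code:

/* syscall_exit_model.c -- illustration of the changelog above, not kernel code */
#include <stdio.h>
#include <stdbool.h>

static bool in_rcu_user_mode;

static void rcu_user_enter(void) { in_rcu_user_mode = true; }
static void rcu_user_exit(void)  { in_rcu_user_mode = false; }

/* Step 1: may exit in RCU user mode, expecting an immediate return to userspace. */
static void schedule_user(void)
{
	rcu_user_exit();
	/* ...schedule()... */
	rcu_user_enter();
}

/* Step 2: same expectation as schedule_user(). */
static void do_notify_resume(void)
{
	/* ...signal delivery... */
	rcu_user_enter();
}

/* Step 3: uses RCU internally, so it must not run from RCU user mode. */
static void syscall_trace_leave(void)
{
	rcu_user_exit();	/* the one-line fix from the patch above */
	if (in_rcu_user_mode)
		fprintf(stderr, "suspicious RCU usage from extended quiescent state\n");
	/* ...audit_syscall_exit(), tracepoints, ptrace_notify()... */
}

int main(void)
{
	/* On syscall exit the three steps can run in this order before returning. */
	schedule_user();
	do_notify_resume();
	syscall_trace_leave();	/* without its rcu_user_exit(), this would warn */
	return 0;
}

With syscall_trace_leave() leaving RCU user mode first, the model stays
silent; dropping that call reproduces the class of "RCU used illegally
from extended quiescent state" splats at the top of this thread.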

2012-10-26 09:44:16

by Frederic Weisbecker

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

2012/10/25 Sergey Senozhatsky <[email protected]>:
> On (10/25/12 09:06), Frederic Weisbecker wrote:
>> >> > My understanding is (I may be wrong) that we can schedule() from ptrace chain to
>> >> > some arbitrary task, which will continue its execution from the point where RCU assumes
>> >> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
>> >> > (or similar) prior to schedule() call.
>> >>
>> >> Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
>> >> RCU idle mode. That's where the bug is. How do you manage to trigger
>> >> this bug?
>> >>
>> >
>> > strace -f <anything>
>>
>> I can't reproduce. Can you send me your config?
>>
>
> sure, attached.

Thanks.

Sergey, Dave,

Does the patch I just sent you fix the issue?

Patch is "[PATCH] rcu: Fix unrecovered RCU user mode in syscall_trace_leave()"

Thanks.

2012-10-26 11:19:15

by Sergey Senozhatsky

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On (10/26/12 11:44), Frederic Weisbecker wrote:
> Thanks.
>
> Sergey, Dave,
>
> Does the patch I just sent you fix the issue?
>
> Patch is "[PATCH] rcu: Fix unrecovered RCU user mode in syscall_trace_leave()"
>

good job, works fine for me.

Tested-by: Sergey Senozhatsky <[email protected]>


-ss

> Thanks.
>

2012-10-26 15:17:08

by Dave Jones

[permalink] [raw]
Subject: Re: lots of suspicious RCU traces

On Fri, Oct 26, 2012 at 11:44:11AM +0200, Frederic Weisbecker wrote:
> 2012/10/25 Sergey Senozhatsky <[email protected]>:
> > On (10/25/12 09:06), Frederic Weisbecker wrote:
> >> >> > My understanding is (I may be wrong) that we can schedule() from ptrace chain to
> >> >> > some arbitrary task, which will continue its execution from the point where RCU assumes
> >> >> > CPU as not idle, while CPU in fact still in idle state -- no one said rcu_idle_exit()
> >> >> > (or similar) prior to schedule() call.
> >> >>
> >> >> Yeah but when we are in syscall_trace_leave(), the CPU shouldn't be in
> >> >> RCU idle mode. That's where the bug is. How do you manage to trigger
> >> >> this bug?
> >> >
> >> > strace -f <anything>
> >> I can't reproduce. Can you send me your config?
> >
> > sure, attached.
>
> Thanks.
>
> Sergey, Dave,
>
> Does the patch I just sent you fix the issue?

I only saw this happening after hours of fuzzing.
Right now I keep running into other issues before that gets a chance to happen,
so I've not seen it in a while.

I'll apply it to my tree, and run with it though.

If I see it again, you'll be first to know :)

Dave