Hi All,
I am running a 2.6.38-rc4-next-20110215+ kernel on qemu x86_64 and the
following traces appear in the console:
Feb 15 15:00:24 lucid kernel: ------------[ cut here ]------------
Feb 15 15:00:24 lucid kernel: WARNING: at kernel/lockdep.c:2323
trace_hardirqs_on_caller+0xb9/0x16c()
Feb 15 15:00:24 lucid kernel: Hardware name: Bochs
Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Not tainted
2.6.38-rc4-next-20110215+ #74
Feb 15 15:00:24 lucid kernel: Call Trace:
Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff8102b8a5>] ?
warn_slowpath_common+0x7b/0x93
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
_raw_spin_unlock_irq+0x2b/0x30
Feb 15 15:00:24 lucid kernel: [<ffffffff8102b8d2>] ?
warn_slowpath_null+0x15/0x17
Feb 15 15:00:24 lucid kernel: [<ffffffff8104f796>] ?
trace_hardirqs_on_caller+0xb9/0x16c
Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
trace_hardirqs_on+0xd/0xf
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
_raw_spin_unlock_irq+0x2b/0x30
Feb 15 15:00:24 lucid kernel: [<ffffffff812e167e>] ?
do_ide_request+0x32/0x590
Feb 15 15:00:24 lucid kernel: [<ffffffff8124a708>] ? elv_insert+0x4f/0x144
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c79f>] ?
__blk_run_queue+0x44/0x6d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124eb04>] ?
flush_data_end_io+0x41/0x4d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124bfad>] ?
blk_finish_request+0x1c4/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c285>] ?
blk_end_bidi_request+0x3d/0x58
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c2d2>] ? blk_end_request+0xb/0xd
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1588>] ? ide_end_rq+0x44/0x4f
Feb 15 15:00:24 lucid kernel: [<ffffffff812e15c5>] ?
ide_complete_rq+0x32/0x46
Feb 15 15:00:24 lucid kernel: [<ffffffff812e992d>] ? ide_dma_intr+0x92/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff812e119d>] ? ide_intr+0x173/0x223
Feb 15 15:00:24 lucid kernel: [<ffffffff812e989b>] ? ide_dma_intr+0x0/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff8106efd8>] ?
handle_IRQ_event+0x20/0xa6
Feb 15 15:00:24 lucid kernel: [<ffffffff81070ba0>] ?
handle_edge_irq+0x101/0x14d
Feb 15 15:00:24 lucid kernel: [<ffffffff81004b7c>] ? handle_irq+0x82/0x8a
Feb 15 15:00:24 lucid kernel: [<ffffffff81004556>] ? do_IRQ+0x45/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c353>] ? ret_from_intr+0x0/0x12
Feb 15 15:00:24 lucid kernel: [<ffffffff810306d3>] ? __do_softirq+0x56/0x12b
Feb 15 15:00:24 lucid kernel: [<ffffffff810035aa>] ? call_softirq+0x1a/0x24
Feb 15 15:00:24 lucid kernel: [<ffffffff81004ab2>] ? do_softirq+0x36/0x7e
Feb 15 15:00:24 lucid kernel: [<ffffffff81030527>] ? irq_exit+0x42/0x44
Feb 15 15:00:24 lucid kernel: [<ffffffff810163e8>] ?
smp_apic_timer_interrupt+0x79/0x86
Feb 15 15:00:24 lucid kernel: [<ffffffff810031d3>] ?
apic_timer_interrupt+0x13/0x20
Feb 15 15:00:24 lucid kernel: <EOI> [<ffffffff810a26a0>] ?
kmem_cache_free+0xbc/0xc5
Feb 15 15:00:24 lucid kernel: [<ffffffff810995d9>] ? anon_vma_free+0x13/0x15
Feb 15 15:00:24 lucid kernel: [<ffffffff81099646>] ?
unlink_anon_vmas+0x6b/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8109324b>] ? free_pgtables+0x50/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff81094ebe>] ? exit_mmap+0xb1/0x101
Feb 15 15:00:24 lucid kernel: [<ffffffff810298d4>] ? mmput+0x3e/0x99
Feb 15 15:00:24 lucid kernel: [<ffffffff810b17a7>] ?
flush_old_exec+0x4d6/0x5b0
Feb 15 15:00:24 lucid kernel: [<ffffffff810e7aeb>] ?
load_elf_binary+0x29c/0x1651
Feb 15 15:00:24 lucid kernel: [<ffffffff810b1159>] ?
search_binary_handler+0x14b/0x1c6
Feb 15 15:00:24 lucid kernel: [<ffffffff810b10b4>] ?
search_binary_handler+0xa6/0x1c6
Feb 15 15:00:24 lucid kernel: [<ffffffff810e784f>] ?
load_elf_binary+0x0/0x1651
Feb 15 15:00:24 lucid kernel: [<ffffffff810b2a02>] ? do_execve+0x163/0x259
Feb 15 15:00:24 lucid kernel: [<ffffffff81009071>] ? sys_execve+0x3e/0x55
Feb 15 15:00:24 lucid kernel: [<ffffffff81002c9b>] ? stub_execve+0x6b/0xc0
Feb 15 15:00:24 lucid kernel: ---[ end trace d9437fca559c1042 ]---
Then this message is followed by:
Feb 15 15:00:24 lucid kernel: INFO: lockdep is turned off.
Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Tainted:
G W 2.6.38-rc4-next-20110215+ #74
Feb 15 15:00:24 lucid kernel: Call Trace:
Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff81026646>] ?
__might_sleep+0xec/0xf1
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1691>] ?
do_ide_request+0x45/0x590
Feb 15 15:00:24 lucid kernel: [<ffffffff8124a708>] ? elv_insert+0x4f/0x144
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c79f>] ?
__blk_run_queue+0x44/0x6d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124eb04>] ?
flush_data_end_io+0x41/0x4d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124bfad>] ?
blk_finish_request+0x1c4/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c285>] ?
blk_end_bidi_request+0x3d/0x58
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c2d2>] ? blk_end_request+0xb/0xd
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1588>] ? ide_end_rq+0x44/0x4f
Feb 15 15:00:24 lucid kernel: [<ffffffff812e15c5>] ?
ide_complete_rq+0x32/0x46
Feb 15 15:00:24 lucid kernel: [<ffffffff812e992d>] ? ide_dma_intr+0x92/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff812e119d>] ? ide_intr+0x173/0x223
Feb 15 15:00:24 lucid kernel: [<ffffffff812e989b>] ? ide_dma_intr+0x0/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff8106efd8>] ?
handle_IRQ_event+0x20/0xa6
Feb 15 15:00:24 lucid kernel: [<ffffffff81070ba0>] ?
handle_edge_irq+0x101/0x14d
Feb 15 15:00:24 lucid kernel: [<ffffffff81004b7c>] ? handle_irq+0x82/0x8a
Feb 15 15:00:24 lucid kernel: [<ffffffff81004556>] ? do_IRQ+0x45/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c353>] ? ret_from_intr+0x0/0x12
Feb 15 15:00:24 lucid kernel: [<ffffffff810306d3>] ? __do_softirq+0x56/0x12b
Feb 15 15:00:24 lucid kernel: [<ffffffff810035aa>] ? call_softirq+0x1a/0x24
Feb 15 15:00:24 lucid kernel: [<ffffffff81004ab2>] ? do_softirq+0x36/0x7e
Feb 15 15:00:24 lucid kernel: [<ffffffff81030527>] ? irq_exit+0x42/0x44
Feb 15 15:00:24 lucid kernel: [<ffffffff810163e8>] ?
smp_apic_timer_interrupt+0x79/0x86
Feb 15 15:00:24 lucid kernel: [<ffffffff810031d3>] ?
apic_timer_interrupt+0x13/0x20
Feb 15 15:00:24 lucid kernel: <EOI> [<ffffffff810a26a0>] ?
kmem_cache_free+0xbc/0xc5
Feb 15 15:00:24 lucid kernel: [<ffffffff810995d9>] ? anon_vma_free+0x13/0x15
Feb 15 15:00:24 lucid kernel: [<ffffffff81099646>] ?
unlink_anon_vmas+0x6b/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8109324b>] ? free_pgtables+0x50/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff81094ebe>] ? exit_mmap+0xb1/0x101
Feb 15 15:00:24 lucid kernel: [<ffffffff810298d4>] ? mmput+0x3e/0x99
Feb 15 15:00:24 lucid kernel: [<ffffffff810b17a7>] ?
flush_old_exec+0x4d6/0x5b0
Feb 15 15:00:24 lucid kernel: [<ffffffff810e7aeb>] ?
load_elf_binary+0x29c/0x1651
Feb 15 15:00:24 lucid kernel: [<ffffffff810b1159>] ?
search_binary_handler+0x14b/0x1c6
Feb 15 15:00:24 lucid kernel: [<ffffffff810b10b4>] ?
search_binary_handler+0xa6/0x1c6
Feb 15 15:00:24 lucid kernel: [<ffffffff810e784f>] ?
load_elf_binary+0x0/0x1651
Feb 15 15:00:24 lucid kernel: [<ffffffff810b2a02>] ? do_execve+0x163/0x259
Feb 15 15:00:24 lucid kernel: [<ffffffff81009071>] ? sys_execve+0x3e/0x55
Feb 15 15:00:24 lucid kernel: [<ffffffff81002c9b>] ? stub_execve+0x6b/0xc0
Feb 15 15:00:24 lucid kernel: INFO: lockdep is turned off.
Feb 15 15:00:24 lucid kernel: Pid: 1478, comm: udevd Tainted: G
W 2.6.38-rc4-next-20110215+ #74
Feb 15 15:00:24 lucid kernel: Call Trace:
Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff81026646>] ?
__might_sleep+0xec/0xf1
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1691>] ?
do_ide_request+0x45/0x590
Feb 15 15:00:24 lucid kernel: [<ffffffff8104e6bc>] ?
trace_hardirqs_off+0xd/0xf
Feb 15 15:00:24 lucid kernel: [<ffffffff8124a708>] ? elv_insert+0x4f/0x144
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c79f>] ?
__blk_run_queue+0x44/0x6d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124eb04>] ?
flush_data_end_io+0x41/0x4d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124bfad>] ?
blk_finish_request+0x1c4/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c285>] ?
blk_end_bidi_request+0x3d/0x58
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c2d2>] ? blk_end_request+0xb/0xd
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1588>] ? ide_end_rq+0x44/0x4f
Feb 15 15:00:24 lucid kernel: [<ffffffff812e15c5>] ?
ide_complete_rq+0x32/0x46
Feb 15 15:00:24 lucid kernel: [<ffffffff812e992d>] ? ide_dma_intr+0x92/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff812e119d>] ? ide_intr+0x173/0x223
Feb 15 15:00:24 lucid kernel: [<ffffffff812e989b>] ? ide_dma_intr+0x0/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff8106efd8>] ?
handle_IRQ_event+0x20/0xa6
Feb 15 15:00:24 lucid kernel: [<ffffffff81070ba0>] ?
handle_edge_irq+0x101/0x14d
Feb 15 15:00:24 lucid kernel: [<ffffffff81004b7c>] ? handle_irq+0x82/0x8a
Feb 15 15:00:24 lucid kernel: [<ffffffff81004556>] ? do_IRQ+0x45/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c353>] ? ret_from_intr+0x0/0x12
Feb 15 15:00:24 lucid kernel: <EOI> [<ffffffff8101e30b>] ?
kernel_map_pages+0x139/0x14c
Feb 15 15:00:24 lucid kernel: [<ffffffff81080bbd>] ?
get_page_from_freelist+0x3ad/0x4fc
Feb 15 15:00:24 lucid kernel: [<ffffffff81080ebc>] ?
__alloc_pages_nodemask+0x1b0/0x717
Feb 15 15:00:24 lucid kernel: [<ffffffff8108fb59>] ? do_wp_page+0x32c/0x6ed
Feb 15 15:00:24 lucid kernel: [<ffffffff8109050a>] ?
handle_pte_fault+0x5f0/0x635
Feb 15 15:00:24 lucid kernel: [<ffffffff81020167>] ? pte_alloc_one+0x1a/0x3f
Feb 15 15:00:24 lucid kernel: [<ffffffff81091434>] ? __pte_alloc+0x19/0xb6
Feb 15 15:00:24 lucid kernel: [<ffffffff810915ba>] ?
handle_mm_fault+0xe9/0x131
Feb 15 15:00:24 lucid kernel: [<ffffffff8101d0c1>] ?
do_page_fault+0x366/0x38b
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c103>] ?
_raw_spin_unlock+0x26/0x2a
Feb 15 15:00:24 lucid kernel: [<ffffffff810bda1c>] ? dput+0xde/0x162
Feb 15 15:00:24 lucid kernel: [<ffffffff810c4daa>] ? mntput+0x21/0x23
Feb 15 15:00:24 lucid kernel: [<ffffffff810b4a7b>] ? path_put+0x1d/0x21
Feb 15 15:00:24 lucid kernel: [<ffffffff810cd885>] ? do_utimes+0xbd/0xd2
Feb 15 15:00:24 lucid kernel: [<ffffffff8146b92d>] ?
trace_hardirqs_off_thunk+0x3a/0x3c
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c5df>] ? page_fault+0x1f/0x30
Feb 15 15:00:24 lucid kernel: INFO: lockdep is turned off.
Feb 15 15:00:24 lucid kernel: Pid: 1, comm: init Tainted: G W
2.6.38-rc4-next-20110215+ #74
Feb 15 15:00:24 lucid kernel: Call Trace:
Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff81026646>] ?
__might_sleep+0xec/0xf1
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1691>] ?
do_ide_request+0x45/0x590
Feb 15 15:00:24 lucid kernel: [<ffffffff8104e6bc>] ?
trace_hardirqs_off+0xd/0xf
Feb 15 15:00:24 lucid kernel: [<ffffffff8124a708>] ? elv_insert+0x4f/0x144
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c79f>] ?
__blk_run_queue+0x44/0x6d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124eb04>] ?
flush_data_end_io+0x41/0x4d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124bfad>] ?
blk_finish_request+0x1c4/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c285>] ?
blk_end_bidi_request+0x3d/0x58
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c2d2>] ? blk_end_request+0xb/0xd
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1588>] ? ide_end_rq+0x44/0x4f
Feb 15 15:00:24 lucid kernel: [<ffffffff812e15c5>] ?
ide_complete_rq+0x32/0x46
Feb 15 15:00:24 lucid kernel: [<ffffffff812e992d>] ? ide_dma_intr+0x92/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff812e119d>] ? ide_intr+0x173/0x223
Feb 15 15:00:24 lucid kernel: [<ffffffff812e989b>] ? ide_dma_intr+0x0/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff8106efd8>] ?
handle_IRQ_event+0x20/0xa6
Feb 15 15:00:24 lucid kernel: [<ffffffff81070ba0>] ?
handle_edge_irq+0x101/0x14d
Feb 15 15:00:24 lucid kernel: [<ffffffff81004b7c>] ? handle_irq+0x82/0x8a
Feb 15 15:00:24 lucid kernel: [<ffffffff81004556>] ? do_IRQ+0x45/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c353>] ? ret_from_intr+0x0/0x12
Feb 15 15:00:24 lucid kernel: <EOI> [<ffffffff810292a0>] ? T.991+0x7a/0xb6
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
_raw_spin_unlock_irq+0x2b/0x30
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c099>] ?
_raw_spin_unlock_irq+0x2d/0x30
Feb 15 15:00:24 lucid kernel: [<ffffffff810292a0>] ? T.991+0x7a/0xb6
Feb 15 15:00:24 lucid kernel: [<ffffffff81029262>] ? T.991+0x3c/0xb6
Feb 15 15:00:24 lucid kernel: [<ffffffff81469d8f>] ? schedule+0x490/0x4d6
Feb 15 15:00:24 lucid kernel: [<ffffffff8101d0a7>] ?
do_page_fault+0x34c/0x38b
Feb 15 15:00:24 lucid kernel: [<ffffffff8108ec48>] ? might_fault+0x4c/0x9c
Feb 15 15:00:24 lucid kernel: [<ffffffff8146b964>] ?
lockdep_sys_exit_thunk+0x35/0x67
Feb 15 15:00:24 lucid kernel: [<ffffffff8146b8ee>] ?
trace_hardirqs_on_thunk+0x3a/0x3f
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c442>] ?
retint_careful+0x12/0x2d
Feb 15 15:00:24 lucid kernel: INFO: lockdep is turned off.
Feb 15 15:00:24 lucid kernel: Pid: 1439, comm: udevd Tainted: G
W 2.6.38-rc4-next-20110215+ #74
Feb 15 15:00:24 lucid kernel: Call Trace:
Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff81026646>] ?
__might_sleep+0xec/0xf1
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1691>] ?
do_ide_request+0x45/0x590
Feb 15 15:00:24 lucid kernel: [<ffffffff8104e6bc>] ?
trace_hardirqs_off+0xd/0xf
Feb 15 15:00:24 lucid kernel: [<ffffffff8124a708>] ? elv_insert+0x4f/0x144
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c79f>] ?
__blk_run_queue+0x44/0x6d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124eb04>] ?
flush_data_end_io+0x41/0x4d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124bfad>] ?
blk_finish_request+0x1c4/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c285>] ?
blk_end_bidi_request+0x3d/0x58
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c2d2>] ? blk_end_request+0xb/0xd
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1588>] ? ide_end_rq+0x44/0x4f
Feb 15 15:00:24 lucid kernel: [<ffffffff812e15c5>] ?
ide_complete_rq+0x32/0x46
Feb 15 15:00:24 lucid kernel: [<ffffffff812e992d>] ? ide_dma_intr+0x92/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff812e119d>] ? ide_intr+0x173/0x223
Feb 15 15:00:24 lucid kernel: [<ffffffff812e989b>] ? ide_dma_intr+0x0/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff8106efd8>] ?
handle_IRQ_event+0x20/0xa6
Feb 15 15:00:24 lucid kernel: [<ffffffff81070ba0>] ?
handle_edge_irq+0x101/0x14d
Feb 15 15:00:24 lucid kernel: [<ffffffff81004b7c>] ? handle_irq+0x82/0x8a
Feb 15 15:00:24 lucid kernel: [<ffffffff81004556>] ? do_IRQ+0x45/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c353>] ? ret_from_intr+0x0/0x12
Feb 15 15:00:24 lucid kernel: <EOI> [<ffffffff810bda1c>] ? dput+0xde/0x162
Feb 15 15:00:24 lucid kernel: [<ffffffff810b7973>] ?
link_path_walk+0x9b9/0xa30
Feb 15 15:00:24 lucid kernel: [<ffffffff810b7973>] ?
link_path_walk+0x9b9/0xa30
Feb 15 15:00:24 lucid kernel: [<ffffffff810b6a9d>] ?
path_init_rcu+0x37/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff810b7c41>] ?
do_path_lookup+0x4a/0x107
Feb 15 15:00:24 lucid kernel: [<ffffffff810b871e>] ? user_path_at+0x51/0x8e
Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
trace_hardirqs_on+0xd/0xf
Feb 15 15:00:24 lucid kernel: [<ffffffff810bd97a>] ? dput+0x3c/0x162
Feb 15 15:00:24 lucid kernel: [<ffffffff810afeef>] ? vfs_fstatat+0x3c/0x67
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c103>] ?
_raw_spin_unlock+0x26/0x2a
Feb 15 15:00:24 lucid kernel: [<ffffffff810afff3>] ? vfs_stat+0x16/0x18
Feb 15 15:00:24 lucid kernel: [<ffffffff810b000f>] ? sys_newstat+0x1a/0x38
Feb 15 15:00:24 lucid kernel: [<ffffffff810b4a7b>] ? path_put+0x1d/0x21
Feb 15 15:00:24 lucid kernel: [<ffffffff8146b8ee>] ?
trace_hardirqs_on_thunk+0x3a/0x3f
Feb 15 15:00:24 lucid kernel: [<ffffffff810027f8>] ?
system_call_fastpath+0x16/0x1b
Feb 15 15:00:24 lucid kernel: IPv4 FIB: Using LC-trie version 0.409
Feb 15 15:00:24 lucid kernel: INFO: lockdep is turned off.
Feb 15 15:00:24 lucid kernel: Pid: 1336, comm: plymouthd Tainted:
G W 2.6.38-rc4-next-20110215+ #74
Feb 15 15:00:24 lucid kernel: Call Trace:
Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff81026646>] ?
__might_sleep+0xec/0xf1
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1691>] ?
do_ide_request+0x45/0x590
Feb 15 15:00:24 lucid kernel: [<ffffffff8104e6bc>] ?
trace_hardirqs_off+0xd/0xf
Feb 15 15:00:24 lucid kernel: [<ffffffff8124a708>] ? elv_insert+0x4f/0x144
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c79f>] ?
__blk_run_queue+0x44/0x6d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124eb04>] ?
flush_data_end_io+0x41/0x4d
Feb 15 15:00:24 lucid kernel: [<ffffffff8124bfad>] ?
blk_finish_request+0x1c4/0x1f9
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c285>] ?
blk_end_bidi_request+0x3d/0x58
Feb 15 15:00:24 lucid kernel: [<ffffffff8124c2d2>] ? blk_end_request+0xb/0xd
Feb 15 15:00:24 lucid kernel: [<ffffffff812e1588>] ? ide_end_rq+0x44/0x4f
Feb 15 15:00:24 lucid kernel: [<ffffffff812e15c5>] ?
ide_complete_rq+0x32/0x46
Feb 15 15:00:24 lucid kernel: [<ffffffff812e992d>] ? ide_dma_intr+0x92/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff812e119d>] ? ide_intr+0x173/0x223
Feb 15 15:00:24 lucid kernel: [<ffffffff812e989b>] ? ide_dma_intr+0x0/0xd5
Feb 15 15:00:24 lucid kernel: [<ffffffff8106efd8>] ?
handle_IRQ_event+0x20/0xa6
Feb 15 15:00:24 lucid kernel: [<ffffffff81070ba0>] ?
handle_edge_irq+0x101/0x14d
Feb 15 15:00:24 lucid kernel: [<ffffffff81004b7c>] ? handle_irq+0x82/0x8a
Feb 15 15:00:24 lucid kernel: [<ffffffff81004556>] ? do_IRQ+0x45/0xa5
Feb 15 15:00:24 lucid kernel: [<ffffffff8146c353>] ? ret_from_intr+0x0/0x12
Feb 15 15:00:24 lucid kernel: [<ffffffff810306d0>] ? __do_softirq+0x53/0x12b
Feb 15 15:00:24 lucid kernel: [<ffffffff810306d3>] ? __do_softirq+0x56/0x12b
Feb 15 15:00:24 lucid kernel: [<ffffffff810035aa>] ? call_softirq+0x1a/0x24
Feb 15 15:00:24 lucid kernel: [<ffffffff81004ab2>] ? do_softirq+0x36/0x7e
Feb 15 15:00:24 lucid kernel: [<ffffffff81030527>] ? irq_exit+0x42/0x44
Feb 15 15:00:24 lucid kernel: [<ffffffff810163e8>] ?
smp_apic_timer_interrupt+0x79/0x86
Feb 15 15:00:24 lucid kernel: [<ffffffff810031d3>] ?
apic_timer_interrupt+0x13/0x20
Feb 15 15:00:24 lucid kernel: <EOI> [<ffffffff81002828>] ?
sysret_check+0x26/0x60
etc ...
Has anyone experienced the same problem? Is there a fix for this?
Thanks in advance
-- Daniel
On Tue, Feb 15, 2011 at 10:42 PM, Daniel Lezcano <[email protected]> wrote:
> Hi All,
>
> I am running a 2.6.38-rc4-next-20110215+ kernel on qemu x86_64 and the
> following traces appear in the console:
>
> Feb 15 15:00:24 lucid kernel: ------------[ cut here ]------------
> Feb 15 15:00:24 lucid kernel: WARNING: at kernel/lockdep.c:2323
> trace_hardirqs_on_caller+0xb9/0x16c()
> Feb 15 15:00:24 lucid kernel: Hardware name: Bochs
> Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Not tainted
> 2.6.38-rc4-next-20110215+ #74
> Feb 15 15:00:24 lucid kernel: Call Trace:
> Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff8102b8a5>] ?
> warn_slowpath_common+0x7b/0x93
> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
> _raw_spin_unlock_irq+0x2b/0x30
> Feb 15 15:00:24 lucid kernel: [<ffffffff8102b8d2>] ?
> warn_slowpath_null+0x15/0x17
> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f796>] ?
> trace_hardirqs_on_caller+0xb9/0x16c
> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
> trace_hardirqs_on+0xd/0xf
> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
> _raw_spin_unlock_irq+0x2b/0x30
> Feb 15 15:00:24 lucid kernel: [<ffffffff812e167e>] ?
> do_ide_request+0x32/0x590
Seems related to IDE SUBSYSTEM
Cc'ing Dave and [email protected]
Thanks,
Yong
--
Only stand for myself
From: Yong Zhang <[email protected]>
Date: Thu, 17 Feb 2011 09:37:30 +0800
> On Tue, Feb 15, 2011 at 10:42 PM, Daniel Lezcano <[email protected]> wrote:
>> Hi All,
>>
>> I am running a 2.6.38-rc4-next-20110215+ kernel on qemu x86_64 and the
>> following traces appear in the console:
>>
>> Feb 15 15:00:24 lucid kernel: ------------[ cut here ]------------
>> Feb 15 15:00:24 lucid kernel: WARNING: at kernel/lockdep.c:2323
>> trace_hardirqs_on_caller+0xb9/0x16c()
>> Feb 15 15:00:24 lucid kernel: Hardware name: Bochs
>> Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Not tainted
>> 2.6.38-rc4-next-20110215+ #74
>> Feb 15 15:00:24 lucid kernel: Call Trace:
>> Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff8102b8a5>] ?
>> warn_slowpath_common+0x7b/0x93
>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
>> _raw_spin_unlock_irq+0x2b/0x30
>> Feb 15 15:00:24 lucid kernel: [<ffffffff8102b8d2>] ?
>> warn_slowpath_null+0x15/0x17
>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f796>] ?
>> trace_hardirqs_on_caller+0xb9/0x16c
>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
>> trace_hardirqs_on+0xd/0xf
>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
>> _raw_spin_unlock_irq+0x2b/0x30
>> Feb 15 15:00:24 lucid kernel: [<ffffffff812e167e>] ?
>> do_ide_request+0x32/0x590
>
> Seems related to IDE SUBSYSTEM
Which hasn't had any changes in the past release.
On Thu, Feb 17, 2011 at 10:03 AM, David Miller <[email protected]> wrote:
> From: Yong Zhang <[email protected]>
> Date: Thu, 17 Feb 2011 09:37:30 +0800
>
>> On Tue, Feb 15, 2011 at 10:42 PM, Daniel Lezcano <[email protected]> wrote:
>>> Hi All,
>>>
>>> I am running a 2.6.38-rc4-next-20110215+ kernel on qemu x86_64 and the
>>> following traces appear in the console:
>>>
>>> Feb 15 15:00:24 lucid kernel: ------------[ cut here ]------------
>>> Feb 15 15:00:24 lucid kernel: WARNING: at kernel/lockdep.c:2323
>>> trace_hardirqs_on_caller+0xb9/0x16c()
>>> Feb 15 15:00:24 lucid kernel: Hardware name: Bochs
>>> Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Not tainted
>>> 2.6.38-rc4-next-20110215+ #74
>>> Feb 15 15:00:24 lucid kernel: Call Trace:
>>> Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff8102b8a5>] ?
>>> warn_slowpath_common+0x7b/0x93
>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
>>> _raw_spin_unlock_irq+0x2b/0x30
>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8102b8d2>] ?
>>> warn_slowpath_null+0x15/0x17
>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f796>] ?
>>> trace_hardirqs_on_caller+0xb9/0x16c
>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
>>> trace_hardirqs_on+0xd/0xf
>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
>>> _raw_spin_unlock_irq+0x2b/0x30
>>> Feb 15 15:00:24 lucid kernel: [<ffffffff812e167e>] ?
>>> do_ide_request+0x32/0x590
>>
>> Seems related to IDE SUBSYSTEM
>
> Which hasn't had any changes in the past release.
OK.
Cc'ing Tejun Heo
For the back trace, I think __blk_run_queue() is the weak link.
As the comment on __blk_run_queue() says, it must be called
with the queue lock held and interrupts disabled. The lock
is taken via spin_lock_irqsave(q->queue_lock, flags); in
blk_end_bidi_request().
But do_ide_request() then releases the lock via
spin_unlock_irq(q->queue_lock); which leaves the interrupt
state inconsistent.
BTW, do_ide_request() also calls might_sleep(), and that warning
is triggered in Daniel's log as well.
Thanks,
Yong
--
Only stand for myself
Hello,
On Thu, Feb 17, 2011 at 10:43:57AM +0800, Yong Zhang wrote:
> On Thu, Feb 17, 2011 at 10:03 AM, David Miller <[email protected]> wrote:
> > From: Yong Zhang <[email protected]>
> > Date: Thu, 17 Feb 2011 09:37:30 +0800
> >
> >> On Tue, Feb 15, 2011 at 10:42 PM, Daniel Lezcano <[email protected]> wrote:
> >>> Hi All,
> >>>
> >>> I am running a 2.6.38-rc4-next-20110215+ kernel on qemu x86_64 and the
> >>> following traces appear in the console:
> >>>
> >>> Feb 15 15:00:24 lucid kernel: ------------[ cut here ]------------
> >>> Feb 15 15:00:24 lucid kernel: WARNING: at kernel/lockdep.c:2323
> >>> trace_hardirqs_on_caller+0xb9/0x16c()
> >>> Feb 15 15:00:24 lucid kernel: Hardware name: Bochs
> >>> Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Not tainted
> >>> 2.6.38-rc4-next-20110215+ #74
> >>> Feb 15 15:00:24 lucid kernel: Call Trace:
> >>> Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff8102b8a5>] ?
> >>> warn_slowpath_common+0x7b/0x93
> >>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
> >>> _raw_spin_unlock_irq+0x2b/0x30
> >>> Feb 15 15:00:24 lucid kernel: [<ffffffff8102b8d2>] ?
> >>> warn_slowpath_null+0x15/0x17
> >>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f796>] ?
> >>> trace_hardirqs_on_caller+0xb9/0x16c
> >>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
> >>> trace_hardirqs_on+0xd/0xf
> >>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
> >>> _raw_spin_unlock_irq+0x2b/0x30
> >>> Feb 15 15:00:24 lucid kernel: [<ffffffff812e167e>] ?
> >>> do_ide_request+0x32/0x590
> >>
> >> Seems related to IDE SUBSYSTEM
> >
> > Which hasn't had any changes in the past release.
>
> OK.
>
> Cc'ing Tejun Heo
>
> For the back trace, I think __blk_run_queue() is the weak link.
> As the comment on __blk_run_queue() says, it must be called
> with the queue lock held and interrupts disabled. The lock
> is taken via spin_lock_irqsave(q->queue_lock, flags); in
> blk_end_bidi_request().
>
> But do_ide_request() then releases the lock via
> spin_unlock_irq(q->queue_lock); which leaves the interrupt
> state inconsistent.
>
> BTW, do_ide_request() also calls might_sleep(), and that warning
> is triggered in Daniel's log as well.
This seems to be the same problem Jan reported and fixed by the
following patches.
http://article.gmane.org/gmane.linux.kernel/1101766/raw
http://article.gmane.org/gmane.linux.ide/48819/raw
Can you please test whether these two patches fix the problem?
Thanks.
--
tejun
On 02/17/2011 12:28 PM, Tejun Heo wrote:
> Hello,
>
> On Thu, Feb 17, 2011 at 10:43:57AM +0800, Yong Zhang wrote:
>> On Thu, Feb 17, 2011 at 10:03 AM, David Miller <[email protected]> wrote:
>>> From: Yong Zhang <[email protected]>
>>> Date: Thu, 17 Feb 2011 09:37:30 +0800
>>>
>>>> On Tue, Feb 15, 2011 at 10:42 PM, Daniel Lezcano <[email protected]> wrote:
>>>>> Hi All,
>>>>>
>>>>> I am running a 2.6.38-rc4-next-20110215+ kernel on qemu x86_64 and the
>>>>> following traces appear in the console:
>>>>>
>>>>> Feb 15 15:00:24 lucid kernel: ------------[ cut here ]------------
>>>>> Feb 15 15:00:24 lucid kernel: WARNING: at kernel/lockdep.c:2323
>>>>> trace_hardirqs_on_caller+0xb9/0x16c()
>>>>> Feb 15 15:00:24 lucid kernel: Hardware name: Bochs
>>>>> Feb 15 15:00:24 lucid kernel: Pid: 1477, comm: mountall Not tainted
>>>>> 2.6.38-rc4-next-20110215+ #74
>>>>> Feb 15 15:00:24 lucid kernel: Call Trace:
>>>>> Feb 15 15:00:24 lucid kernel: <IRQ> [<ffffffff8102b8a5>] ?
>>>>> warn_slowpath_common+0x7b/0x93
>>>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
>>>>> _raw_spin_unlock_irq+0x2b/0x30
>>>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8102b8d2>] ?
>>>>> warn_slowpath_null+0x15/0x17
>>>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f796>] ?
>>>>> trace_hardirqs_on_caller+0xb9/0x16c
>>>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8104f856>] ?
>>>>> trace_hardirqs_on+0xd/0xf
>>>>> Feb 15 15:00:24 lucid kernel: [<ffffffff8146c097>] ?
>>>>> _raw_spin_unlock_irq+0x2b/0x30
>>>>> Feb 15 15:00:24 lucid kernel: [<ffffffff812e167e>] ?
>>>>> do_ide_request+0x32/0x590
>>>> Seems related to IDE SUBSYSTEM
>>> Which hasn't had any changes in the past release.
>> OK.
>>
>> Cc'ing Tejun Heo
>>
>> For the back trace, I think __blk_run_queue() is the weak link.
>> As the comment on __blk_run_queue() says, it must be called
>> with the queue lock held and interrupts disabled. The lock
>> is taken via spin_lock_irqsave(q->queue_lock, flags); in
>> blk_end_bidi_request().
>>
>> But do_ide_request() then releases the lock via
>> spin_unlock_irq(q->queue_lock); which leaves the interrupt
>> state inconsistent.
>>
>> BTW, do_ide_request() also calls might_sleep(), and that warning
>> is triggered in Daniel's log as well.
> This seems to be the same problem Jan reported and fixed by the
> following patches.
>
> http://article.gmane.org/gmane.linux.kernel/1101766/raw
> http://article.gmane.org/gmane.linux.ide/48819/raw
>
> Can you please test whether these two patches fix the problem?
Thanks, Tejun!
I applied these patches to linux-next and blindly fixed some minor
conflicts. AFAICT, the problem no longer occurs, so it seems the
patches fix it. I am not sure I resolved the conflicts correctly,
as I know nothing about this subsystem. Shall I resend these patches
for inclusion so you can check they are correct?
-- Daniel
Hello,
On Thu, Feb 17, 2011 at 01:09:32PM +0100, Daniel Lezcano wrote:
> I applied these patches to linux-next and blindly fixed some
> minor conflicts. AFAICT, the problem no longer occurs, so it
> seems the patches fix it. I am not sure I resolved the conflicts
> correctly, as I know nothing about this subsystem. Shall I resend
> these patches for inclusion so you can check they are correct?
The posted patches are on top of the mainline and the conflicts
against linux-next should be minor. I'm waiting for Jens to pick up
the patches. Anyways, there's no harm in posting the rebased patches.
Thanks.
--
tejun
On 02/17/2011 01:54 PM, Tejun Heo wrote:
> Hello,
>
> On Thu, Feb 17, 2011 at 01:09:32PM +0100, Daniel Lezcano wrote:
>> I applied these patches to linux-next and blindly fixed some
>> minor conflicts. AFAICT, the problem no longer occurs, so it
>> seems the patches fix it. I am not sure I resolved the conflicts
>> correctly, as I know nothing about this subsystem. Shall I resend
>> these patches for inclusion so you can check they are correct?
> The posted patches are on top of the mainline and the conflicts
> against linux-next should be minor. I'm waiting for Jens to pick up
> the patches.
Great! Thanks, Tejun.
-- Daniel