From: Emilio López
Date: Mon, 03 Nov 2014 11:36:24 -0300
To: peppe.cavallaro@st.com, maxime.ripard@free-electrons.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: stmmac: potential circular locking dependency
Message-ID: <545792E8.1020208@elopez.com.ar>

Hi everyone,

I was playing with iperf on my Cubietruck today when I hit this lockdep
report/breakage on stmmac. It seems to be fairly reproducible by

1) Being on a GbE network
2) "iperf -s" on the stmmac device
3) "iperf -c -d -P 5" on some other device on the LAN

Here it goes:

======================================================
[ INFO: possible circular locking dependency detected ]
3.18.0-rc3-00003-g7180edf #916 Not tainted
-------------------------------------------------------
iperf/141 is trying to acquire lock:
 (&(&dev->tx_global_lock)->rlock){+.-...}, at: [] stmmac_tx_clean+0x350/0x43c

but task is already holding lock:
 (&(&priv->tx_lock)->rlock){+.-...}, at: [] stmmac_tx_clean+0x30/0x43c

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&(&priv->tx_lock)->rlock){+.-...}:
       [] _raw_spin_lock+0x5c/0x94
       [] stmmac_xmit+0x88/0x620
       [] dev_hard_start_xmit+0x230/0x49c
       [] sch_direct_xmit+0xdc/0x20c
       [] __dev_queue_xmit+0x218/0x604
       [] dev_queue_xmit+0x1c/0x20
       [] neigh_resolve_output+0x180/0x254
       [] ip6_finish_output2+0x188/0x8a4
       [] ip6_output+0xc8/0x398
       [] mld_sendpack+0x2e0/0x6d8
       [] mld_ifc_timer_expire+0x1f0/0x308
       [] call_timer_fn+0xb4/0x1f0
       [] run_timer_softirq+0x224/0x2f0
       [] __do_softirq+0x1d4/0x3e0
       [] irq_exit+0x9c/0xd0
       [] __handle_domain_irq+0x70/0xb4
       [] gic_handle_irq+0x34/0x6c
       [] __irq_svc+0x44/0x5c
       [] lock_acquire+0xec/0x17c
       [] lock_acquire+0xec/0x17c
       [] _raw_spin_lock+0x5c/0x94
       [] do_read_fault.isra.93+0xa8/0x2a0
       [] handle_mm_fault+0x44c/0x8dc
       [] do_page_fault+0x160/0x2d8
       [] do_PrefetchAbort+0x44/0xa8
       [] ret_from_exception+0x0/0x20
       [] 0xb6eb0120

-> #1 (_xmit_ETHER#2){+.-...}:
       [] _raw_spin_lock+0x5c/0x94
       [] dev_deactivate_many+0xd0/0x250
       [] dev_deactivate+0x3c/0x4c
       [] linkwatch_do_dev+0x50/0x84
       [] __linkwatch_run_queue+0xdc/0x148
       [] linkwatch_event+0x3c/0x44
       [] process_one_work+0x1ec/0x510
       [] worker_thread+0x5c/0x4d8
       [] kthread+0xe8/0xfc
       [] ret_from_fork+0x14/0x20

-> #0 (&(&dev->tx_global_lock)->rlock){+.-...}:
       [] lock_acquire+0xdc/0x17c
       [] _raw_spin_lock+0x5c/0x94
       [] stmmac_tx_clean+0x350/0x43c
       [] stmmac_poll+0x3c/0x618
       [] net_rx_action+0x178/0x28c
       [] __do_softirq+0x1d4/0x3e0
       [] irq_exit+0x9c/0xd0
       [] __handle_domain_irq+0x70/0xb4
       [] gic_handle_irq+0x34/0x6c
       [] __irq_svc+0x44/0x5c
       [] __local_bh_enable_ip+0x9c/0xfc
       [] __local_bh_enable_ip+0x9c/0xfc
       [] _raw_read_unlock_bh+0x40/0x44
       [] inet6_dump_addr+0x33c/0x530
       [] inet6_dump_ifaddr+0x1c/0x20
       [] rtnl_dump_all+0x50/0xf4
       [] netlink_dump+0xc0/0x250
       [] netlink_recvmsg+0x234/0x300
       [] sock_recvmsg+0xa4/0xc8
       [] ___sys_recvmsg.part.33+0xe4/0x1c0
       [] __sys_recvmsg+0x60/0x90
       [] SyS_recvmsg+0x18/0x1c
       [] ret_fast_syscall+0x0/0x48

other info that might help us debug this:

Chain exists of:
  &(&dev->tx_global_lock)->rlock --> _xmit_ETHER#2 --> &(&priv->tx_lock)->rlock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&(&priv->tx_lock)->rlock);
                               lock(_xmit_ETHER#2);
                               lock(&(&priv->tx_lock)->rlock);
  lock(&(&dev->tx_global_lock)->rlock);

 *** DEADLOCK ***

3 locks held by iperf/141:
 #0:  (rtnl_mutex){+.+.+.}, at: [] netlink_dump+0x28/0x250
 #1:  (rcu_read_lock){......}, at: [] inet6_dump_addr+0x0/0x530
 #2:  (&(&priv->tx_lock)->rlock){+.-...}, at: [] stmmac_tx_clean+0x30/0x43c

stack backtrace:
CPU: 0 PID: 141 Comm: iperf Not tainted 3.18.0-rc3-00003-g7180edf #916
[] (unwind_backtrace) from [] (show_stack+0x20/0x24)
[] (show_stack) from [] (dump_stack+0x9c/0xbc)
[] (dump_stack) from [] (print_circular_bug+0x21c/0x33c)
[] (print_circular_bug) from [] (__lock_acquire+0x2060/0x2148)
[] (__lock_acquire) from [] (lock_acquire+0xdc/0x17c)
[] (lock_acquire) from [] (_raw_spin_lock+0x5c/0x94)
[] (_raw_spin_lock) from [] (stmmac_tx_clean+0x350/0x43c)
[] (stmmac_tx_clean) from [] (stmmac_poll+0x3c/0x618)
[] (stmmac_poll) from [] (net_rx_action+0x178/0x28c)
[] (net_rx_action) from [] (__do_softirq+0x1d4/0x3e0)
[] (__do_softirq) from [] (irq_exit+0x9c/0xd0)
[] (irq_exit) from [] (__handle_domain_irq+0x70/0xb4)
[] (__handle_domain_irq) from [] (gic_handle_irq+0x34/0x6c)
[] (gic_handle_irq) from [] (__irq_svc+0x44/0x5c)
Exception stack(0xcabc1be0 to 0xcabc1c28)
1be0: 00000001 2df53000 00000000 caf15e80 cabc0000 00000201 ca9a9840 c03ad050
1c00: ca8d9404 00000000 ca9b4f50 cabc1c44 c08732d0 cabc1c28 c005d5d0 c002435c
1c20: 20000013 ffffffff
[] (__irq_svc) from [] (__local_bh_enable_ip+0x9c/0xfc)
[] (__local_bh_enable_ip) from [] (_raw_read_unlock_bh+0x40/0x44)
[] (_raw_read_unlock_bh) from [] (inet6_dump_addr+0x33c/0x530)
[] (inet6_dump_addr) from [] (inet6_dump_ifaddr+0x1c/0x20)
[] (inet6_dump_ifaddr) from [] (rtnl_dump_all+0x50/0xf4)
[] (rtnl_dump_all) from [] (netlink_dump+0xc0/0x250)
[] (netlink_dump) from [] (netlink_recvmsg+0x234/0x300)
[] (netlink_recvmsg) from [] (sock_recvmsg+0xa4/0xc8)
[] (sock_recvmsg) from [] (___sys_recvmsg.part.33+0xe4/0x1c0)
[] (___sys_recvmsg.part.33) from [] (__sys_recvmsg+0x60/0x90)
[] (__sys_recvmsg) from [] (SyS_recvmsg+0x18/0x1c)
[] (SyS_recvmsg) from [] (ret_fast_syscall+0x0/0x48)
---------------------------------

Cheers,
Emilio
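P.S.: in case it helps, my reading of the report is that stmmac_tx_clean()
holds priv->tx_lock and then goes for dev->tx_global_lock (I assume via the
netif_tx_lock() call in there), while lockdep has already recorded the
opposite effective ordering through the qdisc/xmit path, i.e. the
"tx_global_lock --> _xmit_ETHER#2 --> priv->tx_lock" chain printed above.
Below is a small userspace model of that inversion using pthread mutexes.
The names only mirror the kernel locks; this is not driver code, just an
illustration of the two orderings the report complains about.

/* userspace model of the inverted lock ordering; build with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tx_global_lock = PTHREAD_MUTEX_INITIALIZER; /* dev->tx_global_lock */
static pthread_mutex_t xmit_ether     = PTHREAD_MUTEX_INITIALIZER; /* per-queue _xmit_ETHER lock */
static pthread_mutex_t priv_tx_lock   = PTHREAD_MUTEX_INITIALIZER; /* priv->tx_lock */

/* transmit side: tx_global_lock --> _xmit_ETHER --> priv->tx_lock,
 * the ordering lockdep recorded through the qdisc/stmmac_xmit() path */
static void *xmit_side(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&tx_global_lock);
        pthread_mutex_lock(&xmit_ether);
        pthread_mutex_lock(&priv_tx_lock);   /* stmmac_xmit() takes priv->tx_lock last */
        puts("xmit side:  tx_global_lock -> _xmit_ETHER -> priv->tx_lock");
        pthread_mutex_unlock(&priv_tx_lock);
        pthread_mutex_unlock(&xmit_ether);
        pthread_mutex_unlock(&tx_global_lock);
        return NULL;
}

/* cleanup side: priv->tx_lock --> tx_global_lock,
 * like stmmac_tx_clean() taking the global lock while holding its own */
static void *clean_side(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&priv_tx_lock);
        pthread_mutex_lock(&tx_global_lock);
        puts("clean side: priv->tx_lock -> tx_global_lock");
        pthread_mutex_unlock(&tx_global_lock);
        pthread_mutex_unlock(&priv_tx_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        /* run both orderings concurrently; with unlucky timing each thread
         * ends up waiting for a lock the other one already holds */
        pthread_create(&a, NULL, xmit_side, NULL);
        pthread_create(&b, NULL, clean_side, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

With the wrong interleaving the two threads block on each other's second
lock, which as far as I can tell is the same cycle lockdep is flagging here.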