Date: Thu, 2 Oct 2014 12:36:55 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
To: Dave Jones, Linux Kernel
Cc: tj@kernel.org
Subject: Re: RCU stalls -> lockup.
Message-ID: <20141002193655.GS5015@linux.vnet.ibm.com>
References: <20141002175515.GA28665@redhat.com>
In-Reply-To: <20141002175515.GA28665@redhat.com>

On Thu, Oct 02, 2014 at 01:55:15PM -0400, Dave Jones wrote:
> I just hit this on my box running 3.17rc7
> It was followed by a userspace lockup. (Could still ping, and sysrq
> from the console, but even getty wasn't responding on the console).
>
> I was trying to reproduce another bug faster, and had ramped up the
> number of processes trinity uses to 512. This didn't take long
> to fall out..

This might be related to an exchange I had with Tejun (CCed), where
the work queues were running all out, preventing any quiescent states
from happening.  One fix under consideration is to add a quiescent
state, similar to the one in softirq handling.  (A rough sketch of the
idea appears at the end of this message.)

							Thanx, Paul

> INFO: rcu_preempt detected stalls on CPUs/tasks:
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> (detected by 3, t=6502 jiffies, g=1014253, c=1014252, q=0)
> INFO: Stall ended before state dump start
> INFO: rcu_preempt detected stalls on CPUs/tasks:
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> (detected by 0, t=26007 jiffies, g=1014253, c=1014252, q=0)
> INFO: Stall ended before state dump start
> INFO: rcu_preempt detected stalls on CPUs/tasks:
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> (detected by 2, t=45512 jiffies, g=1014253, c=1014252, q=0)
> INFO: Stall ended before state dump start
> INFO: rcu_preempt detected stalls on CPUs/tasks:
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> Tasks blocked on level-0 rcu_node (CPUs 0-3):
> (detected by 1, t=65017 jiffies, g=1014253, c=1014252, q=0)
> INFO: Stall ended before state dump start
> INFO: rcu_preempt detected stalls on CPUs/tasks:
> Tasks blocked on level-0 rcu_node (CPUs 0-3): P15547 P15232 P15616 P15634
> Tasks blocked on level-0 rcu_node (CPUs 0-3): P15547 P15232 P15616 P15634
> (detected by 1, t=6502 jiffies, g=1014254, c=1014253, q=0)
> trinity-c318 R running task 13480 15547 14371 0x00000000
> ffff880031df7df0 0000000000000002 ffffffff870cb70e ffff88008ec30000
> 00000000001d4080 0000000000000000 ffff880031df7fd8 00000000001d4080
> ffff8802166c2de0 ffff88008ec30000 ffff880031df7fd8 ffffffff872361f4
> Call Trace:
> [] ? put_lock_stats.isra.28+0xe/0x30
> [] ? bdi_queue_work+0xe4/0x1a0
> [] preempt_schedule+0x36/0x60
> [] ___preempt_schedule+0x56/0xb0
> [] ? bdi_queue_work+0xe4/0x1a0
> [] ? __local_bh_enable_ip+0xb7/0xe0
> [] _raw_spin_unlock_bh+0x35/0x40
> [] bdi_queue_work+0xe4/0x1a0
> [] __bdi_start_writeback+0x68/0x190
> [] wakeup_flusher_threads+0x100/0x1e0
> [] ? wakeup_flusher_threads+0x30/0x1e0
> [] sys_sync+0x36/0xb0
> [] tracesys+0xdd/0xe2
> trinity-c9 R running task 13496 15232 14371 0x00000000
> ffff88011a01fdf0 0000000000000002 ffffffff870cb70e ffff8800a19616f0
> 00000000001d4080 0000000000000000 ffff88011a01ffd8 00000000001d4080
> ffff8802166c2de0 ffff8800a19616f0 ffff88011a01ffd8 ffffffff872361f4
> Call Trace:
> [] ? put_lock_stats.isra.28+0xe/0x30
> [] ? bdi_queue_work+0xe4/0x1a0
> [] preempt_schedule+0x36/0x60
> [] ___preempt_schedule+0x56/0xb0
> [] ? bdi_queue_work+0xe4/0x1a0
> [] ? __local_bh_enable_ip+0xb7/0xe0
> [] _raw_spin_unlock_bh+0x35/0x40
> [] bdi_queue_work+0xe4/0x1a0
> [] __bdi_start_writeback+0x68/0x190
> [] wakeup_flusher_threads+0x100/0x1e0
> [] ? wakeup_flusher_threads+0x30/0x1e0
> [] sys_sync+0x36/0xb0
> [] tracesys+0xdd/0xe2
> trinity-c387 R running task 13272 15616 14371 0x00000004
> ffff880043fd37f8 0000000000000002 ffff880001fd3868 ffff88023304ade0
> 00000000001d4080 0000000000000000 ffff880043fd3fd8 00000000001d4080
> ffff8802166c2de0 ffff88023304ade0 ffff880043fd3fd8 0000000000000000
> Call Trace:
> [] preempt_schedule_irq+0x52/0xb0
> [] retint_kernel+0x20/0x30
> [] ? find_get_entry+0xb4/0x270
> [] ? find_get_entry+0x1fe/0x270
> [] ? find_get_entry+0x5/0x270
> [] find_lock_entry+0x1f/0x90
> [] shmem_getpage_gfp+0xd5/0xa10
> [] shmem_fault+0x6d/0x1c0
> [] __do_fault+0x48/0xd0
> [] do_shared_fault.isra.75+0x40/0x1c0
> [] ? __perf_sw_event+0x4b/0x380
> [] handle_mm_fault+0x261/0xcd0
> [] ? __lock_is_held+0x57/0x80
> [] __do_page_fault+0x1a4/0x600
> [] ? lock_release_holdtime.part.29+0xe6/0x160
> [] ? context_tracking_user_exit+0x67/0x1b0
> [] do_page_fault+0x1e/0x70
> [] page_fault+0x22/0x30
> [] ? copy_page_to_iter+0x3b3/0x500
> [] ? copy_page_to_iter+0x1ce/0x500
> [] ? vmsplice_to_user+0x130/0x130
> [] pipe_to_user+0x22/0x40
> [] __splice_from_pipe+0x11e/0x190
> [] vmsplice_to_user+0xd4/0x130
> [] ? trace_hardirqs_off_caller+0x21/0xc0
> [] ? retint_restore_args+0xe/0xe
> [] ? get_parent_ip+0xd/0x50
> [] ? preempt_count_sub+0x6b/0xf0
> trinity-c318 R running task 13480 15547 14371 0x00000000
> ffff880031df7df0 0000000000000002 ffffffff870cb70e ffff88008ec30000
> 00000000001d4080 0000000000000000 ffff880031df7fd8 00000000001d4080
> ffff8801039344d0 ffff88008ec30000 ffff880031df7fd8 ffffffff872361f4
> Call Trace:
> [] ? put_lock_stats.isra.28+0xe/0x30
> [] ? bdi_queue_work+0xe4/0x1a0
> [] preempt_schedule+0x36/0x60
> [] ___preempt_schedule+0x56/0xb0
> [] ? bdi_queue_work+0xe4/0x1a0
> [] ? __local_bh_enable_ip+0xb7/0xe0
> [] _raw_spin_unlock_bh+0x35/0x40
> [] bdi_queue_work+0xe4/0x1a0
> [] __bdi_start_writeback+0x68/0x190
> [] wakeup_flusher_threads+0x100/0x1e0
> [] ? wakeup_flusher_threads+0x30/0x1e0
> [] sys_sync+0x36/0xb0
> [] tracesys+0xdd/0xe2
> trinity-c9 R running task 13496 15232 14371 0x00000000
> ffff88011a01fe58 0000000000000002 ffffffff87238ecf ffff8800a19616f0
> 00000000001d4080 0000000000000000 ffff88011a01ffd8 00000000001d4080
> ffff880085d35bc0 ffff8800a19616f0 ffff88011a01ffd8 0000000000000000
> Call Trace:
> [] ? wakeup_flusher_threads+0x11f/0x1e0
> [] ? wakeup_flusher_threads+0x11f/0x1e0
> [] preempt_schedule_irq+0x52/0xb0
> [] retint_kernel+0x20/0x30
> [] ? wakeup_flusher_threads+0x11f/0x1e0
> [] ? lock_release+0x29/0x300
> [] wakeup_flusher_threads+0x137/0x1e0
> [] ? wakeup_flusher_threads+0x30/0x1e0
> [] sys_sync+0x36/0xb0
> [] tracesys+0xdd/0xe2
> trinity-c387 R running task 13272 15616 14371 0x00000004
> ffff880043fd37f8 0000000000000002 ffff880001fd3868 ffff88023304ade0
> 00000000001d4080 0000000000000000 ffff880043fd3fd8 00000000001d4080
> ffff8801039344d0 ffff88023304ade0 ffff880043fd3fd8 0000000000000000
> Call Trace:
> [] preempt_schedule_irq+0x52/0xb0
> [] retint_kernel+0x20/0x30
> [] ? find_get_entry+0xb4/0x270
> [] ? find_get_entry+0x1fe/0x270
> [] ? find_get_entry+0x5/0x270
> [] find_lock_entry+0x1f/0x90
> [] shmem_getpage_gfp+0xd5/0xa10
> [] shmem_fault+0x6d/0x1c0
> [] __do_fault+0x48/0xd0
> [] do_shared_fault.isra.75+0x40/0x1c0
> [] ? __perf_sw_event+0x4b/0x380
> [] handle_mm_fault+0x261/0xcd0
> [] ? __lock_is_held+0x57/0x80
> [] __do_page_fault+0x1a4/0x600
> [] ? lock_release_holdtime.part.29+0xe6/0x160
> [] ? context_tracking_user_exit+0x67/0x1b0
> [] do_page_fault+0x1e/0x70
> [] page_fault+0x22/0x30
> [] ? copy_page_to_iter+0x3b3/0x500
> [] ? copy_page_to_iter+0x1ce/0x500
> [] ? vmsplice_to_user+0x130/0x130
> [] pipe_to_user+0x22/0x40
> [] __splice_from_pipe+0x11e/0x190
> [] vmsplice_to_user+0xd4/0x130
> [] ? trace_hardirqs_off_caller+0x21/0xc0
> [] ? retint_restore_args+0xe/0xe
> [] ? get_parent_ip+0xd/0x50
> [] ? preempt_count_sub+0x6b/0xf0
> [] SyS_vmsplice+0xc1/0xe0
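
For illustration only, here is a minimal sketch of the idea mentioned
above, not the actual patch under discussion.  It assumes a worker-style
kthread loop, a hypothetical do_one_work_item() standing in for the real
work-item dispatch, and a primitive along the lines of
cond_resched_rcu_qs() to report the quiescent state between items, much
as softirq handling notes one between handlers:

	#include <linux/kthread.h>
	#include <linux/rcupdate.h>

	/* Illustrative sketch only; not the patch under discussion. */
	static void do_one_work_item(void);	/* hypothetical work dispatch */

	static int busy_worker_fn(void *unused)
	{
		while (!kthread_should_stop()) {
			do_one_work_item();

			/*
			 * Report a quiescent state between work items so
			 * that a CPU saturated with back-to-back work
			 * cannot stall RCU grace periods indefinitely.
			 * Assumes a helper along the lines of
			 * cond_resched_rcu_qs().
			 */
			cond_resched_rcu_qs();
		}
		return 0;
	}

The point is that the quiescent-state report is cheap enough to sit in
the hot loop, so grace periods can complete even when there is never
anything else for the CPU to run.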