Date: Mon, 7 Dec 2015 12:00:13 -0800
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Frederic Weisbecker
Cc: tglx@linutronix.de, peterz@infradead.org, preeti@linux.vnet.ibm.com,
	viresh.kumar@linaro.org, mtosatti@redhat.com,
	linux-kernel@vger.kernel.org, sasha.levin@oracle.com
Subject: Re: Possible issue with commit 4961b6e11825?
Message-ID: <20151207200013.GE28602@linux.vnet.ibm.com>
References: <20151204232022.GA15891@linux.vnet.ibm.com>
 <20151207190120.GB20032@lerouge>
In-Reply-To: <20151207190120.GB20032@lerouge>

On Mon, Dec 07, 2015 at 08:01:23PM +0100, Frederic Weisbecker wrote:
> On Fri, Dec 04, 2015 at 03:20:22PM -0800, Paul E. McKenney wrote:
> > Hello!
> >
> > Are there any known issues with commit 4961b6e11825 (sched: core: Use
> > hrtimer_start[_expires]())?
> >
> > The reason that I ask is that I am about 90% sure that an rcutorture
> > failure bisects to that commit.  I will be running more tests on
> > 3497d206c4d9 (perf: core: Use hrtimer_start()), which is the
> > predecessor of 4961b6e11825, and which, unlike 4961b6e11825, passes
> > a 12-hour rcutorture test with scenario TREE03.  In contrast,
> > 4961b6e11825 gets 131 RCU CPU stall warnings, 132 reports of one of
> > RCU's grace-period kthreads being starved, and 525 reports of one of
> > rcutorture's kthreads being starved.  Most of the test runs hang on
> > shutdown, which is no surprise if an RCU CPU stall is happening at
> > about that time.
>
> I have no idea what the issue is but maybe you have the RCU stall
> backtrace somewhere?

First, please note that this commit might well be an innocent bystander.
That said, I have lots and lots of them!  ;-)  They look like this:

[ 4135.979013] Call Trace:
[ 4135.979013]  [] ? debug_smp_processor_id+0x17/0x20
[ 4135.979013]  [] ? default_idle+0xc/0xe0
[ 4135.979013]  [] ? arch_cpu_idle+0xa/0x10
[ 4135.979013]  [] ? default_idle_call+0x27/0x30
[ 4135.979013]  [] ? cpu_startup_entry+0x294/0x310
[ 4135.979013]  [] ? start_secondary+0xef/0x100

Which says that they are in the idle loop, so the RCU grace-period
kthread should notice within six jiffies and post a quiescent state
on their behalf.  But the next line is like this:

[ 4135.979013] rcu_preempt kthread starved for 21024 jiffies! g102259 c102258 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1

Which says that RCU's grace-period kthread has not run for the full
duration of the stall (21 seconds), hence its failure to record other
tasks' quiescent states.  Its state is 0x1, which is TASK_INTERRUPTIBLE.
The last thing it did was wait either for all quiescent states to come
in, or for three jiffies to elapse (this is the "RCU_GP_WAIT_FQS(3)"
above).
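For anyone decoding that line by hand, here is a quick userspace sketch,
illustrative only and not kernel code, of the arithmetic.  It assumes
HZ=1000 (consistent with 21024 jiffies coming to about 21 seconds), and
the TASK_* bit values are the ones from include/linux/sched.h of that
era; the function names are made up for this example:

/*
 * Illustrative userspace decoder for the "kthread starved" line above.
 * Assumes HZ=1000, which matches 21024 jiffies ~= 21 seconds, and uses
 * the TASK_* state values from include/linux/sched.h circa v4.3.
 */
#include <stdio.h>

#define HZ 1000			/* assumed, not read from the kernel */

#define TASK_RUNNING		0x0
#define TASK_INTERRUPTIBLE	0x1
#define TASK_UNINTERRUPTIBLE	0x2

static void decode_starved_line(unsigned long jiffies, unsigned long state)
{
	printf("starved for %lu jiffies = about %lu seconds at HZ=%d\n",
	       jiffies, jiffies / HZ, HZ);

	if (state == TASK_RUNNING)
		printf("->state=0x0 is TASK_RUNNING (runnable)\n");
	else if (state & TASK_INTERRUPTIBLE)
		printf("->state=0x%lx has TASK_INTERRUPTIBLE set "
		       "(asleep, signal-wakeable)\n", state);
	else if (state & TASK_UNINTERRUPTIBLE)
		printf("->state=0x%lx has TASK_UNINTERRUPTIBLE set "
		       "(asleep, not signal-wakeable)\n", state);
	else
		printf("->state=0x%lx: other state bits\n", state);
}

int main(void)
{
	/* The values from the stall report quoted above. */
	decode_starved_line(21024, 0x1);
	return 0;
}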
Given that 21024 is a bit larger than six (up to two rounds of RCU GP
kthread execution are required to notice the quiescent state), something
isn't quite right here.  The RCU GP kthread's stack is as follows:

[ 4135.979013] Call Trace:
[ 4135.979013]  [] schedule+0x3a/0x90
[ 4135.979013]  [] schedule_timeout+0x148/0x290
[ 4135.979013]  [] ? trace_raw_output_itimer_expire+0x70/0x70
[ 4135.979013]  [] rcu_gp_kthread+0x934/0x1010
[ 4135.979013]  [] ? prepare_to_wait_event+0xf0/0xf0
[ 4135.979013]  [] ? rcu_barrier+0x20/0x20
[ 4135.979013]  [] kthread+0xc4/0xe0
[ 4135.979013]  [] ? kthread_create_on_node+0x170/0x170
[ 4135.979013]  [] ret_from_fork+0x3f/0x70
[ 4135.979013]  [] ? kthread_create_on_node+0x170/0x170

So it looks to be waiting with a timeout.  Which was why I got excited
when my bisection appeared to converge on a timer-related commit.
Except that further testing found failures prior to that commit, though
arguably happening at a much lower rate.  I later learned that even the
current kernel runs with no stalls if CPU hotplug is disabled.

So now I am wondering if there is some race that can happen when trying
to awaken a task that last ran on a CPU that is just now in the process
of going offline.  Or that is just in the process of coming online, for
that matter.

Hey, you asked!  ;-)
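To make "waiting with a timeout" concrete, below is a userspace sketch
of the pattern at the top of that stack: sleep until either all
quiescent states are in or a short timeout expires, whichever comes
first.  This is a pthreads analogy, not the kernel's actual machinery
(the kernel side is presumably wait_event_interruptible_timeout(), which
matches the prepare_to_wait_event()/schedule_timeout() frames above),
and every name in it is illustrative, not from the kernel tree:

/*
 * Userspace analogy of the GP kthread's FQS wait: block until either
 * the condition ("all quiescent states reported") becomes true or the
 * timeout elapses, then go do force-quiescent-state work.  Build with
 * "gcc -pthread".  Names are made up for this sketch.
 */
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qs_cond = PTHREAD_COND_INITIALIZER;
static bool all_qs_in;	/* stand-in for "all quiescent states reported" */

/* Wait up to timeout_ms for all_qs_in, like the three-jiffy FQS wait. */
static bool wait_for_qs_or_timeout(int timeout_ms)
{
	struct timespec deadline;
	int ret = 0;

	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_nsec += (long)timeout_ms * 1000000L;
	deadline.tv_sec += deadline.tv_nsec / 1000000000L;
	deadline.tv_nsec %= 1000000000L;

	pthread_mutex_lock(&lock);
	while (!all_qs_in && ret != ETIMEDOUT)
		ret = pthread_cond_timedwait(&qs_cond, &lock, &deadline);
	pthread_mutex_unlock(&lock);
	return all_qs_in;
}

int main(void)
{
	if (!wait_for_qs_or_timeout(3))	/* three "jiffies" at HZ=1000 */
		printf("timed out: would now force quiescent states "
		       "on holdout CPUs\n");
	return 0;
}

							Thanx, Paul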