Date: Wed, 21 Jun 2017 09:18:53 -0700
From: "Paul E. McKenney"
To: Jeffrey Hugo
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	pprakash@codeaurora.org, Josh Triplett, Steven Rostedt,
	Mathieu Desnoyers, Lai Jiangshan, Jens Axboe,
	Sebastian Andrzej Siewior, Thomas Gleixner, Richard Cochran,
	Boris Ostrovsky, Richard Weinberger
Subject: Re: [BUG] Deadlock due to interactions of block, RCU, and cpu offline
Reply-To: paulmck@linux.vnet.ibm.com
References: <20170326232843.GA3637@linux.vnet.ibm.com>
	<20170327181711.GF3637@linux.vnet.ibm.com>
	<20170620234623.GA16200@linux.vnet.ibm.com>
Message-Id: <20170621161853.GB3721@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Wed, Jun 21, 2017 at 08:39:45AM -0600, Jeffrey Hugo wrote:
> On 6/20/2017 5:46 PM, Paul E. McKenney wrote:
> >On Mon, Mar 27, 2017 at 11:17:11AM -0700, Paul E. McKenney wrote:
> >>On Mon, Mar 27, 2017 at 12:02:27PM -0600, Jeffrey Hugo wrote:
> >>>Hi Paul.
> >>>
> >>>Thanks for the quick reply.
> >>>
> >>>On 3/26/2017 5:28 PM, Paul E. McKenney wrote:
> >>>>On Sun, Mar 26, 2017 at 05:10:40PM -0600, Jeffrey Hugo wrote:
> >>>
> >>>>>It is a race between this work running, and the cpu offline processing.
> >>>>
> >>>>One quick way to test this assumption is to build a kernel with the
> >>>>Kconfig options CONFIG_RCU_NOCB_CPU=y and CONFIG_RCU_NOCB_CPU_ALL=y.
> >>>>This will cause call_rcu_sched() to queue the work to a kthread, which
> >>>>can migrate to some other CPU.  If your analysis is correct, this
> >>>>should avoid the deadlock.  (Note that the deadlock should be fixed in
> >>>>any case; this is just a diagnostic assumption-check procedure.)
> >>>
> >>>I enabled CONFIG_RCU_EXPERT=y, CONFIG_RCU_NOCB_CPU=y, and
> >>>CONFIG_RCU_NOCB_CPU_ALL=y in my build.  I've only had time so far to
> >>>do one test run; however, the issue reproduced, though it took a fair
> >>>bit longer to do so.  An initial look at the data indicates that the
> >>>work is still not running.  An odd observation: the two threads are
> >>>no longer blocked on the same queue, but on different ones.
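For concreteness, here is a minimal, untested sketch of the pattern under
discussion; the names below are illustrative only and are not taken from the
block layer.  A waiter queues a callback with call_rcu_sched() and blocks
until it runs.  If that callback stays stranded on the outgoing CPU's list
during offline, the waiter never wakes; with CONFIG_RCU_NOCB_CPU (and
CONFIG_RCU_NOCB_CPU_ALL), the callback is instead invoked by an "rcuo"
kthread, which is free to migrate off the dying CPU.

	#include <linux/completion.h>
	#include <linux/kernel.h>
	#include <linux/rcupdate.h>

	/* Illustrative names only -- not the block layer's actual structures. */
	struct release_work {
		struct rcu_head rcu;
		struct completion done;
	};

	static void release_cb(struct rcu_head *rcu)
	{
		struct release_work *rw =
			container_of(rcu, struct release_work, rcu);

		/* Runs on whichever CPU (or rcuo kthread) invokes callbacks. */
		complete(&rw->done);
	}

	static void wait_for_release(struct release_work *rw)
	{
		init_completion(&rw->done);
		/* Callback is queued on the current CPU's callback list. */
		call_rcu_sched(&rw->rcu, release_cb);
		/* Hangs forever if the callback is never invoked. */
		wait_for_completion(&rw->done);
	}
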
> >>
> >>I was afraid of that...
> >>
> >>>Let me look at this more and see what is going on now.
> >>
> >>Another thing to try would be to pin the "rcuo" kthreads to
> >>some CPU that is never taken offline, just in case that kthread is
> >>sometimes somehow getting stuck during the CPU-hotplug operation.
> >>
> >>>>>What is the opinion of the domain experts?
> >>>>
> >>>>I do hope that we can come up with a better fix.  No offense intended,
> >>>>as coming up with -any- fix in the CPU-hotplug domain is not to be
> >>>>denigrated, but this looks to be at best quite fragile.
> >>>>
> >>>>						Thanx, Paul
> >>>>
> >>>
> >>>None taken.  I'm not particularly attached to the current fix.  I
> >>>agree, it does appear to be quite fragile.
> >>>
> >>>I'm still not sure what a better solution would be, though.  Maybe
> >>>the RCU framework could flush the work somehow during cpu offline?  It
> >>>would need to ensure further work is not queued after that point,
> >>>which seems like it might be tricky to synchronize.  I don't know
> >>>enough about the workings of RCU to even attempt to implement that.
> >>
> >>There are some ways that RCU might be able to shrink the window during
> >>which the outgoing CPU's callbacks are in limbo, but they are not free
> >>of risk, so we really need to completely understand what is going on
> >>before making any possibly ill-conceived changes.  ;-)
> >>
> >>>In any case, it seems like some more analysis is needed based on the
> >>>latest data.
> >>
> >>Looking forward to hearing what you find!
> >
> >Hearing nothing, I eventually took unilateral action (I am a citizen of
> >the USA, after all!) and produced the lightly tested patch shown below.
> >
> >Does it help?
> >
> >						Thanx, Paul
> 
> Wow, has it been 3 months already?  I am extremely sorry; I've been
> preempted multiple times, and this has sat on my todo list, where I
> keep thinking I need to find time to come back to it but apparently
> have not done enough to make that happen.
> 
> Thank you for not forgetting about this.  I promise I will somehow
> clear my schedule to test this next week.

No worries, and I am very much looking forward to seeing the results of
your testing.

						Thanx, Paul
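
As a footnote on the "pin the rcuo kthreads" experiment suggested above,
here is a rough userspace sketch.  It assumes CPU 0 is the CPU that is never
taken offline, and that the rcuo kthread PIDs are looked up by hand (for
example, from ps); nothing in it is specific to RCU.

	/* Bind one PID (e.g. an rcuo kthread) to CPU 0. */
	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>

	int main(int argc, char **argv)
	{
		cpu_set_t mask;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <pid>\n", argv[0]);
			return 1;
		}

		CPU_ZERO(&mask);
		CPU_SET(0, &mask);	/* CPU 0: assumed to stay online */

		if (sched_setaffinity(atoi(argv[1]), sizeof(mask), &mask)) {
			perror("sched_setaffinity");
			return 1;
		}
		return 0;
	}

The taskset utility does the same job from the shell, for example
"taskset -cp 0 <pid>".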