Date: Fri, 28 Jun 2019 15:25:47 -0700
From: "Paul E. McKenney"
To: Joel Fernandes
Cc: Sebastian Andrzej Siewior, Peter Zijlstra, Steven Rostedt, rcu,
	LKML, Thomas Gleixner, Ingo Molnar, Josh Triplett,
	Mathieu Desnoyers, Lai Jiangshan
Subject: Re: [RFC] Deadlock via recursive wakeup via RCU with threadirqs
Reply-To: paulmck@linux.ibm.com
References: <20190627173831.GW26519@linux.ibm.com>
 <20190627181638.GA209455@google.com>
 <20190627184107.GA26519@linux.ibm.com>
 <20190628135433.GE3402@hirez.programming.kicks-ass.net>
 <20190628153050.GU26519@linux.ibm.com>
 <20190628184026.fds6scgi2pnjnc5p@linutronix.de>
 <20190628185219.GA26519@linux.ibm.com>
 <20190628192407.GA89956@google.com>
 <20190628200423.GB26519@linux.ibm.com>
 <20190628214018.GB249127@google.com>
In-Reply-To: <20190628214018.GB249127@google.com>
Message-Id: <20190628222547.GE26519@linux.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jun 28, 2019 at 05:40:18PM -0400, Joel Fernandes wrote:
> Hi
Paul,
>
> On Fri, Jun 28, 2019 at 01:04:23PM -0700, Paul E. McKenney wrote:
> [snip]
> > > > > Commit
> > > > > - 23634ebc1d946 ("rcu: Check for wakeup-safe conditions in
> > > > >   rcu_read_unlock_special()") does not trigger the bug within 94
> > > > >   attempts.
> > > > >
> > > > > - 48d07c04b4cc1 ("rcu: Enable elimination of Tree-RCU softirq
> > > > >   processing") needed 12 attempts to trigger the bug.
> > > >
> > > > That matches my belief that 23634ebc1d946 ("rcu: Check for wakeup-safe
> > > > conditions in rcu_read_unlock_special()") will at least greatly decrease
> > > > the probability of this bug occurring.
> > >
> > > I was just typing a reply that I can't reproduce it with:
> > > rcu: Check for wakeup-safe conditions in rcu_read_unlock_special()
> > >
> > > I am trying to revert enough of this patch to see what would break things,
> > > however I think a better exercise might be to understand more what the
> > > patch does and why it fixes things in the first place ;-) It is probably
> > > the deferred_qs thing.
> >
> > The deferred_qs flag is part of it!  Looking forward to hearing what
> > you come up with as being the critical piece of this commit.
>
> The new deferred_qs flag indeed saves the machine from the deadlock.
>
> If we don't want the deferred_qs, then the below patch also fixes the issue.
> However, I am more sure than not that it does not handle all cases (such as
> what if we previously had an expedited grace period IPI in a previous reader
> section and had to defer processing; then it seems a similar deadlock would
> present). But anyway, the below patch does fix it for me! It is based on
> your -rcu tree commit 23634ebc1d946f19eb112d4455c1d84948875e31 ("rcu: Check
> for wakeup-safe conditions in rcu_read_unlock_special()").

The point here being that you rely on .b.blocked rather than .b.deferred_qs.
Hmmm...  There are a number of places that check all the bits via the .s
leg of the rcu_special union.
The .s check in rcu_preempt_need_deferred_qs() should be OK because it is
conditioned on t->rcu_read_lock_nesting of zero or negative.  Do the rest
of those also work out OK?  It would be nice to remove the flag, but doing
so clearly needs careful review and testing.

							Thanx, Paul

> ---8<-----------------------
>
> From: "Joel Fernandes (Google)"
> Subject: [PATCH] Fix RCU recursive deadlock
>
> Signed-off-by: Joel Fernandes (Google)
> ---
>  include/linux/sched.h    |  2 +-
>  kernel/rcu/tree_plugin.h | 17 +++++++++++++----
>  2 files changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 942a44c1b8eb..347e6dfcc91b 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -565,7 +565,7 @@ union rcu_special {
>  		u8 blocked;
>  		u8 need_qs;
>  		u8 exp_hint; /* Hint for performance. */
> -		u8 deferred_qs;
> +		u8 pad;
>  	} b; /* Bits. */
>  	u32 s; /* Set of bits. */
>  };
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 75110ea75d01..5b9b12c1ba5c 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -455,7 +455,6 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
>  		local_irq_restore(flags);
>  		return;
>  	}
> -	t->rcu_read_unlock_special.b.deferred_qs = false;
>  	if (special.b.need_qs) {
>  		rcu_qs();
>  		t->rcu_read_unlock_special.b.need_qs = false;
> @@ -608,13 +607,24 @@ static void rcu_read_unlock_special(struct task_struct *t)
>  	if (preempt_bh_were_disabled || irqs_were_disabled) {
>  		t->rcu_read_unlock_special.b.exp_hint = false;
>  		// Need to defer quiescent state until everything is enabled.
> +
> +		/* If unlock_special was called in the current reader section
> +		 * just because we were blocked in a previous reader section,
> +		 * then raising softirqs can deadlock.
> +		 * This is because the
> +		 * scheduler executes RCU sections with preemption disabled,
> +		 * however it may have previously blocked in a previous
> +		 * non-scheduler reader section and .blocked got set.  It is
> +		 * never safe to call unlock_special from the scheduler path
> +		 * due to recursive wake ups (unless we are in_irq()), so
> +		 * prevent this by checking if we were previously blocked.
> +		 */
>  		if (irqs_were_disabled && use_softirq &&
> -		    (in_irq() || !t->rcu_read_unlock_special.b.deferred_qs)) {
> +		    (!t->rcu_read_unlock_special.b.blocked || in_irq())) {
>  			// Using softirq, safe to awaken, and we get
>  			// no help from enabling irqs, unlike bh/preempt.
>  			raise_softirq_irqoff(RCU_SOFTIRQ);
>  		} else if (irqs_were_disabled && !use_softirq &&
> -			   !t->rcu_read_unlock_special.b.deferred_qs) {
> +			   !t->rcu_read_unlock_special.b.blocked) {
>  			// Safe to awaken and we get no help from enabling
>  			// irqs, unlike bh/preempt.
>  			invoke_rcu_core();
> @@ -623,7 +633,6 @@ static void rcu_read_unlock_special(struct task_struct *t)
>  			set_tsk_need_resched(current);
>  			set_preempt_need_resched();
>  		}
> -		t->rcu_read_unlock_special.b.deferred_qs = true;
>  		local_irq_restore(flags);
>  		return;
>  	}
> --
> 2.22.0.410.gd8fdbe21b5-goog
>