Date: Fri, 6 Sep 2019 10:16:47 -0700
From: "Paul E. McKenney"
To: Joel Fernandes
Cc: linux-kernel@vger.kernel.org, Andy Lutomirski, Bjorn Helgaas,
	Ingo Molnar, Josh Triplett, Lai Jiangshan, Mathieu Desnoyers,
	Petr Mladek, "Rafael J. Wysocki", rcu@vger.kernel.org,
	Steven Rostedt, Yafang Shao
Subject: Re: [PATCH -rcu dev 1/2] Revert b8c17e6664c4 ("rcu: Maintain special bits at bottom of ->dynticks counter")
Message-ID: <20190906171646.GI4051@linux.ibm.com>
Reply-To: paulmck@kernel.org
References: <20190904101210.GM4125@linux.ibm.com>
 <20190904135420.GB240514@google.com>
 <20190904231308.GB4125@linux.ibm.com>
 <20190905153620.GG26466@google.com>
 <20190905164329.GT4125@linux.ibm.com>
 <20190906000137.GA224720@google.com>
 <20190906150806.GA11355@google.com>
 <20190906152144.GF4051@linux.ibm.com>
 <20190906152753.GA18734@linux.ibm.com>
 <20190906165751.GA40876@google.com>
In-Reply-To: <20190906165751.GA40876@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 06, 2019 at 12:57:51PM -0400, Joel Fernandes wrote:
> On Fri, Sep 06, 2019 at 08:27:53AM -0700, Paul E. McKenney wrote:
> > On Fri, Sep 06, 2019 at 08:21:44AM -0700, Paul E. McKenney wrote:
> > > On Fri, Sep 06, 2019 at 11:08:06AM -0400, Joel Fernandes wrote:
> > > > On Thu, Sep 05, 2019 at 08:01:37PM -0400, Joel Fernandes wrote:
> > > > [snip]
> > > > > > > > @@ -3004,7 +3007,7 @@ static int rcu_pending(void)
> > > > > > > >  		return 0;
> > > > > > > >  
> > > > > > > >  	/* Is the RCU core waiting for a quiescent state from this CPU? */
> > > > > > > > -	if (rdp->core_needs_qs && !rdp->cpu_no_qs.b.norm)
> > > > > > > > +	if (READ_ONCE(rdp->core_needs_qs) && !rdp->cpu_no_qs.b.norm)
> > > > > > > >  		return 1;
> > > > > > > >  
> > > > > > > >  	/* Does this CPU have callbacks ready to invoke? */
> > > > > > > > @@ -3244,7 +3247,6 @@ int rcutree_prepare_cpu(unsigned int cpu)
> > > > > > > >  	rdp->gp_seq = rnp->gp_seq;
> > > > > > > >  	rdp->gp_seq_needed = rnp->gp_seq;
> > > > > > > >  	rdp->cpu_no_qs.b.norm = true;
> > > > > > > > -	rdp->core_needs_qs = false;
> > > > > > >
> > > > > > > How about calling the new hint-clearing function here as well? Just for
> > > > > > > robustness and consistency purposes?
> > > > > >
> > > > > > This and the next function are both called during a CPU-hotplug online
> > > > > > operation, so there is little robustness or consistency to be had by
> > > > > > doing it twice.
> > > > >
> > > > > Ok, sorry I missed that you are clearing it below in the next function.
> > > > > That's fine with me.
> > > > >
> > > > > This patch looks good to me and I am OK with merging of these changes
> > > > > into the original patch with my authorship as you mentioned. Or if you
> > > > > wanted to be author, that's fine too :)
> > > >
> > > > Paul, does it make sense to clear these urgency hints in rcu_qs() as
> > > > well? After all, we are clearing at least one urgency hint there: the
> > > > rcu_read_unlock_special::need_qs bit.
>
> Makes sense.
>
> > > We certainly don't want to turn off the scheduling-clock interrupt until
> > > after the quiescent state has been reported to the RCU core. And it
> > > might still be useful to have a heavy quiescent state because the
> > > grace-period kthread can detect that. Just in case the CPU that just
> > > called rcu_qs() is slow about actually reporting that quiescent state
> > > to the RCU core.
> >
> > Hmmm... Should ->need_qs ever be cleared from FQS to begin with?
> I did not see the FQS loop clearing ->need_qs in the rcu_read_unlock_special
> union after looking for a few minutes. Could you clarify which path this is?
>
> Or do you mean ->core_needs_qs? If so, I feel the FQS loop should clear it,
> as I think your patch does, since the FQS loop is essentially doing
> heavy-weight RCU core processing, right?
>
> Also, where does the FQS loop clear rdp.cpu_no_qs? Shouldn't it clear that
> as well for the dyntick-idle CPUs?

Synchronization?

							Thanx, Paul