Date: Tue, 3 Sep 2019 13:02:49 -0700
From: "Paul E. McKenney" <paulmck@kernel.org>
To: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Cc: linux-kernel@vger.kernel.org, Andy Lutomirski, Bjorn Helgaas,
    Ingo Molnar, Josh Triplett, Lai Jiangshan, Mathieu Desnoyers,
    Petr Mladek, "Rafael J. Wysocki", rcu@vger.kernel.org,
    Steven Rostedt, Yafang Shao
Subject: Re: [PATCH -rcu dev 1/2] Revert b8c17e6664c4 ("rcu: Maintain special bits at bottom of ->dynticks counter")
Message-ID: <20190903200249.GD4125@linux.ibm.com>
Reply-To: paulmck@kernel.org
References: <20190830162348.192303-1-joel@joelfernandes.org>
In-Reply-To: <20190830162348.192303-1-joel@joelfernandes.org>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Fri, Aug 30, 2019 at 12:23:47PM -0400, Joel Fernandes (Google) wrote:
> This code is unused and can be removed now. Revert was straightforward.
> 
> Tested with light rcutorture.
> 
> Link: http://lore.kernel.org/r/CALCETrWNPOOdTrFabTDd=H7+wc6xJ9rJceg6OL1S0rTV5pfSsA@mail.gmail.com
> Suggested-by: Andy Lutomirski
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>

Some questions below.

> ---
>  include/linux/rcutiny.h |  3 --
>  kernel/rcu/tree.c       | 82 ++++++++++-------------------------------
>  2 files changed, 19 insertions(+), 66 deletions(-)
> 
> diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
> index b7607e2667ae..b3f689711289 100644
> --- a/include/linux/rcutiny.h
> +++ b/include/linux/rcutiny.h
> @@ -14,9 +14,6 @@
>  
>  #include <asm/param.h> /* for HZ */
>  
> -/* Never flag non-existent other CPUs! */
> -static inline bool rcu_eqs_special_set(int cpu) { return false; }
> -
>  static inline unsigned long get_state_synchronize_rcu(void)
>  {
>  	return 0;
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 68ebf0eb64c8..417dd00b9e87 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -69,20 +69,10 @@
>  
>  /* Data structures. */
>  
> -/*
> - * Steal a bit from the bottom of ->dynticks for idle entry/exit
> - * control.  Initially this is for TLB flushing.
> - */
> -#define RCU_DYNTICK_CTRL_MASK 0x1
> -#define RCU_DYNTICK_CTRL_CTR  (RCU_DYNTICK_CTRL_MASK + 1)
> -#ifndef rcu_eqs_special_exit
> -#define rcu_eqs_special_exit() do { } while (0)
> -#endif
> -
>  static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, rcu_data) = {
>  	.dynticks_nesting = 1,
>  	.dynticks_nmi_nesting = DYNTICK_IRQ_NONIDLE,
> -	.dynticks = ATOMIC_INIT(RCU_DYNTICK_CTRL_CTR),
> +	.dynticks = ATOMIC_INIT(1),
>  };
>  struct rcu_state rcu_state = {
>  	.level = { &rcu_state.node[0] },
> @@ -229,20 +219,15 @@ void rcu_softirq_qs(void)
>  static void rcu_dynticks_eqs_enter(void)
>  {
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> -	int seq;
> +	int special;

Given that we really are now loading a pure sequence number, why change
the name to "special"?  This revert is after all removing the ability
of ->dynticks to be special.

>  
>  	/*
> -	 * CPUs seeing atomic_add_return() must see prior RCU read-side
> +	 * CPUs seeing atomic_inc_return() must see prior RCU read-side
>  	 * critical sections, and we also must force ordering with the
>  	 * next idle sojourn.
>  	 */
> -	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
> -	/* Better be in an extended quiescent state! */
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
> -		     (seq & RCU_DYNTICK_CTRL_CTR));
> -	/* Better not have special action (TLB flush) pending! */
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
> -		     (seq & RCU_DYNTICK_CTRL_MASK));
> +	special = atomic_inc_return(&rdp->dynticks);
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && special & 0x1);
>  }
>  
>  /*
> @@ -252,22 +237,15 @@ static void rcu_dynticks_eqs_enter(void)
>  static void rcu_dynticks_eqs_exit(void)
>  {
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> -	int seq;
> +	int special;

Ditto.

>  
>  	/*
> -	 * CPUs seeing atomic_add_return() must see prior idle sojourns,
> +	 * CPUs seeing atomic_inc_return() must see prior idle sojourns,
>  	 * and we also must force ordering with the next RCU read-side
>  	 * critical section.
>  	 */
> -	seq = atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
> -	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) &&
> -		     !(seq & RCU_DYNTICK_CTRL_CTR));
> -	if (seq & RCU_DYNTICK_CTRL_MASK) {
> -		atomic_andnot(RCU_DYNTICK_CTRL_MASK, &rdp->dynticks);
> -		smp_mb__after_atomic(); /* _exit after clearing mask. */
> -		/* Prefer duplicate flushes to losing a flush. */
> -		rcu_eqs_special_exit();
> -	}
> +	special = atomic_inc_return(&rdp->dynticks);
> +	WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(special & 0x1));
>  }
>  
>  /*
> @@ -284,9 +262,9 @@ static void rcu_dynticks_eqs_online(void)
>  {
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
>  
> -	if (atomic_read(&rdp->dynticks) & RCU_DYNTICK_CTRL_CTR)
> +	if (atomic_read(&rdp->dynticks) & 0x1)
>  		return;
> -	atomic_add(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
> +	atomic_add(0x1, &rdp->dynticks);

This could be atomic_inc(), right?
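
For readers following along, a stand-alone user-space sketch of the
equivalence in question, using C11 atomics rather than the kernel's
atomic_t API (the dynticks variable below is illustrative only, not
the kernel's per-CPU counter):

	#include <assert.h>
	#include <stdatomic.h>

	int main(void)
	{
		atomic_int dynticks = 1;	/* mimics ATOMIC_INIT(1) */

		/* What the patch writes: add the constant 0x1. */
		atomic_fetch_add(&dynticks, 0x1);
		assert(atomic_load(&dynticks) == 2);

		/* The suggested spelling: in the kernel API,
		 * atomic_inc(&rdp->dynticks) performs the same transition
		 * and is the idiomatic form of atomic_add(0x1, ...). */
		atomic_fetch_add(&dynticks, 1);
		assert(atomic_load(&dynticks) == 3);
		return 0;
	}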
>  }
>  
>  /*
> @@ -298,7 +276,7 @@ bool rcu_dynticks_curr_cpu_in_eqs(void)
>  {
>  	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
>  
> -	return !(atomic_read(&rdp->dynticks) & RCU_DYNTICK_CTRL_CTR);
> +	return !(atomic_read(&rdp->dynticks) & 0x1);
>  }
>  
>  /*
> @@ -309,7 +287,7 @@ int rcu_dynticks_snap(struct rcu_data *rdp)
>  {
>  	int snap = atomic_add_return(0, &rdp->dynticks);
>  
> -	return snap & ~RCU_DYNTICK_CTRL_MASK;
> +	return snap;
>  }
>  
>  /*
> @@ -318,7 +296,7 @@ int rcu_dynticks_snap(struct rcu_data *rdp)
>   */
>  static bool rcu_dynticks_in_eqs(int snap)
>  {
> -	return !(snap & RCU_DYNTICK_CTRL_CTR);
> +	return !(snap & 0x1);
>  }
>  
>  /*
> @@ -331,28 +309,6 @@ static bool rcu_dynticks_in_eqs_since(struct rcu_data *rdp, int snap)
>  	return snap != rcu_dynticks_snap(rdp);
>  }
>  
> -/*
> - * Set the special (bottom) bit of the specified CPU so that it
> - * will take special action (such as flushing its TLB) on the
> - * next exit from an extended quiescent state.  Returns true if
> - * the bit was successfully set, or false if the CPU was not in
> - * an extended quiescent state.
> - */
> -bool rcu_eqs_special_set(int cpu)
> -{
> -	int old;
> -	int new;
> -	struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
> -
> -	do {
> -		old = atomic_read(&rdp->dynticks);
> -		if (old & RCU_DYNTICK_CTRL_CTR)
> -			return false;
> -		new = old | RCU_DYNTICK_CTRL_MASK;
> -	} while (atomic_cmpxchg(&rdp->dynticks, old, new) != old);
> -	return true;
> -}
> -
>  /*
>   * Let the RCU core know that this CPU has gone through the scheduler,
>   * which is a quiescent state.  This is called when the need for a
> @@ -366,13 +322,13 @@ bool rcu_eqs_special_set(int cpu)
>   */
>  void rcu_momentary_dyntick_idle(void)
>  {
> -	int special;
> +	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> +	int special = atomic_add_return(2, &rdp->dynticks);
>  
> -	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);
> -	special = atomic_add_return(2 * RCU_DYNTICK_CTRL_CTR,
> -				    &this_cpu_ptr(&rcu_data)->dynticks);
>  	/* It is illegal to call this from idle state. */
> -	WARN_ON_ONCE(!(special & RCU_DYNTICK_CTRL_CTR));
> +	WARN_ON_ONCE(!(special & 0x1));
> +
> +	raw_cpu_write(rcu_data.rcu_need_heavy_qs, false);

Is it really OK to clear .rcu_need_heavy_qs after the atomic_add_return()?

>  	rcu_preempt_deferred_qs(current);
>  }
>  EXPORT_SYMBOL_GPL(rcu_momentary_dyntick_idle);
> -- 
> 2.23.0.187.g17f5b7556c-goog
> 
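
As an aside for readers: the rcu_eqs_special_set() function removed above
is a classic compare-and-swap retry loop.  A stand-alone user-space
rendering of that pattern, using C11 atomics and stand-in names rather
than the kernel API (a sketch for illustration, not kernel code):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define CTRL_MASK 0x1			/* "special action" bit */
	#define CTRL_CTR  (CTRL_MASK + 1)	/* counter increment unit */

	static atomic_int dynticks = CTRL_CTR;	/* CTR bit set: not in EQS */

	/* Set the bottom bit only while the counter shows an extended
	 * quiescent state (CTR bit clear); fail otherwise.  The CAS loop
	 * retries if another thread changes the counter in the meantime. */
	static bool eqs_special_set(void)
	{
		int old = atomic_load(&dynticks);

		do {
			if (old & CTRL_CTR)
				return false;	/* not in EQS */
		} while (!atomic_compare_exchange_weak(&dynticks, &old,
						       old | CTRL_MASK));
		return true;
	}

	int main(void)
	{
		printf("%d\n", eqs_special_set());	/* 0: not in EQS */
		atomic_store(&dynticks, 2 * CTRL_CTR);	/* enter EQS */
		printf("%d\n", eqs_special_set());	/* 1: bit now set */
		return 0;
	}

The ability of a remote CPU to flag such special action (the quoted
comment mentions TLB flushing as the initial use) is precisely what this
revert removes, hence the questions above about the "special" naming.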