Date: Mon, 21 May 2018 16:13:57 -0700
From: "Paul E. McKenney"
To: Joel Fernandes
Cc: linux-kernel@vger.kernel.org, Joel Fernandes, Josh Triplett,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan,
	byungchul.park@lge.com, kernel-team@android.com
Subject: Re: [PATCH v3 3/4] rcu: Use better variable names in funnel locking loop
Reply-To: paulmck@linux.vnet.ibm.com
References: <20180521044220.123933-1-joel@joelfernandes.org>
 <20180521044220.123933-4-joel@joelfernandes.org>
In-Reply-To: <20180521044220.123933-4-joel@joelfernandes.org>
Message-Id: <20180521231357.GI3803@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Sun, May 20, 2018 at 09:42:19PM -0700, Joel Fernandes wrote:
> The funnel locking loop in rcu_start_this_gp uses rcu_root as a
> temporary variable while walking the combining tree. This causes a
> tiresome exercise of a code reader reminding themselves that rcu_root
> may not be root. Lets just call it rnp, and rename other variables as
> well to be more appropriate.
>
> Original patch: https://patchwork.kernel.org/patch/10396577/
>
> Signed-off-by: Joel Fernandes

Nice!  Please see feedback interspersed below.
							Thanx, Paul

> ---
>  kernel/rcu/tree.c | 48 +++++++++++++++++++++++-----------------------
>  1 file changed, 25 insertions(+), 23 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 0ffd41ba304f..879c67a31116 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1526,7 +1526,7 @@ static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
>
>  /*
>   * rcu_start_this_gp - Request the start of a particular grace period
> - * @rnp: The leaf node of the CPU from which to start.
> + * @rnp_start: The leaf node of the CPU from which to start.
>   * @rdp: The rcu_data corresponding to the CPU from which to start.
>   * @gp_seq_req: The gp_seq of the grace period to start.
>   *
> @@ -1540,12 +1540,12 @@ static void trace_rcu_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
>   *
>   * Returns true if the GP thread needs to be awakened else false.
>   */
> -static bool rcu_start_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
> +static bool rcu_start_this_gp(struct rcu_node *rnp_start, struct rcu_data *rdp,
>  			      unsigned long gp_seq_req)
>  {
>  	bool ret = false;
>  	struct rcu_state *rsp = rdp->rsp;
> -	struct rcu_node *rnp_root;
> +	struct rcu_node *rnp, *rnp_root = NULL;

Unless I am going blind, this patch really isn't using rnp_root.  It could
be removed.

>
>  	/*
>  	 * Use funnel locking to either acquire the root rcu_node
> @@ -1556,34 +1556,36 @@ static bool rcu_start_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
>  	 * scan the leaf rcu_node structures.  Note that rnp->lock must
>  	 * not be released.
>  	 */
> -	raw_lockdep_assert_held_rcu_node(rnp);
> -	trace_rcu_this_gp(rnp, rdp, gp_seq_req, TPS("Startleaf"));
> -	for (rnp_root = rnp; 1; rnp_root = rnp_root->parent) {
> -		if (rnp_root != rnp)
> -			raw_spin_lock_rcu_node(rnp_root);
> -		if (ULONG_CMP_GE(rnp_root->gp_seq_needed, gp_seq_req) ||
> -		    rcu_seq_started(&rnp_root->gp_seq, gp_seq_req) ||
> -		    (rnp != rnp_root &&
> -		     rcu_seq_state(rcu_seq_current(&rnp_root->gp_seq)))) {
> -			trace_rcu_this_gp(rnp_root, rdp, gp_seq_req,
> +	raw_lockdep_assert_held_rcu_node(rnp_start);
> +	trace_rcu_this_gp(rnp_start, rdp, gp_seq_req, TPS("Startleaf"));
> +	for (rnp = rnp_start; 1; rnp = rnp->parent) {
> +		if (rnp != rnp_start)
> +			raw_spin_lock_rcu_node(rnp);
> +		if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req) ||
> +		    rcu_seq_started(&rnp->gp_seq, gp_seq_req) ||
> +		    (rnp != rnp_start &&
> +		     rcu_seq_state(rcu_seq_current(&rnp->gp_seq)))) {
> +			trace_rcu_this_gp(rnp, rdp, gp_seq_req,
>  					  TPS("Prestarted"));
>  			goto unlock_out;
>  		}
> -		rnp_root->gp_seq_needed = gp_seq_req;
> -		if (rcu_seq_state(rcu_seq_current(&rnp->gp_seq))) {
> +		rnp->gp_seq_needed = gp_seq_req;
> +		if (rcu_seq_state(rcu_seq_current(&rnp_start->gp_seq))) {

The original had a performance bug, which is quite a bit more obvious
given the new names, so thank you for that!  The above statement should
instead be as follows:

	if (rcu_seq_state(rcu_seq_current(&rnp->gp_seq))) {

It does not make sense to keep checking the starting rcu_node because
changes to ->gp_seq happen first at the top of the tree.  So we might
take an earlier exit by checking the current rnp instead of rechecking
rnp_start over and over.

Please feel free to make this change, which is probably best as a
separate patch.  That way this rename patch can remain a straightforward
rename patch.
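For illustration only, and as a follow-up delta on top of this patch rather
than part of it, the fix would be a one-line change along these lines
(untested sketch):

	-		if (rcu_seq_state(rcu_seq_current(&rnp_start->gp_seq))) {
	+		if (rcu_seq_state(rcu_seq_current(&rnp->gp_seq))) {

That way each pass of the loop checks the rcu_node it just marked, which,
being higher up the combining tree, can see a grace period already in
progress before the leaf does.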
>  		/*
>  		 * We just marked the leaf, and a grace period
>  		 * is in progress, which means that rcu_gp_cleanup()
>  		 * will see the marking.  Bail to reduce contention.
>  		 */
> -		trace_rcu_this_gp(rnp, rdp, gp_seq_req,
> +		trace_rcu_this_gp(rnp_start, rdp, gp_seq_req,
>  				  TPS("Startedleaf"));
>  			goto unlock_out;
>  		}
> -		if (rnp_root != rnp && rnp_root->parent != NULL)
> -			raw_spin_unlock_rcu_node(rnp_root);
> -		if (!rnp_root->parent)
> +		if (rnp != rnp_start && rnp->parent != NULL)
> +			raw_spin_unlock_rcu_node(rnp);
> +		if (!rnp->parent) {
> +			rnp_root = rnp;

Since rnp_root is otherwise unused in the new version, the above statement
can be dropped along with the "if" statement's braces and the declaration
(see the sketch at the end of this message).

>  			break;  /* At root, and perhaps also leaf. */
> +		}
>  	}
>
>  	/* If GP already in progress, just leave, otherwise start one. */
> @@ -1601,11 +1603,11 @@ static bool rcu_start_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
>  	trace_rcu_grace_period(rsp->name, READ_ONCE(rsp->gp_seq), TPS("newreq"));
>  	ret = true;  /* Caller must wake GP kthread. */
>  unlock_out:
> -	if (rnp != rnp_root)
> -		raw_spin_unlock_rcu_node(rnp_root);
> +	if (rnp != rnp_start)
> +		raw_spin_unlock_rcu_node(rnp);
>  	/* Push furthest requested GP to leaf node and rcu_data structure. */
> -	if (ULONG_CMP_GE(rnp_root->gp_seq_needed, gp_seq_req)) {
> -		rnp->gp_seq_needed = gp_seq_req;
> +	if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req)) {
> +		rnp_start->gp_seq_needed = gp_seq_req;
>  		rdp->gp_seq_needed = gp_seq_req;
>  	}
>  	return ret;
> --
> 2.17.0.441.gb46fe60e1d-goog
>
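To make the rnp_root removal suggested above a bit more concrete, roughly
the following delta on top of this patch is what is meant (untested sketch,
not a formal patch).  The declaration becomes:

	-	struct rcu_node *rnp, *rnp_root = NULL;
	+	struct rcu_node *rnp;

and the loop exit in the funnel-locking walk reverts to its original
one-line form:

	-		if (!rnp->parent) {
	-			rnp_root = rnp;
	+		if (!rnp->parent)
	 			break;  /* At root, and perhaps also leaf. */
	-		}

With that, rnp alone carries the walk up the combining tree and nothing
named rnp_root remains.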