Date: Sat, 12 May 2018 16:53:01 -0700
From: Joel Fernandes
To: "Paul E. McKenney"
McKenney" Cc: linux-kernel@vger.kernel.org, mingo@kernel.org, jiangshanlai@gmail.com, dipankar@in.ibm.com, akpm@linux-foundation.org, mathieu.desnoyers@efficios.com, josh@joshtriplett.org, tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org, dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com, oleg@redhat.com, joel.opensrc@gmail.com, torvalds@linux-foundation.org, npiggin@gmail.com Subject: Re: [tip/core/rcu,16/21] rcu: Add funnel locking to rcu_start_this_gp() Message-ID: <20180512235301.GD192642@joelaf.mtv.corp.google.com> References: <1524452624-27589-16-git-send-email-paulmck@linux.vnet.ibm.com> <20180512060325.GA53808@joelaf.mtv.corp.google.com> <20180512144002.GI26088@linux.vnet.ibm.com> <20180512144438.GA12826@linux.vnet.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180512144438.GA12826@linux.vnet.ibm.com> User-Agent: Mutt/1.9.2 (2017-12-15) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sat, May 12, 2018 at 07:44:38AM -0700, Paul E. McKenney wrote: > On Sat, May 12, 2018 at 07:40:02AM -0700, Paul E. McKenney wrote: > > On Fri, May 11, 2018 at 11:03:25PM -0700, Joel Fernandes wrote: > > > On Sun, Apr 22, 2018 at 08:03:39PM -0700, Paul E. McKenney wrote: > > > > The rcu_start_this_gp() function had a simple form of funnel locking that > > > > used only the leaves and root of the rcu_node tree, which is fine for > > > > systems with only a few hundred CPUs, but sub-optimal for systems having > > > > thousands of CPUs. This commit therefore adds full-tree funnel locking. > > > > > > > > This variant of funnel locking is unusual in the following ways: > > > > > > > > 1. The leaf-level rcu_node structure's ->lock is held throughout. > > > > Other funnel-locking implementations drop the leaf-level lock > > > > before progressing to the next level of the tree. > > > > > > > > 2. Funnel locking can be started at the root, which is convenient > > > > for code that already holds the root rcu_node structure's ->lock. > > > > Other funnel-locking implementations start at the leaves. > > > > > > > > 3. If an rcu_node structure other than the initial one believes > > > > that a grace period is in progress, it is not necessary to > > > > go further up the tree. This is because grace-period cleanup > > > > scans the full tree, so that marking the need for a subsequent > > > > grace period anywhere in the tree suffices -- but only if > > > > a grace period is currently in progress. > > > > > > > > 4. It is possible that the RCU grace-period kthread has not yet > > > > started, and this case must be handled appropriately. > > > > > > > > However, the general approach of using a tree to control lock contention > > > > is still in place. > > > > > > > > Signed-off-by: Paul E. 
> > > > ---
> > > >  kernel/rcu/tree.c | 92 +++++++++++++++++++++----------------------------------
> > > >  1 file changed, 35 insertions(+), 57 deletions(-)
> > > >
> > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > index 94519c7d552f..d3c769502929 100644
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -1682,74 +1682,52 @@ static bool rcu_start_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
> > > >  {
> > > >  	bool ret = false;
> > > >  	struct rcu_state *rsp = rdp->rsp;
> > > > -	struct rcu_node *rnp_root = rcu_get_root(rsp);
> > > > -
> > > > -	raw_lockdep_assert_held_rcu_node(rnp);
> > > > -
> > > > -	/* If the specified GP is already known needed, return to caller. */
> > > > -	trace_rcu_this_gp(rnp, rdp, c, TPS("Startleaf"));
> > > > -	if (need_future_gp_element(rnp, c)) {
> > > > -		trace_rcu_this_gp(rnp, rdp, c, TPS("Prestartleaf"));
> > > > -		goto out;
> > > > -	}
> > > > +	struct rcu_node *rnp_root;
> > > >
> > > >  	/*
> > > > -	 * If this rcu_node structure believes that a grace period is in
> > > > -	 * progress, then we must wait for the one following, which is in
> > > > -	 * "c".  Because our request will be noticed at the end of the
> > > > -	 * current grace period, we don't need to explicitly start one.
> > > > +	 * Use funnel locking to either acquire the root rcu_node
> > > > +	 * structure's lock or bail out if the need for this grace period
> > > > +	 * has already been recorded -- or has already started.  If there
> > > > +	 * is already a grace period in progress in a non-leaf node, no
> > > > +	 * recording is needed because the end of the grace period will
> > > > +	 * scan the leaf rcu_node structures.  Note that rnp->lock must
> > > > +	 * not be released.
> > > >  	 */
> > > > -	if (rnp->gpnum != rnp->completed) {
> > > > -		need_future_gp_element(rnp, c) = true;
> > > > -		trace_rcu_this_gp(rnp, rdp, c, TPS("Startedleaf"));
> > > > -		goto out;
> > >
> > > Referring to the above negative diff as [1] (which I wanted to refer to
> > > later in this message..)
> > >
> > > > +	raw_lockdep_assert_held_rcu_node(rnp);
> > > > +	trace_rcu_this_gp(rnp, rdp, c, TPS("Startleaf"));
> > > > +	for (rnp_root = rnp; 1; rnp_root = rnp_root->parent) {
> > > > +		if (rnp_root != rnp)
> > > > +			raw_spin_lock_rcu_node(rnp_root);
> > > > +		if (need_future_gp_element(rnp_root, c) ||
> > > > +		    ULONG_CMP_GE(rnp_root->gpnum, c) ||
> > > > +		    (rnp != rnp_root &&
> > > > +		     rnp_root->gpnum != rnp_root->completed)) {
> > > > +			trace_rcu_this_gp(rnp_root, rdp, c, TPS("Prestarted"));
> > > > +			goto unlock_out;
> > >
> > > I was a bit confused about the implementation of the above for loop:
> > >
> > > In the previous code (which I refer to in the negative diff [1]), we were
> > > checking the leaf, and if the leaf believed that RCU was not idle, then we
> > > were marking the need for the future GP and quitting this function.  In
> > > the new code, it seems like even if the leaf believes RCU is not idle, we
> > > still go all the way up the tree.
> > >
> > > I think the big change is that in the above new for loop, we either bail
> > > out if a future GP need was already marked by an intermediate node, or we
> > > go on marking up the whole tree about the need for one.
> > >
> > > If a leaf believes RCU is not idle, can we not just mark the future GP
> > > need like before and return?  It seems we would otherwise increase the
> > > lock contention, since now we lock intermediate nodes and then finally
> > > even the root, whereas before we were not doing that if the leaf believed
> > > RCU was not idle.
> > >
> > > I am sorry if I missed something obvious.
> >
> > The trick is that we do the check before we have done the marking.
> > So if we bailed, we would not have marked at all.  If we are at an
> > intermediate node and a grace period is in progress, we do bail.
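
Got it, thanks.  Just to check my understanding, the shape of the loop in
16/21 is then roughly the below -- this is only my restatement of the diff
above with locking and tracing elided, and I am assuming the loop exits at
the root (that part is not visible in the hunk):

	for (rnp_root = rnp; 1; rnp_root = rnp_root->parent) {
		/*
		 * Check first: bail if the need for GP "c" is already
		 * recorded at this level, if "c" has already started, or
		 * if a non-leaf node sees a GP in progress (end-of-GP
		 * cleanup scans the leaves, so no marking is needed).
		 */
		if (need_future_gp_element(rnp_root, c) ||
		    ULONG_CMP_GE(rnp_root->gpnum, c) ||
		    (rnp != rnp_root &&
		     rnp_root->gpnum != rnp_root->completed))
			goto unlock_out;
		/* Only then mark this level and continue toward the root. */
		need_future_gp_element(rnp_root, c) = true;
		if (rnp_root == rcu_get_root(rsp))
			break;
	}

That is, at each level the check always precedes the mark, so bailing early
never leaves a stray mark behind.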
> > You are right that this means that we (perhaps unnecessarily) acquire
> > the lock of the parent rcu_node, which might or might not be the root.
> > And on systems with default fanout with 1024 CPUs or fewer, yes, it will
> > be the root, and yes, this is the common case.  So might be well worth
> > improving.
> >
> > One way to implement the old mark-and-return approach as you suggest
> > above would be as shown below (untested, probably doesn't build, and
> > against current rcu/dev).  What do you think?
> >
> > > The other thing is we now don't have the 'Startedleaf' trace like we
> > > did before.  I sent a patch to remove it, but I think the removal of
> > > that is somehow connected to what I was just talking about.. and I was
> > > thinking if we should really remove it.  Should we add the case for
> > > checking leaves only back, or is that a bad thing to do?
> >
> > Suppose I got hit by a bus and you were stuck with the job of debugging
> > this.  What traces would you want and where would they be?  Keeping in
> > mind that too-frequent traces have their problems as well.
> >
> > (Yes, I will be trying very hard to avoid this scenario for as long as
> > I can, but this might be a good way for you (and everyone else) to be
> > thinking about this.)
> >
> > 							Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 1abe29a43944..abf3195e01dc 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1585,6 +1585,8 @@ static bool rcu_start_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
> >  		goto unlock_out;
> >  	}
> >  	rnp_root->gp_seq_needed = c;
> > +	if (rcu_seq_state(rcu_seq_current(&rnp_root->gp_seq)))
>
> > +	if (rcu_seq_state(rcu_seq_current(&rnp_root->gp_seq)))
>
> Right...  Make that rnp->gp_seq.  Memory locality and all that...
>
> 							Thanx, Paul

Yes, I think this condition would be right to add.  I could roll it into my
clean-up patch.

Also, I think it's better if we split the conditions for "prestarted" into
separate if conditions and comment them so it's clear; I have started to do
that in my tree.
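
Roughly what I have in mind is the below (the split and the comments are
mine; the behavior is meant to be identical to the combined condition in
16/21):

	/* The need for grace period "c" is already recorded at this level. */
	if (need_future_gp_element(rnp_root, c))
		goto unlock_out;

	/* Grace period "c" has already started (or even completed). */
	if (ULONG_CMP_GE(rnp_root->gpnum, c))
		goto unlock_out;

	/*
	 * A non-leaf node sees a grace period in progress, so the
	 * end-of-GP cleanup will scan the leaves and notice our request;
	 * no need to mark any further up the tree.
	 */
	if (rnp != rnp_root && rnp_root->gpnum != rnp_root->completed)
		goto unlock_out;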
If you don't mind going through the if conditions in the funnel-locking loop
with me, it would be quite helpful so that I don't mess the code up, and it
would also help me add tracing correctly.  The if condition for "prestarted"
is this:

	if (need_future_gp_element(rnp_root, c) ||
	    ULONG_CMP_GE(rnp_root->gpnum, c) ||
	    (rnp != rnp_root &&
	     rnp_root->gpnum != rnp_root->completed)) {
		trace_rcu_this_gp(rnp_root, rdp, c, TPS("Prestarted"));
		goto unlock_out;
	}
	need_future_gp_element(rnp_root, c) = true;

As of 16/21, the heart of the loop is the above (excluding the locking bits).
In this, what confuses me is the second and the third condition for
"prestarted".

The second condition is: ULONG_CMP_GE(rnp_root->gpnum, c).  AIUI the goal of
this condition is to check whether the requested grace period has already
started.  I believe the above check is insufficient, because I think we
should also check the state of the grace period to augment this check.  IMO
the condition should really be:

	(ULONG_CMP_GT(rnp_root->gpnum, c) ||
	 (rnp_root->gpnum == c && rnp_root->gpnum != rnp_root->completed))

In a later patch you replaced this with rcu_seq_done(&rnp_root->gp_seq, c),
which kind of accounts for the state, except that rcu_seq_done uses
ULONG_CMP_GE, whereas to fix this, rcu_seq_done IMO should be using
ULONG_CMP_GT to be equivalent to the above check.  Do you agree?

The third condition for "prestarted" is:

	(rnp != rnp_root && rnp_root->gpnum != rnp_root->completed)

This, as I followed from your commit message, is: if an intermediate node
thinks RCU is non-idle, then it is not necessary to mark the tree, and we can
bail out, since the clean-up will scan the whole tree anyway.  That makes
sense to me, but I think I would like to squash the diff in your previous
email into this condition as well, to handle both cases together.

thanks,

- Joel
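
P.S. For reference while we discuss the GE vs. GT distinction above, my
understanding of the current helpers is the below (lightly paraphrased from
the tree; please correct me if I misremember any of them):

	/* Wraparound-safe: true if "a" is at or past "b" modulo ULONG_MAX. */
	#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

	/* Low-order bits of a ->gp_seq value encode the GP state. */
	static inline int rcu_seq_state(unsigned long s)
	{
		return s & RCU_SEQ_STATE_MASK;
	}

	/* True if the grace period identified by "s" has completed. */
	static inline bool rcu_seq_done(unsigned long *sp, unsigned long s)
	{
		return ULONG_CMP_GE(READ_ONCE(*sp), s);
	}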