Date: Fri, 20 Nov 2020 16:37:20 -0800
From: "Paul E. McKenney"
To: Neeraj Upadhyay
Cc: rcu@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
	mingo@kernel.org, jiangshanlai@gmail.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org,
	tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org,
	dhowells@redhat.com, edumazet@google.com, fweisbec@gmail.com,
	oleg@redhat.com, joel@joelfernandes.org
Subject: Re: [PATCH RFC tip/core/rcu 3/5] srcu: Provide internal interface to start a Tree SRCU grace period
Message-ID: <20201121003720.GQ1437@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20201117004017.GA7444@paulmck-ThinkPad-P72>
 <20201117004052.14758-3-paulmck@kernel.org>
 <69c05cd0-8187-49a7-5b2d-1a10ba42fa44@codeaurora.org>
In-Reply-To: <69c05cd0-8187-49a7-5b2d-1a10ba42fa44@codeaurora.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Nov 20, 2020 at 05:06:50PM +0530, Neeraj Upadhyay wrote:
> 
> 
> On 11/17/2020 6:10 AM, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney"
> > 
> > There is a need for a polling interface for SRCU grace periods.
> > This polling needs to initiate an SRCU grace period without having
> > to queue (and manage) a callback.  This commit therefore splits the
> > Tree SRCU __call_srcu() function into callback-initialization and
> > queuing/start-grace-period portions, with the latter in a new function
> > named srcu_gp_start_if_needed().  This function may be passed a NULL
> > callback pointer, in which case it will refrain from queuing anything.
> > 
> > Why have the new function mess with queuing?  Locking considerations,
> > of course!
> > 
> > Link: https://lore.kernel.org/rcu/20201112201547.GF3365678@moria.home.lan/
> > Reported-by: Kent Overstreet
> > Signed-off-by: Paul E. McKenney
> > ---
> 
> Reviewed-by: Neeraj Upadhyay

I applied both Reviewed-bys, thank you!
							Thanx, Paul

> Thanks
> Neeraj
> 
> >  kernel/rcu/srcutree.c | 66 +++++++++++++++++++++++++++++----------------------
> >  1 file changed, 37 insertions(+), 29 deletions(-)
> > 
> > diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> > index 79b7081..d930ece 100644
> > --- a/kernel/rcu/srcutree.c
> > +++ b/kernel/rcu/srcutree.c
> > @@ -808,6 +808,42 @@ static void srcu_leak_callback(struct rcu_head *rhp)
> >  }
> >  
> >  /*
> > + * Start an SRCU grace period, and also queue the callback if non-NULL.
> > + */
> > +static void srcu_gp_start_if_needed(struct srcu_struct *ssp, struct rcu_head *rhp, bool do_norm)
> > +{
> > +	unsigned long flags;
> > +	int idx;
> > +	bool needexp = false;
> > +	bool needgp = false;
> > +	unsigned long s;
> > +	struct srcu_data *sdp;
> > +
> > +	idx = srcu_read_lock(ssp);
> > +	sdp = raw_cpu_ptr(ssp->sda);
> > +	spin_lock_irqsave_rcu_node(sdp, flags);
> > +	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
> > +	rcu_segcblist_advance(&sdp->srcu_cblist,
> > +			      rcu_seq_current(&ssp->srcu_gp_seq));
> > +	s = rcu_seq_snap(&ssp->srcu_gp_seq);
> > +	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
> > +	if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
> > +		sdp->srcu_gp_seq_needed = s;
> > +		needgp = true;
> > +	}
> > +	if (!do_norm && ULONG_CMP_LT(sdp->srcu_gp_seq_needed_exp, s)) {
> > +		sdp->srcu_gp_seq_needed_exp = s;
> > +		needexp = true;
> > +	}
> > +	spin_unlock_irqrestore_rcu_node(sdp, flags);
> > +	if (needgp)
> > +		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
> > +	else if (needexp)
> > +		srcu_funnel_exp_start(ssp, sdp->mynode, s);
> > +	srcu_read_unlock(ssp, idx);
> > +}
> > +
> > +/*
> >   * Enqueue an SRCU callback on the srcu_data structure associated with
> >   * the current CPU and the specified srcu_struct structure, initiating
> >   * grace-period processing if it is not already running.
> > @@ -838,13 +874,6 @@ static void srcu_leak_callback(struct rcu_head *rhp)
> >  static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
> >  			rcu_callback_t func, bool do_norm)
> >  {
> > -	unsigned long flags;
> > -	int idx;
> > -	bool needexp = false;
> > -	bool needgp = false;
> > -	unsigned long s;
> > -	struct srcu_data *sdp;
> > -
> >  	check_init_srcu_struct(ssp);
> >  	if (debug_rcu_head_queue(rhp)) {
> >  		/* Probable double call_srcu(), so leak the callback. */
> > @@ -853,28 +882,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
> >  		return;
> >  	}
> >  	rhp->func = func;
> > -	idx = srcu_read_lock(ssp);
> > -	sdp = raw_cpu_ptr(ssp->sda);
> > -	spin_lock_irqsave_rcu_node(sdp, flags);
> > -	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
> > -	rcu_segcblist_advance(&sdp->srcu_cblist,
> > -			      rcu_seq_current(&ssp->srcu_gp_seq));
> > -	s = rcu_seq_snap(&ssp->srcu_gp_seq);
> > -	(void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s);
> > -	if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) {
> > -		sdp->srcu_gp_seq_needed = s;
> > -		needgp = true;
> > -	}
> > -	if (!do_norm && ULONG_CMP_LT(sdp->srcu_gp_seq_needed_exp, s)) {
> > -		sdp->srcu_gp_seq_needed_exp = s;
> > -		needexp = true;
> > -	}
> > -	spin_unlock_irqrestore_rcu_node(sdp, flags);
> > -	if (needgp)
> > -		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
> > -	else if (needexp)
> > -		srcu_funnel_exp_start(ssp, sdp->mynode, s);
> > -	srcu_read_unlock(ssp, idx);
> > +	srcu_gp_start_if_needed(ssp, rhp, do_norm);
> >  }
> >  
> >  /**
> > 
> 
> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of
> the Code Aurora Forum, hosted by The Linux Foundation