Date: Wed, 23 May 2018 12:45:31 -0400
From: Steven Rostedt
To: "Paul E. McKenney"
McKenney" Cc: Joel Fernandes , linux-kernel@vger.kernel.org, "Joel Fernandes (Google)" , Peter Zilstra , Ingo Molnar , Boqun Feng , byungchul.park@lge.com, kernel-team@android.com, Josh Triplett , Lai Jiangshan , Mathieu Desnoyers Subject: Re: [PATCH 1/4] rcu: Speed up calling of RCU tasks callbacks Message-ID: <20180523124531.7b0e972a@gandalf.local.home> In-Reply-To: <20180523155734.GK3803@linux.vnet.ibm.com> References: <20180523063815.198302-1-joel@joelfernandes.org> <20180523063815.198302-2-joel@joelfernandes.org> <20180523155734.GK3803@linux.vnet.ibm.com> X-Mailer: Claws Mail 3.16.0 (GTK+ 2.24.32; x86_64-pc-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, 23 May 2018 08:57:34 -0700 "Paul E. McKenney" wrote: > On Tue, May 22, 2018 at 11:38:12PM -0700, Joel Fernandes wrote: > > From: "Joel Fernandes (Google)" > > > > RCU tasks callbacks can take at least 1 second before the callbacks are > > executed. This happens even if the hold-out tasks enter their quiescent states > > quickly. I noticed this when I was testing trampoline callback execution. > > > > To test the trampoline freeing, I wrote a simple script: > > cd /sys/kernel/debug/tracing/ > > echo '__schedule_bug:traceon' > set_ftrace_filter; > > echo '!__schedule_bug:traceon' > set_ftrace_filter; > > > > In the background I had simple bash while loop: > > while [ 1 ]; do x=1; done & > > > > Total time of completion of above commands in seconds: > > > > With this patch: > > real 0m0.179s > > user 0m0.000s > > sys 0m0.054s > > > > Without this patch: > > real 0m1.098s > > user 0m0.000s > > sys 0m0.053s > > > > That's a greater than 6X speed up in performance. In order to accomplish > > this, I am waiting for HZ/10 time before entering the hold-out checking > > loop. The loop still preserves its checking of held tasks every 1 second > > as before, in case this first test doesn't succeed. > > > > Cc: Steven Rostedt > > Given an ack from Steven, I would be happy to take this, give or take > some nits below. I'm currently testing it, and trying to understand it better. > > Thanx, Paul > > > Cc: Peter Zilstra > > Cc: Ingo Molnar > > Cc: Boqun Feng > > Cc: Paul McKenney > > Cc: byungchul.park@lge.com > > Cc: kernel-team@android.com > > Signed-off-by: Joel Fernandes (Google) > > --- > > kernel/rcu/update.c | 12 +++++++++++- > > 1 file changed, 11 insertions(+), 1 deletion(-) > > > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c > > index 5783bdf86e5a..a28698e44b08 100644 > > --- a/kernel/rcu/update.c > > +++ b/kernel/rcu/update.c > > @@ -743,6 +743,12 @@ static int __noreturn rcu_tasks_kthread(void *arg) > > */ > > synchronize_srcu(&tasks_rcu_exit_srcu); > > > > + /* > > + * Wait a little bit incase held tasks are released > > in case > > > + * during their next timer ticks. 
> > +		 */
> > +		schedule_timeout_interruptible(HZ/10);
> > +
> >  		/*
> >  		 * Each pass through the following loop scans the list
> >  		 * of holdout tasks, removing any that are no longer
> > @@ -755,7 +761,6 @@ static int __noreturn rcu_tasks_kthread(void *arg)
> >  			int rtst;
> >  			struct task_struct *t1;
> > 
> > -			schedule_timeout_interruptible(HZ);
> >  			rtst = READ_ONCE(rcu_task_stall_timeout);
> >  			needreport = rtst > 0 &&
> >  				     time_after(jiffies, lastreport + rtst);
> > @@ -768,6 +773,11 @@ static int __noreturn rcu_tasks_kthread(void *arg)
> >  				check_holdout_task(t, needreport, &firstreport);
> >  				cond_resched();
> >  			}
> > +
> > +			if (list_empty(&rcu_tasks_holdouts))
> > +				break;
> > +
> > +			schedule_timeout_interruptible(HZ);

Why is this a full second wait and not the HZ/10 like the others?

-- Steve

> 
> Is there a better way to do this? Can this be converted into a for-loop?
> Alternatively, would it make sense to have a firsttime local variable
> initialized to true, to keep the schedule_timeout_interruptible() at
> the beginning of the loop, but skip it on the first pass through the loop?
> 
> Don't get me wrong, what you have looks functionally correct, but
> duplicating the condition might cause problems later on, for example,
> should a bug fix be needed in the condition.
> 
> >  		}
> > 
> >  		/*
> > -- 
> > 2.17.0.441.gb46fe60e1d-goog
> > 
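
For concreteness, Paul's for-loop/"firsttime" suggestion could look roughly
like the control-flow sketch below. This is only an illustration against the
patch as posted, not the actual kernel code; the body of each pass (the
stall-report setup and the check_holdout_task() scan) is elided and would
stay exactly as in the patch.

	bool firsttime = true;

	/* Short initial wait, as added by the patch. */
	schedule_timeout_interruptible(HZ / 10);

	/*
	 * The sleep sits after the emptiness test, so list_empty() is
	 * checked in exactly one place and the loop exits without an
	 * extra 1-second sleep once the holdouts are gone.
	 */
	while (!list_empty(&rcu_tasks_holdouts)) {
		if (!firsttime)
			schedule_timeout_interruptible(HZ);
		firsttime = false;

		/* ... scan rcu_tasks_holdouts and call check_holdout_task()
		 *     on each entry, unchanged from the patch above ... */
	}

Written this way, the duplicated list_empty() condition Paul points out goes
away, while later passes keep the existing 1-second pacing.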