Date: Thu, 20 Jun 2013 14:01:07 -0700
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Frederic Weisbecker
Cc: LKML, Ingo Molnar, Li Zhong, Peter Zijlstra, Steven Rostedt,
	Thomas Gleixner, Borislav Petkov, Alex Shi, Paul Turner,
	Mike Galbraith, Vincent Guittot
Subject: Re: [RFC PATCH 2/4] sched: Consolidate nohz cpu load prelude code
Message-ID: <20130620210107.GM4082@linux.vnet.ibm.com>
References: <1371761141-25386-1-git-send-email-fweisbec@gmail.com>
	<1371761141-25386-3-git-send-email-fweisbec@gmail.com>
In-Reply-To: <1371761141-25386-3-git-send-email-fweisbec@gmail.com>

On Thu, Jun 20, 2013 at 10:45:39PM +0200, Frederic Weisbecker wrote:
> Gather the common code that computes the pending idle cpu load
> to decay.
>
> Signed-off-by: Frederic Weisbecker
> Cc: Ingo Molnar
> Cc: Li Zhong
> Cc: Paul E. McKenney
> Cc: Peter Zijlstra
> Cc: Steven Rostedt
> Cc: Thomas Gleixner
> Cc: Borislav Petkov
> Cc: Alex Shi
> Cc: Paul Turner
> Cc: Mike Galbraith
> Cc: Vincent Guittot
> ---
>  kernel/sched/proc.c | 40 ++++++++++++++--------------------------
>  1 files changed, 14 insertions(+), 26 deletions(-)
>
> diff --git a/kernel/sched/proc.c b/kernel/sched/proc.c
> index bb3a6a0..030528a 100644
> --- a/kernel/sched/proc.c
> +++ b/kernel/sched/proc.c
> @@ -470,11 +470,14 @@ decay_load_missed(unsigned long load, unsigned long missed_updates, int idx)
>   * scheduler tick (TICK_NSEC). With tickless idle this will not be called
>   * every tick. We fix it up based on jiffies.
>   */
> -static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
> -			      unsigned long pending_updates)
> +static void __update_cpu_load(struct rq *this_rq, unsigned long this_load)
>  {
> +	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);

Isn't jiffies declared volatile?  (Looks that way to me.)  If so,
there is no need for ACCESS_ONCE().
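For reference, here is roughly what the two definitions boil down to
(paraphrasing <linux/jiffies.h> and <linux/compiler.h>, so treat the
exact decorations as approximate):

	/* jiffies is already volatile-qualified: */
	extern unsigned long volatile jiffies;

	/* ...and ACCESS_ONCE() just adds a volatile cast: */
	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

Casting an already-volatile object to volatile is a no-op, so a plain
read of jiffies is just as much a single load from memory as
ACCESS_ONCE(jiffies) is.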
> +	unsigned long pending_updates;
>  	int i, scale;
>
> +	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
> +	this_rq->last_load_update_tick = curr_jiffies;
>  	this_rq->nr_load_updates++;
>
>  	/* Update our load: */
> @@ -521,20 +524,15 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>   */
>  void update_idle_cpu_load(struct rq *this_rq)
>  {
> -	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
>  	unsigned long load = this_rq->load.weight;
> -	unsigned long pending_updates;
>
>  	/*
>  	 * bail if there's load or we're actually up-to-date.
>  	 */
> -	if (load || curr_jiffies == this_rq->last_load_update_tick)
> +	if (load || jiffies == this_rq->last_load_update_tick)
>  		return;
>
> -	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
> -	this_rq->last_load_update_tick = curr_jiffies;
> -
> -	__update_cpu_load(this_rq, load, pending_updates);
> +	__update_cpu_load(this_rq, load);
>  }
>
>  /*
> @@ -543,22 +541,16 @@ void update_idle_cpu_load(struct rq *this_rq)
>   */
>  void update_cpu_load_nohz(void)
>  {
>  	struct rq *this_rq = this_rq();
> -	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
> -	unsigned long pending_updates;
>
> -	if (curr_jiffies == this_rq->last_load_update_tick)
> +	if (jiffies == this_rq->last_load_update_tick)
>  		return;
>
>  	raw_spin_lock(&this_rq->lock);
> -	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
> -	if (pending_updates) {
> -		this_rq->last_load_update_tick = curr_jiffies;
> -		/*
> -		 * We were idle, this means load 0, the current load might be
> -		 * !0 due to remote wakeups and the sort.
> -		 */
> -		__update_cpu_load(this_rq, 0, pending_updates);
> -	}
> +	/*
> +	 * We were idle, this means load 0, the current load might be
> +	 * !0 due to remote wakeups and the sort.
> +	 */
> +	__update_cpu_load(this_rq, 0);
>  	raw_spin_unlock(&this_rq->lock);
>  }
>  #endif /* CONFIG_NO_HZ */
> @@ -568,11 +560,7 @@ void update_cpu_load_nohz(void)
>   */
>  void update_cpu_load_active(struct rq *this_rq)
>  {
> -	/*
> -	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
> -	 */
> -	this_rq->last_load_update_tick = jiffies;
> -	__update_cpu_load(this_rq, this_rq->load.weight, 1);
> +	__update_cpu_load(this_rq, this_rq->load.weight);
>
>  	calc_load_account_active(this_rq);
>  }
> --
> 1.7.5.4
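For anyone reading along, the net effect of the consolidation is roughly
the following (a compressed sketch of the post-patch code, not the
verbatim kernel source):

	static void __update_cpu_load(struct rq *this_rq, unsigned long this_load)
	{
		unsigned long curr_jiffies = jiffies;	/* volatile, see above */
		unsigned long pending_updates;

		/* The jiffies bookkeeping now lives in exactly one place: */
		pending_updates = curr_jiffies - this_rq->last_load_update_tick;
		this_rq->last_load_update_tick = curr_jiffies;
		this_rq->nr_load_updates++;

		/* ... decay cpu_load[] over the pending_updates missed ticks ... */
	}

with update_idle_cpu_load(), update_cpu_load_nohz(), and
update_cpu_load_active() each reduced to a bail-out check plus a call to
__update_cpu_load(), instead of each carrying its own copy of the
pending_updates computation.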