Date: Tue, 28 Mar 2017 10:46:00 -0400
From: Steven Rostedt
To: Dietmar Eggemann
Cc: Peter Zijlstra, Ingo Molnar, LKML, Matt Fleming, Vincent Guittot,
 Morten Rasmussen, Juri Lelli, Patrick Bellasi
Subject: Re: [RFC PATCH 2/5] sched/events: Introduce cfs_rq load tracking trace event
Message-ID: <20170328104600.18d36cb0@gandalf.local.home>
In-Reply-To: <20170328063541.12912-3-dietmar.eggemann@arm.com>
References: <20170328063541.12912-1-dietmar.eggemann@arm.com>
 <20170328063541.12912-3-dietmar.eggemann@arm.com>
X-Mailer: Claws Mail 3.14.0 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 28 Mar 2017 07:35:38 +0100
Dietmar Eggemann wrote:

>  /* This part must be outside protection */
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 03adf9fb48b1..ac19ab6ced8f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2950,6 +2950,9 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>  		sa->util_avg = sa->util_sum / LOAD_AVG_MAX;
>  	}
>  
> +	if (cfs_rq)
> +		trace_sched_load_cfs_rq(cfs_rq);
> +

Please use TRACE_EVENT_CONDITION(), and test for cfs_rq not being NULL.

That way the if (cfs_rq) moves out of the scheduler code and into the
jump-label-protected location. That is, the if is only tested when
tracing is enabled.

-- Steve

>  	return decayed;
>  }
>  
> @@ -3170,6 +3173,8 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
>  	update_tg_cfs_util(cfs_rq, se);
>  	update_tg_cfs_load(cfs_rq, se);
>  
> +	trace_sched_load_cfs_rq(cfs_rq);
> +
>  	return 1;
>  }
>  
> @@ -3359,6 +3364,8 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  	set_tg_cfs_propagate(cfs_rq);
>  
>  	cfs_rq_util_change(cfs_rq);
> +
> +	trace_sched_load_cfs_rq(cfs_rq);
>  }
>  
>  /**
> @@ -3379,6 +3386,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  	set_tg_cfs_propagate(cfs_rq);
>  
>  	cfs_rq_util_change(cfs_rq);
> +
> +	trace_sched_load_cfs_rq(cfs_rq);
>  }
>  
>  /* Add the load generated by se into cfs_rq's load average */
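
A minimal sketch of the TRACE_EVENT_CONDITION() variant suggested
above. The event fields (cpu, load_avg, util_avg) and the
cpu_of(rq_of()) accessors are illustrative assumptions, not taken from
the actual patch:

TRACE_EVENT_CONDITION(sched_load_cfs_rq,

	TP_PROTO(struct cfs_rq *cfs_rq),

	TP_ARGS(cfs_rq),

	/*
	 * The NULL check lives here. It sits behind the tracepoint's
	 * static-key branch, so it is only evaluated once the event
	 * has actually been enabled.
	 */
	TP_CONDITION(cfs_rq),

	TP_STRUCT__entry(
		__field(	int,		cpu		)
		__field(	unsigned long,	load_avg	)
		__field(	unsigned long,	util_avg	)
	),

	TP_fast_assign(
		/* assumes cpu_of()/rq_of() are visible to this header */
		__entry->cpu		= cpu_of(rq_of(cfs_rq));
		__entry->load_avg	= cfs_rq->avg.load_avg;
		__entry->util_avg	= cfs_rq->avg.util_avg;
	),

	TP_printk("cpu=%d load_avg=%lu util_avg=%lu",
		  __entry->cpu, __entry->load_avg, __entry->util_avg)
);

The call sites can then drop their explicit branch, e.g. in
__update_load_avg():

	trace_sched_load_cfs_rq(cfs_rq);	/* NULL filtered by TP_CONDITION() */

With the event disabled, the jump label skips the call entirely,
condition included; with it enabled, TP_CONDITION() discards the
cfs_rq == NULL case before any fields are recorded.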