From: Joel Fernandes
Date: Thu, 6 Jul 2017 22:58:31 -0700
Subject: Re: [PATCH v2 4/6] cpufreq: schedutil: update CFS util only if used
To: Patrick Bellasi
Cc: LKML, Linux PM, Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Juri Lelli, Andres Oportus, Todd Kjos, Morten Rasmussen, Dietmar Eggemann

On Tue, Jul 4, 2017 at 10:34 AM, Patrick Bellasi wrote:
> Currently the utilization of the FAIR class is collected before locking
> the policy. Although that should not be a big issue for most cases, we
> also don't really know how much latency there can be between the
> utilization reading and its usage.
>
> Let's get the FAIR utilization right before its usage to be better in
> sync with the current status of a CPU.
>
> Signed-off-by: Patrick Bellasi
> Cc: Ingo Molnar
> Cc: Peter Zijlstra
> Cc: Rafael J. Wysocki
> Cc: Viresh Kumar
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> ---
>  kernel/sched/cpufreq_schedutil.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 98704d8..df433f1 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -308,10 +308,9 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
>         if (unlikely(current == sg_policy->thread))
>                 return;
>
> -       sugov_get_util(&util, &max);
> -
>         raw_spin_lock(&sg_policy->update_lock);
>
> +       sugov_get_util(&util, &max);
>         sg_cpu->util = util;
>         sg_cpu->max = max;

Is your concern that there will be spinlock contention before calling
sugov_get_util()? If that is the case, then with your patch such
contention (and hence spinning) could itself inflate the utilization,
so calling sugov_get_util() after acquiring the lock would be less
accurate than before. In that case it seems better to keep calling
sugov_get_util() before acquiring the lock, as the current code does.

thanks,

-Joel