From: Joel Fernandes
Date: Wed, 19 Jul 2017 19:38:48 -0700
Subject: Re: [PATCH RFC v5] cpufreq: schedutil: Make iowait boost more energy efficient
To: Viresh Kumar
Cc: LKML, Juri Lelli, Patrick Bellasi, Andres Oportus, Dietmar Eggemann,
 Srinivas Pandruvada, Len Brown, "Rafael J. Wysocki", Ingo Molnar,
 Peter Zijlstra
In-Reply-To: <20170719061937.GB352@vireshk-i7>
References: <20170716080407.28492-1-joelaf@google.com> <20170717080441.GM352@vireshk-i7> <20170718054558.GU352@vireshk-i7> <20170719061937.GB352@vireshk-i7>

Hi Viresh,

On Tue, Jul 18, 2017 at 11:19 PM, Viresh Kumar wrote:
> On 18-07-17, 21:39, Joel Fernandes wrote:
>> Not really; to me B will still work, because in the case where the
>> flag is set we are correctly double boosting in the next cycle.
>>
>> Taking an example, with B = flag is set, D = flag is not set, and
>> F = Fmin (minimum):
>>
>>   iowait flag      B    B    B    D    D    D
>>   resulting boost  F    2*F  4*F  4*F  2*F  F
>
> What about this ?
>
>   iowait flag      B    D    B    D    B    D
>   resulting boost  F    2*F  F    2*F  F    2*F

Yes, I guess so, but this oscillation can still happen in the current
code, I think.

>
> Isn't this the worst behavior we may wish for ?
>
>> What will not work is C, but as I mentioned in my last email, that
>> would cause us to delay the iowait boost halving by at most 1 cycle.
>> Is that really an issue, considering we are starting from min rather
>> than from max? Note that cases A. and B. still work.
>>
>> Considering the following cases:
>>
>> (1) The min freq is 800MHz, and it takes up to 4 cycles to reach 4GHz
>> with the flag set. At that point it is likely we will run for many
>> more cycles, which means keeping the boost active for 1 extra cycle
>> isn't that big a deal. Even if we run for just 5 cycles with the
>> boost, only the last cycle suffers from C not decaying as soon as
>> possible. Compared to the current code, where we run at max from the
>> first cycle, that's not bad at all.
>>
>> (2) We have a transient type of load. In this case we're probably not
>> reaching the full max immediately, so even if we delay the decay, it
>> is still not as bad as what we have currently.
>>
>> Considering that the code is way cleaner than any other approach, I
>> think it's a small price to pay. Also keep in mind that this patch is
>> still an improvement over the current spike: even though, as you
>> said, it's still a bit spiky, it's still better, right?
>>
>> Functionally the code is working and I think it is also clean, but if
>> you feel it is still confusing, I'm open to rewriting it.
>
> I am not worried about being boosted for a bit more time, but about
> the spikes even when we do not want a freq change.
>
>> > And so in my initial solution I reversed the order in
>> > sugov_iowait_boost().
>>
>> Yes, but to fix A. you had to divide by 2 in sugov_set_iowait_boost()
>> and then multiply by 2 later in sugov_iowait_boost() to keep the
>> first boost at min. That IMO was confusing, so my modified patch did
>> it differently.
>
> Yeah, it wasn't great for sure.
>
> I hope the following one will work for everyone?
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 45fcf21ad685..ceac5f72d8da 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -53,6 +53,7 @@ struct sugov_cpu {
>  	struct update_util_data update_util;
>  	struct sugov_policy *sg_policy;
>
> +	bool iowait_boost_pending;
>  	unsigned long iowait_boost;
>  	unsigned long iowait_boost_max;
>  	u64 last_update;
> @@ -169,7 +170,17 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
>  			       unsigned int flags)
>  {
>  	if (flags & SCHED_CPUFREQ_IOWAIT) {
> -		sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
> +		if (sg_cpu->iowait_boost_pending)
> +			return;
> +
> +		sg_cpu->iowait_boost_pending = true;
> +
> +		if (sg_cpu->iowait_boost) {
> +			sg_cpu->iowait_boost = min(sg_cpu->iowait_boost << 1,
> +						   sg_cpu->iowait_boost_max);
> +		} else {
> +			sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
> +		}

I would prefer this to be:

	if (sg_cpu->iowait_boost >= sg_cpu->sg_policy->policy->min) {
		/* double it */
	} else {
		/* set it to min */
	}

This is for the case where the boost ramps all the way up and gets
capped at max: when it later decays back, it does not decay exactly to
Fmin but to something below it, so on the next boost we should start
from Fmin again.
>  	} else if (sg_cpu->iowait_boost) {
>  		s64 delta_ns = time - sg_cpu->last_update;
>
> @@ -182,17 +193,23 @@ static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
>  static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
>  			       unsigned long *max)
>  {
> -	unsigned long boost_util = sg_cpu->iowait_boost;
> -	unsigned long boost_max = sg_cpu->iowait_boost_max;
> +	unsigned long boost_util, boost_max;
>
> -	if (!boost_util)
> +	if (!sg_cpu->iowait_boost)
>  		return;
>
> +	if (sg_cpu->iowait_boost_pending)
> +		sg_cpu->iowait_boost_pending = false;
> +	else
> +		sg_cpu->iowait_boost >>= 1;
> +
> +	boost_util = sg_cpu->iowait_boost;
> +	boost_max = sg_cpu->iowait_boost_max;
> +
>  	if (*util * boost_max < *max * boost_util) {
>  		*util = boost_util;
>  		*max = boost_max;

This looks good to me and is roughly what I had in mind. I can spend
some time testing it soon. Just to be clear: if I repost this patch
after testing, should I keep your authorship and add my Tested-by, or
do you prefer something else?

thanks,

-Joel