Date: Wed, 30 May 2018 09:37:15 +0100
From: Quentin Perret
To: Juri Lelli
Cc: Vincent Guittot, peterz@infradead.org, mingo@kernel.org,
	linux-kernel@vger.kernel.org, rjw@rjwysocki.net, dietmar.eggemann@arm.com,
	Morten.Rasmussen@arm.com, viresh.kumar@linaro.org, valentin.schneider@arm.com
Subject: Re: [PATCH v5 05/10] cpufreq/schedutil: get max utilization
Message-ID: <20180530083715.GB2174@e108498-lin.cambridge.arm.com>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
	<1527253951-22709-6-git-send-email-vincent.guittot@linaro.org>
	<20180529084009.GE15173@e108498-lin.cambridge.arm.com>
	<20180529095203.GD8985@localhost.localdomain>
In-Reply-To: <20180529095203.GD8985@localhost.localdomain>

On Tuesday 29 May 2018 at 11:52:03 (+0200), Juri Lelli wrote:
> On 29/05/18 09:40, Quentin Perret wrote:
> > Hi Vincent,
> >
> > On Friday 25 May 2018 at 15:12:26 (+0200), Vincent Guittot wrote:
> > > Now that we have both the dl class bandwidth requirement and the dl class
> > > utilization, we can use the max of the 2 values when aggregating the
> > > utilization of the CPU.
> > >
> > > Signed-off-by: Vincent Guittot
> > > ---
> > >  kernel/sched/sched.h | 6 +++++-
> > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > > index 4526ba6..0eb07a8 100644
> > > --- a/kernel/sched/sched.h
> > > +++ b/kernel/sched/sched.h
> > > @@ -2194,7 +2194,11 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
> > >  #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
> > >  static inline unsigned long cpu_util_dl(struct rq *rq)
> > >  {
> > > -	return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> > > +	unsigned long util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
> > > +
> > > +	util = max_t(unsigned long, util, READ_ONCE(rq->avg_dl.util_avg));
> >
> > Would it make sense to use a UTIL_EST version of that signal here? I
> > don't think that would make sense for the RT class with your patch-set
> > since you only really use the blocked part of the signal for RT IIUC,
> > but would that work for DL?
>
> Well, UTIL_EST for DL looks pretty much like what we already do by
> computing utilization based on dl.running_bw. That's why I was thinking
> of using that as a starting point for the dl.util_avg decay phase.

Hmmm I see your point, but running_bw and util_avg are fundamentally
different ... I mean, util_avg doesn't know about the period, which is an
issue in itself I guess ... If you have a long-running DL task (say 100ms
runtime) with a long period (say 1s), running_bw should represent ~1/10 of
the CPU capacity, but util_avg can go quite high, which means that you
might end up executing this task at max OPP. So if we really want to drive
OPPs like that for deadline, a util_est-like version of this util_avg
signal should help. Now, you can also argue that going to max OPP for a
task that _we know_ uses 1/10 of the CPU capacity isn't right ...
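To put rough numbers on that 100ms-runtime / 1s-period example, here is a
minimal userspace sketch (not kernel code, just the arithmetic) assuming
the kernel's BW_SHIFT = 20, SCHED_CAPACITY_SCALE = 1024 and the default
~32ms PELT half-life:

#include <math.h>
#include <stdio.h>

#define BW_SHIFT		20
#define SCHED_CAPACITY_SCALE	1024UL

int main(void)
{
	unsigned long runtime_ms = 100, period_ms = 1000;

	/* Bandwidth-based utilization, as cpu_util_dl() derives it from
	 * running_bw (runtime/period in fixed point, << BW_SHIFT). */
	unsigned long running_bw = (runtime_ms << BW_SHIFT) / period_ms;
	unsigned long util_bw = (running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;

	/* Roughly where PELT's util_avg lands after the task has been
	 * running continuously for its whole 100ms runtime, starting from
	 * zero: util_avg(t) ~= 1024 * (1 - 2^(-t / 32ms)). */
	double util_avg = SCHED_CAPACITY_SCALE *
			  (1.0 - pow(2.0, -(double)runtime_ms / 32.0));

	printf("util from running_bw:  %lu / 1024\n", util_bw);    /* ~102 */
	printf("util_avg after 100ms: ~%.0f / 1024\n", util_avg);  /* ~907 */

	return 0;
}

So running_bw says the task needs ~10% of the CPU, while util_avg peaks
around ~90% just before the task blocks, which is what can push schedutil
to the max OPP in the scenario above.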