From: Vincent Guittot
Date: Mon, 28 May 2018 18:34:33 +0200
Subject: Re: [PATCH v5 05/10] cpufreq/schedutil: get max utilization
To: Juri Lelli
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, "Rafael J. Wysocki",
 Dietmar Eggemann, Morten Rasmussen, viresh kumar, Valentin Schneider,
 Quentin Perret, Luca Abeni, Claudio Scordino, Joel Fernandes,
 Alessio Balsini
In-Reply-To: <20180528152243.GD1293@localhost.localdomain>
References: <1527253951-22709-1-git-send-email-vincent.guittot@linaro.org>
 <1527253951-22709-6-git-send-email-vincent.guittot@linaro.org>
 <20180528101234.GA1293@localhost.localdomain>
 <20180528152243.GD1293@localhost.localdomain>
X-Mailing-List: linux-kernel@vger.kernel.org

Wysocki" , Dietmar Eggemann , Morten Rasmussen , viresh kumar , Valentin Schneider , Quentin Perret , Luca Abeni , Claudio Scordino , Joel Fernandes , Alessio Balsini Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 28 May 2018 at 17:22, Juri Lelli wrote: > On 28/05/18 16:57, Vincent Guittot wrote: >> Hi Juri, >> >> On 28 May 2018 at 12:12, Juri Lelli wrote: >> > Hi Vincent, >> > >> > On 25/05/18 15:12, Vincent Guittot wrote: >> >> Now that we have both the dl class bandwidth requirement and the dl class >> >> utilization, we can use the max of the 2 values when agregating the >> >> utilization of the CPU. >> >> >> >> Signed-off-by: Vincent Guittot >> >> --- >> >> kernel/sched/sched.h | 6 +++++- >> >> 1 file changed, 5 insertions(+), 1 deletion(-) >> >> >> >> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h >> >> index 4526ba6..0eb07a8 100644 >> >> --- a/kernel/sched/sched.h >> >> +++ b/kernel/sched/sched.h >> >> @@ -2194,7 +2194,11 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {} >> >> #ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL >> >> static inline unsigned long cpu_util_dl(struct rq *rq) >> >> { >> >> - return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT; >> >> + unsigned long util = (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT; >> > >> > I'd be tempted to say the we actually want to cap to this one above >> > instead of using the max (as you are proposing below) or the >> > (theoretical) power reduction benefits of using DEADLINE for certain >> > tasks might vanish. >> >> The problem that I'm facing is that the sched_entity bandwidth is >> removed after the 0-lag time and the rq->dl.running_bw goes back to >> zero but if the DL task has preempted a CFS task, the utilization of >> the CFS task will be lower than reality and schedutil will set a lower >> OPP whereas the CPU is always running. The example with a RT task >> described in the cover letter can be run with a DL task and will give >> similar results. >> avg_dl.util_avg tracks the utilization of the rq seen by the scheduler >> whereas rq->dl.running_bw gives the minimum to match DL requirement. > > Mmm, I see. Note that I'm only being cautious, what you propose might > work OK, but it seems to me that we might lose some of the benefits of > running tasks with DEADLINE if we start selecting frequency as you > propose even when such tasks are running. I see your point. Taking into account the number cfs running task to choose between rq->dl.running_bw and avg_dl.util_avg could help > > An idea might be to copy running_bw util into dl.util_avg when a DL task > goes to sleep, and then decay the latter as for RT contribution. What > you think? Not sure that this will work because you will overwrite the value each time a DL task goes to sleep and the decay will mainly happen on the update when last DL task goes to sleep which might not reflect what has been used by DL tasks but only the requirement of the last running DL task. This other interest of the PELT is to have an utilization tracking which uses the same curve as CFS so the increase of cfs_rq->avg.util_avg and the decrease of avg_dl.util_avg will compensate themselves (or the opposite)