Date: Wed, 11 Apr 2018 15:33:58 +0100
From: Patrick Bellasi
To: Vincent Guittot
Cc: Peter Zijlstra, linux-kernel, "open list:THERMAL", Ingo Molnar,
    "Rafael J. Wysocki", Viresh Kumar, Juri Lelli, Joel Fernandes,
    Steve Muckle, Dietmar Eggemann, Morten Rasmussen
Subject: Re: [PATCH] sched/fair: schedutil: update only with all info available
Message-ID: <20180411143358.GO14248@e110439-lin>
References: <20180406172835.20078-1-patrick.bellasi@arm.com>
 <20180410110412.GG14248@e110439-lin>
 <20180411101517.GL14248@e110439-lin>

On 11-Apr 13:56, Vincent Guittot wrote:
> On 11 April 2018 at 12:15, Patrick Bellasi wrote:
> > On 11-Apr 08:57, Vincent Guittot wrote:
> >> On 10 April 2018 at 13:04, Patrick Bellasi wrote:
> >> > On 09-Apr 10:51, Vincent Guittot wrote:
> >> >> On 6 April 2018 at 19:28, Patrick Bellasi wrote:
> >> >> Peter,
> >> >> what was your goal with adding the condition "if
> >> >> (rq->cfs.h_nr_running)" for the aggregation of CFS utilization?
> >> >
> >> > The original intent was to get rid of the sched class flags, used to
> >> > track which class has tasks runnable, from within schedutil. The
> >> > reason was to solve some misalignment between scheduler class status
> >> > and schedutil status.
> >>
> >> This was mainly for RT tasks, but it was not the case for CFS tasks
> >> before commit 8f111bc357aa
> >
> > True, but with his solution Peter has actually come up with a unified
> > interface which is now (and can be, IMO) based just on RUNNABLE
> > counters for each class.
>
> But do we really want to only take care of the runnable counters for
> all classes?

Perhaps, once we have PELT RT support with your patches, we can consider
blocked utilization also for those tasks...
However, we can also argue for a policy where we trigger updates based on
RUNNABLE counters and it is then up to schedutil to decide for how long to
ignore a frequency drop, using a step-down holding timer similar to what
we already have. I can also see a potentially interesting per-task tuning
of such a policy: for example, for certain tasks we may want a longer
down-scale holding time, which could instead be shorter when only
"background" tasks are currently active.

> >> > The solution, initially suggested by Viresh, and finally proposed by
> >> > Peter, was to exploit RQ knowledge directly from within schedutil.
> >> >
> >> > The problem is that now the schedutil update depends on two pieces
> >> > of information: utilization changes and the number of RT and CFS
> >> > runnable tasks.
> >> >
> >> > Thus, using cfs_rq::h_nr_running is not the problem... it's actually
> >> > part of a much cleaner solution than the code we used to have.
> >>
> >> So there are 2 problems there:
> >> - using cfs_rq::h_nr_running when aggregating CFS utilization, which
> >>   generates a lot of frequency drops
> >
> > You mean because we now completely disregard the blocked utilization
> > when a CPU is idle, right?
>
> yes
>
> > Given how PELT works and the recent support for updating IDLE CPUs, we
> > should probably always add the contribution of the CFS class.
> >
> >> - making sure that the nr_running counters are up-to-date when used in
> >>   schedutil
> >
> > Right... but, if we always add the cfs_rq contribution (to always
> > account for blocked utilization), we don't have this last dependency
> > anymore, do we?
>
> yes
>
> > We still have to account for the util_est dependency.
> >
> > Should I add a patch to this series to disregard cfs_rq::h_nr_running
> > from schedutil, as you suggested?
> It's probably better to have a separate patch, as these are 2 different
> topics:
> - when updating cfs_rq::h_nr_running and when calling cpufreq_update_util
> - should we use runnable or running utilization for CFS

Yes, well... since OSPM is just next week, we can also have a better
discussion there and decide by then.

What is true so far is that using RUNNABLE is a change with respect to the
previous behavior, which unfortunately went unnoticed so far.

-- 
#include <best/regards.h>

Patrick Bellasi