Date: Tue, 1 Mar 2016 21:26:24 +0100
Subject: Re: [RFC/RFT][PATCH 1/1] cpufreq: New governor using utilization data from the scheduler
From: "Rafael J. Wysocki"
To: Juri Lelli
Cc: "Rafael J. Wysocki", Linux PM list, Linux Kernel Mailing List, Viresh Kumar, Srinivas Pandruvada, Peter Zijlstra, Steve Muckle, Ingo Molnar
In-Reply-To: <20160301145649.GM18792@e106622-lin>

On Tue, Mar 1, 2016 at 3:56 PM, Juri Lelli wrote:
> On 26/02/16 03:36, Rafael J. Wysocki wrote:
>> On Thursday, February 25, 2016 11:01:20 AM Juri Lelli wrote:
>
> [...]
>
>> >
>> > That is right.  But can't a higher-priority class eat all of the needed
>> > capacity?  I mean, suppose that both CFS and DL need 30% of CPU capacity
>> > on the same CPU.  DL wins and gets its 30% of capacity.  When CFS gets to
>> > run, it's too late to request anything more (w.r.t. the same time
>> > window).  If we somehow aggregated the requests instead, we could request
>> > 60% and both classes would have the capacity they need to run.  It seems
>> > to me that this is what governors were already doing by using the
>> > 1 - idle metric.
>>
>> That's interesting, because it is about a few different things at a time.
>> :-)
>>
>> So first of all, the "old" governors only collect information about what
>> happened in the past and make decisions on that basis (kind of in the hope
>> that what happened once will happen again), while the idea behind what
>> you're describing seems to be to attempt to project future capacity needs
>> and use that to make decisions (just for the very near future, but that
>> should be sufficient).  If successful, that would be the most suitable
>> approach IMO.
>>
>
> Right, this is a key difference.
>
>> Of course, the $subject patch is not aspiring to anything of that kind.
>> It only uses information about current needs that's already available to
>> it in a very straightforward way.
>>
>
> But using the utilization of CFS tasks (based on PELT) already carries
> some notion of "future needs" (even if it is true that tasks may have
> phases).  And this will be true for DL as well, once we have a
> corresponding utilization signal that we can consume.  I think you are
> already consuming information about the future in some sense. :-)

That's because the already available numbers include that information.  I
don't do any projections myself.

>> But there's more to it.  In the sampling, or rate-limiting if you will,
>> situation you really have a window in which many things can happen, and
>> making a good decision at the beginning of it is important.  However, if
>> you can just handle *every* request and really switch frequencies on the
>> fly, then each of them may come with a "currently needed capacity" number
>> and you can just give it what it asks for every time.
>>
>
> True.  Rate-limiting poses interesting problems.
>
>> My point is that there are quite a few things to consider here and I'm
>> expecting a learning process to happen before we are happy with what we
>> have.
>> So my approach would be (and is) to start very simple and then add more
>> complexity over time as needed, instead of trying to address every issue
>> I can think of from the outset.
>>
>
> I perfectly understand that, and I agree that there is value in starting
> simple.  I simply fear that aggregation of utilization signals will be
> one of the few things that will pop up fairly soon. :-)

That's OK.  If it is demonstrably better than the super-simple initial
approach, there won't be any reason to reject it.

Thanks,
Rafael