Date: Mon, 13 Aug 2018 13:49:11 +0100
From: Patrick Bellasi
To: Vincent Guittot
Cc: Juri Lelli, linux-kernel, "open list:THERMAL", Ingo Molnar,
	Peter Zijlstra, Tejun Heo, "Rafael J.
Wysocki", viresh kumar, Paul Turner, Dietmar Eggemann,
	Morten Rasmussen, Todd Kjos, Joel Fernandes, Steve Muckle,
	Suren Baghdasaryan
Subject: Re: [PATCH v3 06/14] sched/cpufreq: uclamp: add utilization clamping for RT tasks
Message-ID: <20180813124911.GD2605@e110439-lin>
References: <20180806163946.28380-1-patrick.bellasi@arm.com>
	<20180806163946.28380-7-patrick.bellasi@arm.com>
	<20180807132630.GB3062@localhost.localdomain>
	<20180809153423.nsoepprhut3dv4u2@darkstar>
	<20180813101221.GA2605@e110439-lin>

On 13-Aug 14:07, Vincent Guittot wrote:
> On Mon, 13 Aug 2018 at 12:12, Patrick Bellasi wrote:
> >
> > Hi Vincent!
> >
> > On 09-Aug 18:03, Vincent Guittot wrote:
> > > > On 07-Aug 15:26, Juri Lelli wrote:
> > > >
> > > > [...]
> > > >
> > > > > > +	util_cfs = cpu_util_cfs(rq);
> > > > > > +	util_rt  = cpu_util_rt(rq);
> > > > > > +	if (sched_feat(UCLAMP_SCHED_CLASS)) {
> > > > > > +		util = 0;
> > > > > > +		if (util_cfs)
> > > > > > +			util += uclamp_util(cpu_of(rq), util_cfs);
> > > > > > +		if (util_rt)
> > > > > > +			util += uclamp_util(cpu_of(rq), util_rt);
> > > > > > +	} else {
> > > > > > +		util  = cpu_util_cfs(rq);
> > > > > > +		util += cpu_util_rt(rq);
> > > > > > +		util  = uclamp_util(cpu_of(rq), util);
> > > > > > +	}
> > > >
> > > > Regarding the two policies, do you have any comment?
> > >
> > > Does the policy for (sched_feat(UCLAMP_SCHED_CLASS) == true) really
> > > make sense as it is?
> > > I mean, uclamp_util doesn't make any difference between RT and CFS
> > > tasks when clamping the utilization, so why should we add the
> > > returned value twice?
> > > IMHO, this policy would make sense if there were something like
> > > uclamp_util_rt() and uclamp_util_cfs().
> >
> > The idea behind the UCLAMP_SCHED_CLASS policy is to improve fairness
> > for low-priority classes, especially when there is high RT
> > utilization.
> >
> > Let's say we have:
> >
> >   util_rt  = 40%, util_min =  0%
> >   util_cfs = 10%, util_min = 50%
> >
> > the two policies will select:
> >
> >    UCLAMP_SCHED_CLASS: util = uclamp(40) + uclamp(10) = 50 + 50 = 100%
> >   !UCLAMP_SCHED_CLASS: util = uclamp(40 + 10) = uclamp(50) = 50%
> >
> > Which means that, despite the CPU's util_min being set to 50% while
> > CFS is running, these tasks will get almost no boost at all, since
> > their bandwidth margin is eclipsed by the RT tasks.
>
> Hmm ... At the opposite end, the same happens even if there is no
> running RT task but only some remaining blocked RT utilization:
> even with
>
>   util_rt  = 10%, util_min =  0%
>   util_cfs = 40%, util_min = 50%
>
> UCLAMP_SCHED_CLASS gives: util = uclamp(10) + uclamp(40) = 50 + 50 = 100%

Yes, that's true... since right now I clamp util_rt whenever it's non
zero. Perhaps this can be fixed by clamping util_rt only:

	if (rt_rq_is_runnable(&rq->rt))

?

> So a CFS task can get double boosted by a small RT task.

Well, in principle we don't know whether the 50% clamp was asserted by
the RT or the CFS task, since in the current implementation we
max-aggregate clamp values across all RT and CFS tasks.

> Furthermore, if there is no RT task but two CFS tasks of 40% and 10%,
> UCLAMP_SCHED_CLASS gives: util = uclamp(0) + uclamp(40) = 50 = 50%

True, but here we are within the same class, and what utilization
clamping aims to do is to define the minimum capacity to run _all_ the
RUNNABLE tasks... not the minimum capacity for _each_ one of them.

> So in this case the CFS tasks don't get more boost and have to share
> the bandwidth, and you don't ensure 50% for each, unlike what you try
> to do for RT.

Above I'm not trying to fix a per-task issue.
The UCLAMP_SCHED_CLASS policy is just "trying" to fix a cross-class
issue... if we agree there can be a cross-class issue worth fixing.

> You create a difference in behavior depending on the class of the
> other co-scheduled tasks, which is not sane IMHO.

Yes, I agree that the current behavior is not completely clean... still
the question is: do you acknowledge the problem I depicted above, i.e.
RT workloads eclipsing the util_min required by lower-priority classes?

To a certain extent I see this problem as similar to the rt/dl/irq
pressure factored into cpu_capacity, isn't it?

Maybe we can make use of (cpu_capacity_orig - cpu_capacity) to factor
in a util_min compensation for CFS tasks?

-- 
#include 

Patrick Bellasi