References: <20210604080954.13915-1-lukasz.luba@arm.com> <20210604080954.13915-2-lukasz.luba@arm.com> <8f4156a7-46ca-361d-bcb7-1cbdc860ef37@arm.com>
In-Reply-To: <8f4156a7-46ca-361d-bcb7-1cbdc860ef37@arm.com>
From: Vincent Guittot
Date: Thu, 10 Jun 2021 11:41:35 +0200
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy
To: Lukasz Luba
Cc: linux-kernel, "open list:THERMAL", Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar, Quentin Perret, Dietmar Eggemann, Vincent Donnefort, Beata Michalska, Ingo Molnar, Juri Lelli, Steven Rostedt, segall@google.com, Mel Gorman, Daniel Bristot de Oliveira

On Thu, 10 Jun 2021 at 11:36, Lukasz Luba wrote:
>
> On 6/10/21 10:11 AM, Vincent Guittot wrote:
> > On Thu, 10 Jun 2021 at 10:42, Lukasz Luba wrote:
> >>
> >> On 6/10/21 8:59 AM, Vincent Guittot wrote:
> >>> On Fri, 4 Jun 2021 at 10:10, Lukasz Luba wrote:
> >>>>
> >>>> Energy Aware Scheduling (EAS) needs to be able to predict the frequency
> >>>> requests made by the SchedUtil governor to properly estimate the energy
> >>>> used in the future. It has to take into account the CPUs' utilization
> >>>> and forecast the Performance Domain (PD) frequency. There is a corner
> >>>> case when the max allowed frequency might be reduced due to thermal.
> >>>> SchedUtil is aware of that reduced frequency, so it should also be
> >>>> taken into account in the EAS estimations.
> >>>>
> >>>> SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
> >>>> a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
> >>>> to 'policy::max'. SchedUtil is responsible for respecting that upper
> >>>> limit while setting the frequency through CPUFreq drivers. This
> >>>> effective frequency is stored internally in 'sugov_policy::next_freq'
> >>>> and EAS has to predict that value.
> >>>>
> >>>> In the existing code the raw value of arch_scale_cpu_capacity() is used
> >>>> for clamping the returned CPU utilization from effective_cpu_util().
> >>>> This patch fixes the issue of too large a single-CPU utilization value
> >>>> by clamping to the allowed CPU capacity. The allowed CPU capacity is
> >>>> the CPU capacity reduced by the thermal pressure signal. We rely on
> >>>> this load-avg geometric series in a similar way as other mechanisms
> >>>> in the scheduler.
> >>>>
> >>>> Thanks to the knowledge of the allowed CPU capacity, we don't get an
> >>>> overly large value for a single CPU's utilization, which is then added
> >>>> to the util sum. The util sum is used as a source of information for
> >>>> estimating the whole PD energy. To avoid a wrong energy estimation in
> >>>> EAS (due to the capped frequency), make sure that the calculation of
> >>>> the util sum is aware of the allowed CPU capacity.
> >>>>
> >>>> Signed-off-by: Lukasz Luba
> >>>> ---
> >>>>  kernel/sched/fair.c | 17 ++++++++++++++---
> >>>>  1 file changed, 14 insertions(+), 3 deletions(-)
> >>>>
> >>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >>>> index 161b92aa1c79..1aeddecabc20 100644
> >>>> --- a/kernel/sched/fair.c
> >>>> +++ b/kernel/sched/fair.c
> >>>> @@ -6527,6 +6527,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> >>>>         struct cpumask *pd_mask = perf_domain_span(pd);
> >>>>         unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
> >>>>         unsigned long max_util = 0, sum_util = 0;
> >>>> +       unsigned long _cpu_cap = cpu_cap;
> >>>>         int cpu;
> >>>>
> >>>>         /*
> >>>> @@ -6558,14 +6559,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> >>>>                         cpu_util_next(cpu, p, -1) + task_util_est(p);
> >>>>         }
> >>>>
> >>>> +       /*
> >>>> +        * Take the thermal pressure from non-idle CPUs. They have
> >>>> +        * most up-to-date information. For idle CPUs thermal pressure
> >>>> +        * signal is not updated so often.
> >>>
> >>> What do you mean by "not updated so often"? Do you have a value?
> >>>
> >>> Thermal pressure is updated at the same rate as other PELT values of
> >>> an idle CPU. Why is it a problem there?
> >>>
> >>
> >> For an idle CPU the value is updated 'remotely' by some other CPU
> >> running nohz_idle_balance(). That goes into
> >> update_blocked_averages() if the flags and checks are OK inside
> >> update_nohz_stats(). Sometimes this is not called
> >> because other_have_blocked() returned false. It can happen for a long
>
> > So I missed that you were in a loop and that the below was called for
> > each CPU, with _cpu_cap being overwritten:
> >
> > +               if (!idle_cpu(cpu))
> > +                       _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
> >
> > But that also means that if the first CPUs of the PD are idle, they
> > will use the original capacity whereas the other ones will remove the
> > thermal pressure. Isn't this a problem? You don't use the same capacity
> > for all CPUs in the performance domain with regard to thermal pressure.
>
> True, but in the experiments I haven't observed idle CPUs still having
> a big util (bigger than _cpu_cap). It has decayed already, so it's not
> a problem for idle CPUs.

But it is a problem, because the behavior is random: some idle CPUs
will use the original capacity whereas others will use the capped value
set by non-idle CPUs. You must have consistent behavior across all
idle CPUs.

Then, if it's not a problem, why add the if (!idle_cpu(cpu)) check at all?

>
> Although, it might be that my test case didn't trigger something.
> Is it worth adding a loop above this one, to be 100% sure, and getting
> the thermal pressure signal from some running CPU?
> Then apply the same value always inside the 2nd loop?
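
For reference, a rough, untested sketch of that idea: sample the thermal
pressure once per performance domain (preferring a non-idle CPU, falling
back to the first CPU's signal if the whole PD is idle) and clamp every
CPU against the same allowed capacity. pd_allowed_capacity() is only an
illustrative name; the logic could just as well stay inline in
compute_energy(). It only reuses helpers already present in the patch
(arch_scale_cpu_capacity(), thermal_load_avg(), idle_cpu()):

/* Illustrative sketch, meant to sit next to compute_energy() in kernel/sched/fair.c. */
static unsigned long pd_allowed_capacity(struct cpumask *pd_mask)
{
        int first = cpumask_first(pd_mask);
        unsigned long cpu_cap = arch_scale_cpu_capacity(first);
        /* Default to the first CPU's signal in case the whole PD is idle. */
        unsigned long th_pressure = thermal_load_avg(cpu_rq(first));
        int cpu;

        /* Prefer the signal from a running CPU; it is the most recent one. */
        for_each_cpu(cpu, pd_mask) {
                if (!idle_cpu(cpu)) {
                        th_pressure = thermal_load_avg(cpu_rq(cpu));
                        break;
                }
        }

        return cpu_cap - th_pressure;
}

compute_energy() would then set _cpu_cap = pd_allowed_capacity(pd_mask)
once, before walking the CPUs, so idle and non-idle CPUs of the PD are
clamped against the same value.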