Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy
From: Lukasz Luba
To: Vincent Guittot
Cc: linux-kernel, open list:THERMAL, Peter Zijlstra, Rafael J. Wysocki,
  Viresh Kumar, Quentin Perret, Dietmar Eggemann, Vincent Donnefort,
  Beata Michalska, Ingo Molnar, Juri Lelli, Steven Rostedt,
  segall@google.com, Mel Gorman, Daniel Bristot de Oliveira
Date: Thu, 10 Jun 2021 10:36:42 +0100
Message-ID: <8f4156a7-46ca-361d-bcb7-1cbdc860ef37@arm.com>
References: <20210604080954.13915-1-lukasz.luba@arm.com> <20210604080954.13915-2-lukasz.luba@arm.com>
Wysocki" , Viresh Kumar , Quentin Perret , Dietmar Eggemann , Vincent Donnefort , Beata Michalska , Ingo Molnar , Juri Lelli , Steven Rostedt , segall@google.com, Mel Gorman , Daniel Bristot de Oliveira References: <20210604080954.13915-1-lukasz.luba@arm.com> <20210604080954.13915-2-lukasz.luba@arm.com> From: Lukasz Luba Message-ID: <8f4156a7-46ca-361d-bcb7-1cbdc860ef37@arm.com> Date: Thu, 10 Jun 2021 10:36:42 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.9.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US Content-Transfer-Encoding: 7bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 6/10/21 10:11 AM, Vincent Guittot wrote: > On Thu, 10 Jun 2021 at 10:42, Lukasz Luba wrote: >> >> >> >> On 6/10/21 8:59 AM, Vincent Guittot wrote: >>> On Fri, 4 Jun 2021 at 10:10, Lukasz Luba wrote: >>>> >>>> Energy Aware Scheduling (EAS) needs to be able to predict the frequency >>>> requests made by the SchedUtil governor to properly estimate energy used >>>> in the future. It has to take into account CPUs utilization and forecast >>>> Performance Domain (PD) frequency. There is a corner case when the max >>>> allowed frequency might be reduced due to thermal. SchedUtil is aware of >>>> that reduced frequency, so it should be taken into account also in EAS >>>> estimations. >>>> >>>> SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of >>>> a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping >>>> to 'policy::max'. SchedUtil is responsible to respect that upper limit >>>> while setting the frequency through CPUFreq drivers. This effective >>>> frequency is stored internally in 'sugov_policy::next_freq' and EAS has >>>> to predict that value. >>>> >>>> In the existing code the raw value of arch_scale_cpu_capacity() is used >>>> for clamping the returned CPU utilization from effective_cpu_util(). >>>> This patch fixes issue with too big single CPU utilization, by introducing >>>> clamping to the allowed CPU capacity. The allowed CPU capacity is a CPU >>>> capacity reduced by thermal pressure signal. We rely on this load avg >>>> geometric series in similar way as other mechanisms in the scheduler. >>>> >>>> Thanks to knowledge about allowed CPU capacity, we don't get too big value >>>> for a single CPU utilization, which is then added to the util sum. The >>>> util sum is used as a source of information for estimating whole PD energy. >>>> To avoid wrong energy estimation in EAS (due to capped frequency), make >>>> sure that the calculation of util sum is aware of allowed CPU capacity. 
>>>> >>>> Signed-off-by: Lukasz Luba >>>> --- >>>> kernel/sched/fair.c | 17 ++++++++++++++--- >>>> 1 file changed, 14 insertions(+), 3 deletions(-) >>>> >>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c >>>> index 161b92aa1c79..1aeddecabc20 100644 >>>> --- a/kernel/sched/fair.c >>>> +++ b/kernel/sched/fair.c >>>> @@ -6527,6 +6527,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd) >>>> struct cpumask *pd_mask = perf_domain_span(pd); >>>> unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask)); >>>> unsigned long max_util = 0, sum_util = 0; >>>> + unsigned long _cpu_cap = cpu_cap; >>>> int cpu; >>>> >>>> /* >>>> @@ -6558,14 +6559,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd) >>>> cpu_util_next(cpu, p, -1) + task_util_est(p); >>>> } >>>> >>>> + /* >>>> + * Take the thermal pressure from non-idle CPUs. They have >>>> + * most up-to-date information. For idle CPUs thermal pressure >>>> + * signal is not updated so often. >>> >>> What do you mean by "not updated so often" ? Do you have a value ? >>> >>> Thermal pressure is updated at the same rate as other PELT values of >>> an idle CPU. Why is it a problem there ? >>> >> >> >> For idle CPU the value is updated 'remotely' by some other CPU >> running nohz_idle_balance(). That goes into >> update_blocked_averages() if the flags and checks are OK inside >> update_nohz_stats(). Sometimes this is not called >> because other_have_blocked() returned false. It can happen for a long > > So i miss that you were in a loop and the below was called for each > cpu and _cpu_cap was overwritten > > + if (!idle_cpu(cpu)) > + _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu)); > > But that also means that if the 1st cpus of the pd are idle, they will > use original capacity whereas the other ones will remove the thermal > pressure. Isn't this a problem ? You don't use the same capacity for > all cpus in the performance domain regarding the thermal pressure? True, but in the experiments for idle CPUs I haven't observed that they still have some big util (bigger than _cpu_cap). It decayed already, so it's not a problem for idle CPUs. Although, it might be my test case which didn't trigger something. Is it worth to add the loop above this one, to be 100% sure and get a thermal pressure signal from some running CPU? Then apply the same value always inside the 2nd loop?