Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy
To: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, peterz@infradead.org,
 rjw@rjwysocki.net, viresh.kumar@linaro.org, vincent.guittot@linaro.org,
 qperret@google.com, vincent.donnefort@arm.com, Beata.Michalska@arm.com,
 mingo@redhat.com, juri.lelli@redhat.com, rostedt@goodmis.org, segall@google.com,
 mgorman@suse.de, bristot@redhat.com
References: <20210604080954.13915-1-lukasz.luba@arm.com> <20210604080954.13915-2-lukasz.luba@arm.com> <2f2fc758-92c6-5023-4fcb-f9558bf3369e@arm.com>
From: Lukasz Luba
Message-ID: <905f1d29-50f9-32be-4199-fc17eab79d04@arm.com>
Date: Thu, 10 Jun 2021 10:04:52 +0100
In-Reply-To: <2f2fc758-92c6-5023-4fcb-f9558bf3369e@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 6/10/21 9:42 AM, Dietmar Eggemann wrote:

[snip]

> So essentially what you want to do is:
>
> Make EAS aware of the frequency clamping schedutil can be faced with:
>
>   get_next_freq() -> cpufreq_driver_resolve_freq() ->
>   clamp_val(target_freq, policy->min, policy->max) (1)
>
> by subtracting the CPU's Thermal Pressure (ThPr) signal from the original
> CPU capacity `arch_scale_cpu_capacity()` (2).
>
> ---
>
> Isn't there a conceptual flaw in this design? Let's say we have a
> big.LITTLE system with two cpufreq cooling devices and a thermal zone
> (something like Hikey 960). To create a ThPr scenario we have to run
> stuff on the CPUs (e.g. hackbench (3)).
> Eventually cpufreq_set_cur_state() [drivers/thermal/cpufreq_cooling.c]
> will set thermal_pressure to `(2) - (2)*freq/policy->cpuinfo.max_freq`
> and PELT will provide the ThPr signal via thermal_load_avg().
> But to create this scenario, the system will become overutilized
> (overutilization is system-wide: if one CPU is overutilized, the whole
> system is), so EAS is disabled (i.e. find_energy_efficient_cpu() and
> compute_energy() are not executed).

Not always; it depends on the thermal governor's decision, the workload
and the 'power actors' (in IPA naming convention). It also depends on
when and how hard you clamp the CPUs. The CPUs don't have to be
overutilized all the time; they might be only 50-70% utilized, but the
GPU reduced the power budget by 2 Watts, so the CPUs are left with only
1W. That is still OK for the CPUs, since they are only 'feeding' the
GPU with new 'jobs'.
> I can see that there are episodes in which EAS is running and
> thermal_load_avg() != 0, but those have to be when (3) has stopped and
> you see the ThPr signal just decaying (no accruing of new ThPr). The
> cpufreq cooling device can still issue cpufreq_set_cur_state(), but only
> with decreasing states.

It is true for some CPU-heavy workloads, when no other SoC components
are involved, like: GPU, DSP, ISP, encoders, etc. For other workloads,
when the CPUs don't have much to do but thermal pressure might still be
seen on them, this patch helps.

> ---
>
> IMHO, a precise description of how you envision the system setup,
> incorporating all participating subsystems, would be helpful here.

True; I hope the description above helps to understand the scenario.

>> Signed-off-by: Lukasz Luba
>> ---
>>  kernel/sched/fair.c | 17 ++++++++++++++---
>>  1 file changed, 14 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 161b92aa1c79..1aeddecabc20 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6527,6 +6527,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>>  	struct cpumask *pd_mask = perf_domain_span(pd);
>>  	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
>>  	unsigned long max_util = 0, sum_util = 0;
>> +	unsigned long _cpu_cap = cpu_cap;
>>  	int cpu;
>>
>>  	/*
>> @@ -6558,14 +6559,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>>  			cpu_util_next(cpu, p, -1) + task_util_est(p);
>>  	}
>>
>> +	/*
>> +	 * Take the thermal pressure from non-idle CPUs. They have
>> +	 * the most up-to-date information. For idle CPUs the thermal
>> +	 * pressure signal is not updated so often.
>> +	 */
>> +	if (!idle_cpu(cpu))
>> +		_cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
>> +
>
> This one is probably the result of the fact that the cpufreq cooling
> device sets the ThPr for all CPUs of the policy (Frequency Domain (FD)
> or Performance Domain (PD)), but PELT updates happen per-CPU. And
> only !idle CPUs get the update in scheduler_tick().
>
> Looks like thermal_pressure [per_cpu(thermal_pressure, cpu),
> drivers/base/arch_topology.c] set by cpufreq_set_cur_state() is always
> in sync with policy->max/cpuinfo_max_freq.
> So for your use case this instantaneous `signal` is better than the PELT
> one. It's precise (no decaying when the frequency clamping is already
> gone) and you avoid the per-CPU update issue.

Yes, this implementation tries to address those issues.