From: Lukasz Luba <lukasz.luba@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org, peterz@infradead.org, rjw@rjwysocki.net,
    viresh.kumar@linaro.org, vincent.guittot@linaro.org, qperret@google.com,
    dietmar.eggemann@arm.com, vincent.donnefort@arm.com, lukasz.luba@arm.com,
    Beata.Michalska@arm.com, mingo@redhat.com, juri.lelli@redhat.com,
    rostedt@goodmis.org, segall@google.com, mgorman@suse.de,
    bristot@redhat.com, thara.gopinath@linaro.org, amit.kachhap@gmail.com,
    amitk@kernel.org, rui.zhang@intel.com, daniel.lezcano@linaro.org
Subject: [PATCH v3 2/3] sched/fair: Take thermal pressure into account while estimating energy
Date: Thu, 10 Jun 2021 16:03:23 +0100
Message-Id: <20210610150324.22919-3-lukasz.luba@arm.com>
In-Reply-To: <20210610150324.22919-1-lukasz.luba@arm.com>
References: <20210610150324.22919-1-lukasz.luba@arm.com>

Energy Aware Scheduling (EAS) needs to be able to predict the frequency
requests made by the SchedUtil governor to properly estimate energy used
in the future. It has to take into account CPU utilization and forecast
the Performance Domain (PD) frequency. There is a corner case when the
max allowed frequency might be reduced due to thermal pressure. SchedUtil
is aware of that reduced frequency, so it should be taken into account in
the EAS estimations as well.

SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping to
'policy::max'. SchedUtil is responsible for respecting that upper limit
while setting the frequency through CPUFreq drivers. This effective
frequency is stored internally in 'sugov_policy::next_freq' and EAS has
to predict that value.

In the existing code the raw value of arch_scale_cpu_capacity() is used
for clamping the CPU utilization returned by effective_cpu_util(). This
patch fixes the issue of a single CPU's utilization being too large by
clamping it to the allowed CPU capacity, i.e. the CPU capacity reduced
by the thermal pressure signal. We rely on the load-avg geometric series
of that signal in a similar way to other mechanisms in the scheduler.

Thanks to the knowledge of the allowed CPU capacity, we no longer get an
overly large value for a single CPU's utilization, which is then added
to the util sum. The util sum is used as a source of information for
estimating the energy of the whole PD. To avoid wrong energy estimations
in EAS (due to a capped frequency), make sure that the calculation of
the util sum is aware of the allowed CPU capacity.

This thermal pressure might be visible in scenarios where the CPUs are
not heavily loaded, but some other component (like a GPU) has
drastically reduced the available power budget and increased the SoC
temperature. Thus, we can still use EAS for task placement and the CPUs
are not over-utilized.

Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 kernel/sched/fair.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..237726217dad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 	struct cpumask *pd_mask = perf_domain_span(pd);
 	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
 	unsigned long max_util = 0, sum_util = 0;
+	unsigned long _cpu_cap, thermal_pressure;
 	int cpu;
 
+	thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
+	_cpu_cap = cpu_cap - thermal_pressure;
+
 	/*
 	 * The capacity state of CPUs of the current rd can be driven by CPUs
 	 * of another rd if they belong to the same pd. So, account for the
@@ -6564,8 +6568,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * is already enough to scale the EM reported power
 		 * consumption at the (eventually clamped) cpu_capacity.
 		 */
-		sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
-					       ENERGY_UTIL, NULL);
+		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
+					      ENERGY_UTIL, NULL);
+
+		sum_util += min(cpu_util, _cpu_cap);
 
 		/*
 		 * Performance domain frequency: utilization clamping
@@ -6576,7 +6582,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 */
 		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
 					      FREQUENCY_UTIL, tsk);
-		max_util = max(max_util, cpu_util);
+		max_util = max(max_util, min(cpu_util, _cpu_cap));
 	}
 
 	return em_cpu_energy(pd->em_pd, max_util, sum_util);
-- 
2.17.1
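
For readers following along outside the kernel tree, here is a minimal
userspace sketch in C of the clamping this patch introduces. It is
illustrative only, not kernel code: the capacity, thermal-pressure and
per-CPU utilization numbers are made up, and the MIN/MAX macros stand in
for the kernel's min()/max() helpers. In the real code the per-CPU values
come from effective_cpu_util() and the pressure from
arch_scale_thermal_pressure().

/*
 * Standalone sketch of the clamping in compute_energy(): per-CPU
 * utilization is capped at the "allowed" capacity, i.e. the raw CPU
 * capacity minus the thermal pressure, before it feeds the Energy
 * Model. All numeric values below are hypothetical.
 */
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical PD: raw capacity 1024, thermal pressure 224. */
	unsigned long cpu_cap = 1024;
	unsigned long thermal_pressure = 224;
	unsigned long _cpu_cap = cpu_cap - thermal_pressure;	/* 800 */

	/* Hypothetical per-CPU utilization values for a 3-CPU PD. */
	unsigned long cpu_util[] = { 300, 900, 512 };
	unsigned long max_util = 0, sum_util = 0;
	int i;

	for (i = 0; i < 3; i++) {
		/*
		 * Without the clamp, CPU 1 would contribute 900 even
		 * though the capped frequency cannot serve more than 800.
		 */
		sum_util += MIN(cpu_util[i], _cpu_cap);
		max_util = MAX(max_util, MIN(cpu_util[i], _cpu_cap));
	}

	/* These two values would be handed to em_cpu_energy(). */
	printf("max_util=%lu sum_util=%lu\n", max_util, sum_util);
	return 0;
}

With these numbers, CPU 1's raw utilization (900) exceeds what the
thermally capped capacity can deliver (800), so the clamp keeps it from
inflating both sum_util (1612 instead of 1712) and max_util (800 instead
of 900) before they reach em_cpu_energy().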