From: Douglas RAILLARD
To: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org, mingo@redhat.com, peterz@infradead.org,
    rjw@rjwysocki.net, viresh.kumar@linaro.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, douglas.raillard@arm.com,
    dietmar.eggemann@arm.com, qperret@qperret.net,
    patrick.bellasi@matbug.net, dh.han@samsung.com
Subject: [RFC PATCH v3 5/6] sched/cpufreq: Boost schedutil frequency ramp up
Date: Fri, 11 Oct 2019 14:44:59 +0100
Message-Id: <20191011134500.235736-6-douglas.raillard@arm.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191011134500.235736-1-douglas.raillard@arm.com>
References: <20191011134500.235736-1-douglas.raillard@arm.com>
In some situations, it can be worthwhile to temporarily spend more power
if that buys a useful frequency boost. Use the new
sugov_cpu_ramp_boost() function to drive an energy-aware boost on top of
the minimal required frequency.

Since that boost number cannot be accurate (short of a crystal ball), it
is only used in a way that gives direct control over the power it is
going to cost. This keeps a platform-independent level of control over
the average power, while allowing frequency bursts when we know a (set
of) tasks can make use of them.

In shared policies, the maximum of all CPUs' boosts is used. Since the
extra power expenditure is bounded, it cannot skyrocket even on
platforms with a large number of cores in the same frequency domain
and/or a very high cost ratio between the lowest and highest OPPs.

Signed-off-by: Douglas RAILLARD
---
 kernel/sched/cpufreq_schedutil.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index c118f85d1f3d..7c1a749fb6ef 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -226,6 +226,9 @@ static unsigned long sugov_cpu_ramp_boost_update(struct sugov_cpu *sg_cpu)
  * @sg_policy: schedutil policy object to compute the new frequency for.
  * @util: Current CPU utilization.
  * @max: CPU capacity.
+ * @boost: Extra power that can be spent on top of the minimum amount of power
+ *	   required to meet capacity requirements, as a percentage between 0 and
+ *	   EM_COST_MARGIN_SCALE.
  *
  * If the utilization is frequency-invariant, choose the new frequency to be
  * proportional to it, that is
@@ -244,7 +247,8 @@ static unsigned long sugov_cpu_ramp_boost_update(struct sugov_cpu *sg_cpu)
  * cpufreq driver limitations.
  */
 static unsigned int get_next_freq(struct sugov_policy *sg_policy,
-				  unsigned long util, unsigned long max)
+				  unsigned long util, unsigned long max,
+				  unsigned long boost)
 {
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int freq = arch_scale_freq_invariant() ?
@@ -257,7 +261,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	 * Try to get a higher frequency if one is available, given the extra
 	 * power we are ready to spend.
 	 */
-	freq = em_pd_get_higher_freq(pd, freq, 0);
+	freq = em_pd_get_higher_freq(pd, freq, boost);
 
 	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
 		return sg_policy->next_freq;
@@ -539,6 +543,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	unsigned long util, max;
 	unsigned int next_f;
 	bool busy;
+	unsigned long ramp_boost = 0;
 
 	sugov_iowait_boost(sg_cpu, time, flags);
 	sg_cpu->last_update = time;
@@ -552,10 +557,10 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	busy = !sg_policy->need_freq_update && sugov_cpu_is_busy(sg_cpu);
 
 	util = sugov_get_util(sg_cpu);
-	sugov_cpu_ramp_boost_update(sg_cpu);
+	ramp_boost = sugov_cpu_ramp_boost_update(sg_cpu);
 	max = sg_cpu->max;
 	util = sugov_iowait_apply(sg_cpu, time, util, max);
-	next_f = get_next_freq(sg_policy, util, max);
+	next_f = get_next_freq(sg_policy, util, max, ramp_boost);
 	/*
 	 * Do not reduce the frequency if the CPU has not been idle
 	 * recently, as the reduction is likely to be premature then.
@@ -587,6 +592,8 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned long util = 0, max = 1;
 	unsigned int j;
+	unsigned long ramp_boost = 0;
+	unsigned long j_ramp_boost = 0;
 
 	for_each_cpu(j, policy->cpus) {
 		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
@@ -594,7 +601,11 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 
 		j_util = sugov_get_util(j_sg_cpu);
 		if (j_sg_cpu == sg_cpu)
-			sugov_cpu_ramp_boost_update(sg_cpu);
+			j_ramp_boost = sugov_cpu_ramp_boost_update(sg_cpu);
+		else
+			j_ramp_boost = sugov_cpu_ramp_boost(j_sg_cpu);
+		ramp_boost = max(ramp_boost, j_ramp_boost);
+
 		j_max = j_sg_cpu->max;
 		j_util = sugov_iowait_apply(j_sg_cpu, time, j_util, j_max);
@@ -605,7 +616,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 	}
 
-	return get_next_freq(sg_policy, util, max);
+	return get_next_freq(sg_policy, util, max, ramp_boost);
 }
 
 static void
-- 
2.23.0
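As a standalone illustration of the bounded-cost idea the commit message
describes (not kernel code): starting from the minimal OPP that meets the
capacity request, pick the highest OPP whose energy cost stays within a
boost-scaled margin of that minimal cost. The OPP table, the helper name
boosted_freq() and the cost-margin formula below are illustrative
assumptions standing in for em_pd_get_higher_freq(); only
EM_COST_MARGIN_SCALE mirrors the patch series.

```c
#include <stddef.h>

/* Illustrative OPP entry: frequency (kHz) and relative energy cost. */
struct opp {
	unsigned long freq;
	unsigned long cost;
};

/* Scale of the boost margin, mirroring the patch series. */
#define EM_COST_MARGIN_SCALE	1024UL

/*
 * Sketch of a bounded-cost boost: allow at most
 * cost(min_freq) * boost / EM_COST_MARGIN_SCALE extra cost on top of the
 * minimal acceptable OPP, then return the highest frequency that fits.
 * @table is assumed sorted by ascending frequency.
 */
static unsigned long boosted_freq(const struct opp *table, size_t n,
				  unsigned long min_freq, unsigned long boost)
{
	unsigned long max_cost = 0, freq = min_freq;
	size_t i;

	/* Cost of the minimal OPP meeting the capacity request ... */
	for (i = 0; i < n; i++) {
		if (table[i].freq >= min_freq) {
			/* ... plus the extra power we are ready to spend. */
			max_cost = table[i].cost +
				(table[i].cost * boost) / EM_COST_MARGIN_SCALE;
			break;
		}
	}

	/* Highest acceptable OPP whose cost fits the allowed margin. */
	for (i = 0; i < n; i++) {
		if (table[i].freq >= min_freq && table[i].cost <= max_cost)
			freq = table[i].freq;
	}

	return freq;
}
```

With a made-up table { {500, 100}, {1000, 250}, {1500, 600} }, a zero
boost keeps the minimal 500 kHz OPP, while a boost of 2 *
EM_COST_MARGIN_SCALE allows up to 3x the minimal cost and selects the
1000 kHz OPP; the 1500 kHz OPP stays out of reach, which is why the
extra expenditure cannot skyrocket even with a very high cost ratio
between the lowest and highest OPPs.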