Subject: Re: [RFC PATCH v4 4/6] sched/cpufreq: Introduce sugov_cpu_ramp_boost
To: "Rafael J. Wysocki"
Cc: Linux Kernel Mailing List, "Rafael J. Wysocki", Viresh Kumar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    qperret@google.com, Linux PM
References: <20200122173538.1142069-1-douglas.raillard@arm.com>
 <20200122173538.1142069-5-douglas.raillard@arm.com>
From: Douglas Raillard
Organization: ARM
Message-ID: <9b5afae9-0cf5-6c3a-b94b-0796da4e6a71@arm.com>
Date: Thu, 23 Jan 2020 17:21:05 +0000
In-Reply-To:
X-Mailing-List: linux-kernel@vger.kernel.org

On 1/23/20 3:55 PM, Rafael J. Wysocki wrote:
> On Wed, Jan 22, 2020 at 6:36 PM Douglas RAILLARD wrote:
>>
>> Use the utilization signals dynamic to detect when the utilization of a
>> set of tasks starts increasing because of a change in tasks' behavior.
>> This allows detecting when spending extra power for faster frequency
>> ramp up response would be beneficial to the reactivity of the system.
>>
>> This ramp boost is computed as the difference between util_avg and
>> util_est_enqueued. This number somehow represents a lower bound of how
>> much extra utilization this tasks is actually using, compared to our
>> best current stable knowledge of it (which is util_est_enqueued).
>>
>> When the set of runnable tasks changes, the boost is disabled as the
>> impact of blocked utilization on util_avg will make the delta with
>> util_est_enqueued not very informative.
>>
>> Signed-off-by: Douglas RAILLARD
>> ---
>>  kernel/sched/cpufreq_schedutil.c | 43 ++++++++++++++++++++++++++++++++
>>  1 file changed, 43 insertions(+)
>>
>> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
>> index 608963da4916..25a410a1ff6a 100644
>> --- a/kernel/sched/cpufreq_schedutil.c
>> +++ b/kernel/sched/cpufreq_schedutil.c
>> @@ -61,6 +61,10 @@ struct sugov_cpu {
>>         unsigned long           bw_dl;
>>         unsigned long           max;
>>
>> +       unsigned long           ramp_boost;
>> +       unsigned long           util_est_enqueued;
>> +       unsigned long           util_avg;
>> +
>>         /* The field below is for single-CPU policies only: */
>>  #ifdef CONFIG_NO_HZ_COMMON
>>         unsigned long           saved_idle_calls;
>> @@ -183,6 +187,42 @@ static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
>>         }
>>  }
>>
>> +static unsigned long sugov_cpu_ramp_boost(struct sugov_cpu *sg_cpu)
>> +{
>> +       return READ_ONCE(sg_cpu->ramp_boost);
>> +}
>
> Where exactly is this function used?

In the next commit where the boost value is actually used to do something.
The function is introduced here to keep the WRITE_ONCE/READ_ONCE pair
together.

>
>> +
>> +static unsigned long sugov_cpu_ramp_boost_update(struct sugov_cpu *sg_cpu)
>> +{
>> +       struct rq *rq = cpu_rq(sg_cpu->cpu);
>> +       unsigned long util_est_enqueued;
>> +       unsigned long util_avg;
>> +       unsigned long boost = 0;
>> +
>> +       util_est_enqueued = READ_ONCE(rq->cfs.avg.util_est.enqueued);
>> +       util_avg = READ_ONCE(rq->cfs.avg.util_avg);
>> +
>> +       /*
>> +        * Boost when util_avg becomes higher than the previous stable
>> +        * knowledge of the enqueued tasks' set util, which is CPU's
>> +        * util_est_enqueued.
>> +        *
>> +        * We try to spot changes in the workload itself, so we want to
>> +        * avoid the noise of tasks being enqueued/dequeued. To do that,
>> +        * we only trigger boosting when the "amount of work" enqueued
>> +        * is stable.
>> +        */
>> +       if (util_est_enqueued == sg_cpu->util_est_enqueued &&
>> +           util_avg >= sg_cpu->util_avg &&
>> +           util_avg > util_est_enqueued)
>> +               boost = util_avg - util_est_enqueued;
>> +
>> +       sg_cpu->util_est_enqueued = util_est_enqueued;
>> +       sg_cpu->util_avg = util_avg;
>> +       WRITE_ONCE(sg_cpu->ramp_boost, boost);
>> +       return boost;
>> +}
>> +
>>  /**
>>   * get_next_freq - Compute a new frequency for a given cpufreq policy.
>>   * @sg_policy: schedutil policy object to compute the new frequency for.
>> @@ -514,6 +554,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
>>         busy = !sg_policy->need_freq_update && sugov_cpu_is_busy(sg_cpu);
>>
>>         util = sugov_get_util(sg_cpu);
>> +       sugov_cpu_ramp_boost_update(sg_cpu);
>>         max = sg_cpu->max;
>>         util = sugov_iowait_apply(sg_cpu, time, util, max);
>>         next_f = get_next_freq(sg_policy, util, max);
>> @@ -554,6 +595,8 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
>>                 unsigned long j_util, j_max;
>>
>>                 j_util = sugov_get_util(j_sg_cpu);
>> +               if (j_sg_cpu == sg_cpu)
>> +                       sugov_cpu_ramp_boost_update(sg_cpu);
>>                 j_max = j_sg_cpu->max;
>>                 j_util = sugov_iowait_apply(j_sg_cpu, time, j_util, j_max);
>>
>> --