Date: Tue, 8 May 2018 21:54:25 -0700
From: Joel Fernandes
To: Viresh Kumar
Cc: Claudio Scordino, linux-kernel@vger.kernel.org, "Rafael J. Wysocki",
    Peter Zijlstra, Ingo Molnar, Patrick Bellasi, Juri Lelli, Luca Abeni,
    Joel Fernandes, linux-pm@vger.kernel.org
Subject: Re: [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests
Message-ID: <20180509045425.GA158882@joelaf.mtv.corp.google.com>
References: <1525704215-8683-1-git-send-email-claudio@evidence.eu.com>
 <20180508065435.bcht6dyb3rpp6gk5@vireshk-i7>
In-Reply-To: <20180508065435.bcht6dyb3rpp6gk5@vireshk-i7>
Wysocki" , Peter Zijlstra , Ingo Molnar , Patrick Bellasi , Juri Lelli , Luca Abeni , Joel Fernandes , linux-pm@vger.kernel.org Subject: Re: [RFC PATCH] sched/cpufreq/schedutil: handling urgent frequency requests Message-ID: <20180509045425.GA158882@joelaf.mtv.corp.google.com> References: <1525704215-8683-1-git-send-email-claudio@evidence.eu.com> <20180508065435.bcht6dyb3rpp6gk5@vireshk-i7> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180508065435.bcht6dyb3rpp6gk5@vireshk-i7> User-Agent: Mutt/1.9.2 (2017-12-15) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, May 08, 2018 at 12:24:35PM +0530, Viresh Kumar wrote: > On 07-05-18, 16:43, Claudio Scordino wrote: > > At OSPM, it was mentioned the issue about urgent CPU frequency requests > > arriving when a frequency switch is already in progress. > > > > Besides the various issues (physical time for switching frequency, > > on-going kthread activity, etc.) one (minor) issue is the kernel > > "forgetting" such request, thus waiting the next switch time for > > recomputing the needed frequency and behaving accordingly. > > > > This patch makes the kthread serve any urgent request occurred during > > the previous frequency switch. It introduces a specific flag, only set > > when the SCHED_DEADLINE scheduling class increases the CPU utilization, > > aiming at decreasing the likelihood of a deadline miss. > > > > Indeed, some preliminary tests in critical conditions (i.e. > > SCHED_DEADLINE tasks with short periods) have shown reductions of more > > than 10% of the average number of deadline misses. On the other hand, > > the increase in terms of energy consumption when running SCHED_DEADLINE > > tasks (not yet measured) is likely to be not negligible (especially in > > case of critical scenarios like "ramp up" utilizations). > > > > The patch is meant as follow-up discussion after OSPM. > > > > Signed-off-by: Claudio Scordino > > CC: Viresh Kumar > > CC: Rafael J. Wysocki > > CC: Peter Zijlstra > > CC: Ingo Molnar > > CC: Patrick Bellasi > > CC: Juri Lelli > > Cc: Luca Abeni > > CC: Joel Fernandes > > CC: linux-pm@vger.kernel.org > > --- > > kernel/sched/cpufreq_schedutil.c | 20 ++++++++++++++++++-- > > 1 file changed, 18 insertions(+), 2 deletions(-) > > > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c > > index d2c6083..4de06b0 100644 > > --- a/kernel/sched/cpufreq_schedutil.c > > +++ b/kernel/sched/cpufreq_schedutil.c > > @@ -41,6 +41,7 @@ struct sugov_policy { > > bool work_in_progress; > > > > bool need_freq_update; > > + bool urgent_freq_update; > > }; > > > > struct sugov_cpu { > > @@ -92,6 +93,14 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time) > > !cpufreq_can_do_remote_dvfs(sg_policy->policy)) > > return false; > > > > + /* > > + * Continue computing the new frequency. In case of work_in_progress, > > + * the kthread will resched a change once the current transition is > > + * finished. 

Just for discussion's sake, is there any need for work_in_progress at all? If
queuing multiple works is fine (i.e. kthread_queue_work can handle it), then
simply queuing work whenever it is available should be OK and the kthread
loop can handle them. __cpufreq_driver_target is also protected by the work
lock if there is any concern about races... The only remaining issue is
rate-limiting of the requests, but we already do rate-limiting, just not for
the "DL increased utilization" type of requests (which I don't think we do at
the moment for urgent DL requests anyway).

Following is an untested diff to show the idea. What do you think?

thanks,

- Joel

----8<---
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index d2c6083304b4..862634ff4bf3 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -38,7 +38,6 @@ struct sugov_policy {
 	struct mutex		work_lock;
 	struct kthread_worker	worker;
 	struct task_struct	*thread;
-	bool			work_in_progress;
 
 	bool			need_freq_update;
 };
@@ -92,16 +91,8 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	    !cpufreq_can_do_remote_dvfs(sg_policy->policy))
 		return false;
 
-	if (sg_policy->work_in_progress)
-		return false;
-
 	if (unlikely(sg_policy->need_freq_update)) {
 		sg_policy->need_freq_update = false;
-		/*
-		 * This happens when limits change, so forget the previous
-		 * next_freq value and force an update.
-		 */
-		sg_policy->next_freq = UINT_MAX;
 		return true;
 	}
 
@@ -129,7 +120,6 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 		policy->cur = next_freq;
 		trace_cpu_frequency(next_freq, smp_processor_id());
 	} else {
-		sg_policy->work_in_progress = true;
 		irq_work_queue(&sg_policy->irq_work);
 	}
 }
@@ -386,8 +376,6 @@ static void sugov_work(struct kthread_work *work)
 	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
 				CPUFREQ_RELATION_L);
 	mutex_unlock(&sg_policy->work_lock);
-
-	sg_policy->work_in_progress = false;
 }
 
 static void sugov_irq_work(struct irq_work *irq_work)
@@ -671,7 +659,6 @@ static int sugov_start(struct cpufreq_policy *policy)
 	sg_policy->freq_update_delay_ns = sg_policy->tunables->rate_limit_us * NSEC_PER_USEC;
 	sg_policy->last_freq_update_time = 0;
 	sg_policy->next_freq = UINT_MAX;
-	sg_policy->work_in_progress = false;
 	sg_policy->need_freq_update = false;
 	sg_policy->cached_raw_freq = 0;
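
The behaviour both approaches are after -- an update arriving while a slow
frequency switch is in progress must not be forgotten, only deferred to the
worker's next pass -- can be sketched in plain user-space C. This is only an
illustrative model, not schedutil code: the names (fake_driver_target,
request_freq, worker) and the pthread scaffolding are made up for the
example, standing in for __cpufreq_driver_target, the update hooks and the
sugov kthread respectively.

/*
 * Toy model of "latest request wins, nothing is forgotten": the updater only
 * records the most recent frequency and pokes the worker; the worker
 * re-checks for a newer request after every switch, so a request that
 * arrives while a switch is in progress is served on the next iteration
 * instead of being dropped.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static unsigned int next_freq;
static bool pending;
static bool stop;

/* Stand-in for the slow __cpufreq_driver_target() path. */
static void fake_driver_target(unsigned int freq)
{
	printf("switching to %u kHz\n", freq);
	usleep(2000);			/* pretend the transition takes a while */
}

/* Stand-in for the kthread work loop: serve requests until told to stop. */
static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!stop) {
		if (!pending) {
			pthread_cond_wait(&cond, &lock);
			continue;
		}
		unsigned int freq = next_freq;

		pending = false;
		/* Don't hold the lock across the slow switch. */
		pthread_mutex_unlock(&lock);
		fake_driver_target(freq);
		/* A request posted meanwhile left 'pending' set; loop again. */
		pthread_mutex_lock(&lock);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Stand-in for the update path: record the latest value, poke the worker. */
static void request_freq(unsigned int freq)
{
	pthread_mutex_lock(&lock);
	next_freq = freq;		/* later requests overwrite earlier ones */
	pending = true;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker, NULL);

	request_freq(1200000);
	request_freq(1800000);		/* may arrive while the first switch runs */
	usleep(10000);

	pthread_mutex_lock(&lock);
	stop = true;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	pthread_join(tid, NULL);
	return 0;
}

If a second request_freq() call lands while fake_driver_target() is still
running, 'pending' is simply found set again when the worker re-takes the
lock, so the latest value is applied on the next iteration rather than
waiting for the next scheduler-driven update.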