From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki", Vincent Guittot,
    Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen,
    Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v6 08/16] sched/cpufreq: uclamp: Add utilization clamping for FAIR tasks
Date: Tue, 15 Jan 2019 10:15:05 +0000
Message-Id: <20190115101513.2822-9-patrick.bellasi@arm.com>
In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Each time a frequency update is required via schedutil, a frequency is
selected to (possibly) satisfy the utilization reported by each
scheduling class. However, when utilization clamping is in use, the
frequency selection should consider userspace utilization clamping
hints. This will allow, for example, to:

 - boost tasks which are directly affecting the user experience
   by running them at least at a minimum "requested" frequency

 - cap low priority tasks not directly affecting the user experience
   by running them only up to a maximum "allowed" frequency

These constraints are meant to support a per-task based tuning of the
frequency selection, thus supporting a fine grained definition of
performance boosting vs energy saving strategies in kernel space.

Add support to clamp the utilization and IOWait boost of RUNNABLE FAIR
tasks within the boundaries defined by their aggregated utilization
clamp constraints. Based on the max(min_util, max_util) of each task,
max-aggregate the CPU clamp value in a way that gives boosted tasks the
performance they need when they happen to be co-scheduled with other
capped tasks.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki

---
Changes in v6:
 Message-ID: <20181107113849.GC14309@e110439-lin>
 - sanity check util_max >= util_min
 Others:
 - wholesale s/group/bucket/
 - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs
---
 kernel/sched/cpufreq_schedutil.c | 27 ++++++++++++++++++++++++---
 kernel/sched/sched.h             | 23 +++++++++++++++++++++++
 2 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 033ec7c45f13..520ee2b785e7 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -218,8 +218,15 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
 	 * CFS tasks and we use the same metric to track the effective
 	 * utilization (PELT windows are synchronized) we can directly add them
 	 * to obtain the CPU's actual utilization.
+	 *
+	 * CFS utilization can be boosted or capped, depending on utilization
+	 * clamp constraints requested by currently RUNNABLE tasks.
+	 * When there are no CFS RUNNABLE tasks, clamps are released and
+	 * frequency will be gracefully reduced with the utilization decay.
 	 */
-	util = util_cfs;
+	util = (type == ENERGY_UTIL)
+		? util_cfs
+		: uclamp_util(rq, util_cfs);
 	util += cpu_util_rt(rq);
 
 	dl_util = cpu_util_dl(rq);
@@ -327,6 +334,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
 			       unsigned int flags)
 {
 	bool set_iowait_boost = flags & SCHED_CPUFREQ_IOWAIT;
+	unsigned int max_boost;
 
 	/* Reset boost if the CPU appears to have been idle enough */
 	if (sg_cpu->iowait_boost &&
@@ -342,11 +350,24 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
 		return;
 	sg_cpu->iowait_boost_pending = true;
 
+	/*
+	 * Boost FAIR tasks only up to the CPU clamped utilization.
+	 *
+	 * Since DL tasks have a much more advanced bandwidth control, it's
+	 * safe to assume that IO boost does not apply to those tasks.
+	 * Instead, since RT tasks are not utilization clamped, we don't want
+	 * to apply clamping on IO boost while there is blocked RT
+	 * utilization.
+	 */
+	max_boost = sg_cpu->iowait_boost_max;
+	if (!cpu_util_rt(cpu_rq(sg_cpu->cpu)))
+		max_boost = uclamp_util(cpu_rq(sg_cpu->cpu), max_boost);
+
 	/* Double the boost at each request */
 	if (sg_cpu->iowait_boost) {
 		sg_cpu->iowait_boost <<= 1;
-		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
-			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
+		if (sg_cpu->iowait_boost > max_boost)
+			sg_cpu->iowait_boost = max_boost;
 		return;
 	}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b7f3ee8ba164..95d62a2a0b44 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2267,6 +2267,29 @@ static inline unsigned int uclamp_none(int clamp_id)
 	return SCHED_CAPACITY_SCALE;
 }
 
+#ifdef CONFIG_UCLAMP_TASK
+static inline unsigned int uclamp_util(struct rq *rq, unsigned int util)
+{
+	unsigned int min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
+	unsigned int max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
+
+	/*
+	 * Since the CPU's {min,max}_util clamps are MAX aggregated considering
+	 * RUNNABLE tasks with _different_ clamps, we can end up with an
+	 * inversion, which we can fix at usage time.
+	 */
+	if (unlikely(min_util >= max_util))
+		return min_util;
+
+	return clamp(util, min_util, max_util);
+}
+#else /* CONFIG_UCLAMP_TASK */
+static inline unsigned int uclamp_util(struct rq *rq, unsigned int util)
+{
+	return util;
+}
+#endif /* CONFIG_UCLAMP_TASK */
+
 #ifdef arch_scale_freq_capacity
 # ifndef arch_scale_freq_invariant
 # define arch_scale_freq_invariant()	true
-- 
2.19.2