From: Patrick Bellasi <patrick.bellasi@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, Rafael J. Wysocki,
    Viresh Kumar, Vincent Guittot, Paul Turner, Dietmar Eggemann,
    Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes,
    Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v3 07/14] sched/core: uclamp: enforce last task UCLAMP_MAX
Date: Mon, 6 Aug 2018 17:39:39 +0100
Message-Id: <20180806163946.28380-8-patrick.bellasi@arm.com>
In-Reply-To: <20180806163946.28380-1-patrick.bellasi@arm.com>
References: <20180806163946.28380-1-patrick.bellasi@arm.com>

When a util_max clamped task sleeps, its clamp constraints are removed
from the CPU. However, the blocked utilization on that CPU can still be
higher than the max clamp value enforced while that task was running.

Removing the max clamp when a CPU is going idle can thus allow unwanted
CPU frequency increases while the task is not running. This can happen,
for example, when another (smaller) task is running on a different CPU
of the same frequency domain. In this case, when we aggregate the
utilization of all the CPUs in a shared frequency domain, schedutil can
still see the full, non-clamped blocked utilization of all the CPUs and
thus eventually increase the frequency.

Let's fix this by using:

   uclamp_cpu_put_id(UCLAMP_MAX)
      uclamp_cpu_update(last_clamp_value)

to detect when a CPU has no more RUNNABLE clamped tasks and to flag this
condition. Thus, while a CPU is idle, we can still enforce the last used
clamp value for it.

By contrast, we do not track any UCLAMP_MIN since, while a CPU is idle,
we don't want to enforce any minimum frequency. Indeed, in that case we
rely just on blocked load decay to smoothly reduce the frequency.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
Changes in v3:
 Message-ID:
 - rename UCLAMP_NONE into UCLAMP_NOT_VALID
Changes in v2:
 - rebased on v4.18-rc4
 - new patch to improve a specific issue
---
 kernel/sched/core.c  | 35 +++++++++++++++++++++++++++++++----
 kernel/sched/sched.h |  2 ++
 2 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bc2beedec7bf..ff76b000bbe8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -906,7 +906,8 @@ uclamp_group_find(int clamp_id, unsigned int clamp_value)
  * For the specified clamp index, this method computes the new CPU utilization
  * clamp to use until the next change on the set of RUNNABLE tasks on that CPU.
  */
-static inline void uclamp_cpu_update(struct rq *rq, int clamp_id)
+static inline void uclamp_cpu_update(struct rq *rq, int clamp_id,
+				     unsigned int last_clamp_value)
 {
 	struct uclamp_group *uc_grp = &rq->uclamp.group[clamp_id][0];
 	int max_value = UCLAMP_NOT_VALID;
@@ -924,6 +925,19 @@ static inline void uclamp_cpu_update(struct rq *rq, int clamp_id)
 		if (max_value >= SCHED_CAPACITY_SCALE)
 			break;
 	}
+
+	/*
+	 * Just for the UCLAMP_MAX value, in case there are no RUNNABLE
+	 * tasks, we keep the CPU clamped to the last task's clamp value.
+	 * This avoids frequency spikes to MAX when one CPU, with a high
+	 * blocked utilization, sleeps and another CPU, in the same frequency
+	 * domain, no longer sees the clamp on the first CPU.
+	 */
+	if (clamp_id == UCLAMP_MAX && max_value == UCLAMP_NOT_VALID) {
+		rq->uclamp.flags |= UCLAMP_FLAG_IDLE;
+		max_value = last_clamp_value;
+	}
+
 	rq->uclamp.value[clamp_id] = max_value;
 }
@@ -953,13 +967,26 @@ static inline void uclamp_cpu_get_id(struct task_struct *p,
 	uc_grp = &rq->uclamp.group[clamp_id][0];
 	uc_grp[group_id].tasks += 1;
+	/* Force clamp update on idle exit */
+	uc_cpu = &rq->uclamp;
+	clamp_value = p->uclamp[clamp_id].value;
+	if (unlikely(uc_cpu->flags & UCLAMP_FLAG_IDLE)) {
+		/*
+		 * This function is called for both UCLAMP_MIN (before) and
+		 * UCLAMP_MAX (after). Let's reset the flag only the second
+		 * time, once we know that UCLAMP_MIN has already been updated.
+		 */
+		if (clamp_id == UCLAMP_MAX)
+			uc_cpu->flags &= ~UCLAMP_FLAG_IDLE;
+		uc_cpu->value[clamp_id] = clamp_value;
+		return;
+	}
+
 	/*
 	 * If this is the new max utilization clamp value, then we can update
 	 * straight away the CPU clamp value. Otherwise, the current CPU clamp
 	 * value is still valid and we are done.
 	 */
-	uc_cpu = &rq->uclamp;
-	clamp_value = p->uclamp[clamp_id].value;
 	if (uc_cpu->value[clamp_id] < clamp_value)
 		uc_cpu->value[clamp_id] = clamp_value;
 }
@@ -1011,7 +1038,7 @@ static inline void uclamp_cpu_put_id(struct task_struct *p,
 	uc_cpu = &rq->uclamp;
 	clamp_value = uc_grp[group_id].value;
 	if (clamp_value >= uc_cpu->value[clamp_id])
-		uclamp_cpu_update(rq, clamp_id);
+		uclamp_cpu_update(rq, clamp_id, clamp_value);
 }

 /**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index bb305e3d5737..d5855babb9c9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -803,6 +803,8 @@ struct uclamp_group {
  * values, i.e. no min/max clamping at all.
  */
 struct uclamp_cpu {
+#define UCLAMP_FLAG_IDLE 0x01
+	int flags;
 	int value[UCLAMP_CNT];
 	struct uclamp_group group[UCLAMP_CNT][CONFIG_UCLAMP_GROUPS_COUNT + 1];
 };
-- 
2.18.0