From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J.
Wysocki", Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v5 06/15] sched/core: uclamp: enforce last task UCLAMP_MAX
Date: Mon, 29 Oct 2018 18:33:01 +0000
Message-Id: <20181029183311.29175-8-patrick.bellasi@arm.com>
In-Reply-To: <20181029183311.29175-1-patrick.bellasi@arm.com>
References: <20181029183311.29175-1-patrick.bellasi@arm.com>

When a util_max clamped task sleeps, its clamp constraints are removed
from the CPU. However, the blocked utilization on that CPU can still be
higher than the max clamp value enforced while that task was running.

The release of a util_max clamp when a CPU is going to be idle could
thus allow unwanted CPU frequency increases while tasks are not
running. This can happen, for example, when a frequency update is
triggered from another CPU of the same frequency domain. In this case,
when we aggregate the utilization of all the CPUs in a shared frequency
domain, schedutil can still see the full, unclamped, blocked utilization
of all the CPUs and thus, eventually, increase the frequency.

Let's fix this by using:

   uclamp_cpu_put_id(UCLAMP_MAX)
     uclamp_cpu_update(last_clamp_value)

to detect when a CPU has no more RUNNABLE clamped tasks and to flag this
condition. Thus, while a CPU is idle, we can still enforce the last used
clamp value for it.

On the contrary, we do not track any UCLAMP_MIN since, while a CPU is
idle, we don't want to enforce any minimum frequency. Indeed, in this
case, we rely just on the decay of the blocked utilization to smoothly
reduce the CPU frequency.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J.
Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Quentin Perret
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org

---
Changes in v5:
 Others:
 - reduce usage of alias local variables whenever the global ones can
   still be used without affecting code readability
 - reduced usage of inline comments
 - rebased on v4.19

Changes in v4:
 Message-ID: <20180816172016.GG2960@e110439-lin>
 - ensure to always reset clamp holding on wakeup from IDLE
 Others:
 - rebased on v4.19-rc1

Changes in v3:
 Message-ID:
 - rename UCLAMP_NONE into UCLAMP_NOT_VALID

Changes in v2:
 - rebased on v4.18-rc4
 - new patch to improve a specific issue
---
 kernel/sched/core.c  | 41 ++++++++++++++++++++++++++++++++++++++---
 kernel/sched/sched.h | 11 +++++++++++
 2 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 21f6251a1d44..b23f80c07be9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -796,10 +796,11 @@ static union uclamp_map uclamp_maps[UCLAMP_CNT][UCLAMP_GROUPS];
  * For the specified clamp index, this method computes the new CPU utilization
  * clamp to use until the next change on the set of active clamp groups.
  */
-static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id)
+static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id,
+				     unsigned int last_clamp_value)
 {
 	unsigned int group_id;
-	int max_value = 0;
+	int max_value = -1;
 
 	for (group_id = 0; group_id < UCLAMP_GROUPS; ++group_id) {
 		if (!rq->uclamp.group[clamp_id][group_id].tasks)
@@ -810,6 +811,28 @@ static inline void uclamp_cpu_update(struct rq *rq, unsigned int clamp_id)
 		if (max_value >= SCHED_CAPACITY_SCALE)
 			break;
 	}
+
+	/*
+	 * Just for the UCLAMP_MAX value, in case there are no RUNNABLE
+	 * tasks, we want to keep the CPU clamped to the last task's clamp
+	 * value.
+	 * This is to avoid frequency spikes to MAX when one CPU, with
+	 * a high blocked utilization, sleeps and another CPU, in the same
+	 * frequency domain, no longer sees the clamp on the first CPU.
+	 *
+	 * The UCLAMP_FLAG_IDLE is set whenever we detect, from the above
+	 * loop, that there are no more RUNNABLE tasks on that CPU.
+	 * In this case we enforce the CPU util_max to that of the last
+	 * dequeued task.
+	 */
+	if (max_value < 0) {
+		if (clamp_id == UCLAMP_MAX) {
+			rq->uclamp.flags |= UCLAMP_FLAG_IDLE;
+			max_value = last_clamp_value;
+		} else {
+			max_value = uclamp_none(UCLAMP_MIN);
+		}
+	}
+
 	rq->uclamp.value[clamp_id] = max_value;
 }
 
@@ -835,6 +858,18 @@ static inline void uclamp_cpu_get_id(struct task_struct *p, struct rq *rq,
 	rq->uclamp.group[clamp_id][group_id].tasks += 1;
 
+	if (unlikely(rq->uclamp.flags & UCLAMP_FLAG_IDLE)) {
+		/*
+		 * Reset clamp holds on idle exit.
+		 * This function is called for both UCLAMP_MIN (before) and
+		 * UCLAMP_MAX (after). Let's reset the flag only the second
+		 * time, once we know that UCLAMP_MIN has already been
+		 * updated.
+		 */
+		if (clamp_id == UCLAMP_MAX)
+			rq->uclamp.flags &= ~UCLAMP_FLAG_IDLE;
+		rq->uclamp.value[clamp_id] = p->uclamp[clamp_id].value;
+	}
+
 	if (rq->uclamp.value[clamp_id] < p->uclamp[clamp_id].value)
 		rq->uclamp.value[clamp_id] = p->uclamp[clamp_id].value;
 }
@@ -883,7 +918,7 @@ static inline void uclamp_cpu_put_id(struct task_struct *p, struct rq *rq,
 	}
 #endif
 	if (clamp_value >= rq->uclamp.value[clamp_id])
-		uclamp_cpu_update(rq, clamp_id);
+		uclamp_cpu_update(rq, clamp_id, clamp_value);
 }
 
 /**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 94c4f2f410ad..859192ec492c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -807,6 +807,17 @@ struct uclamp_group {
 struct uclamp_cpu {
 	struct uclamp_group group[UCLAMP_CNT][UCLAMP_GROUPS];
 	int value[UCLAMP_CNT];
+/*
+ * Idle clamp holding:
+ * Whenever a CPU is idle, we enforce the util_max clamp value of the last
+ * task running on that CPU.
+ * This bit is used to flag a clamp hold
+ * currently active on a CPU. This flag is:
+ * - set when we update the clamp value of a CPU at the time of dequeuing
+ *   the last task before entering idle
+ * - reset when we enqueue the first task after a CPU wakeup from IDLE
+ */
+#define UCLAMP_FLAG_IDLE 0x01
+	int flags;
 };
 #endif /* CONFIG_UCLAMP_TASK */
-- 
2.18.0