From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki",
    Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret,
    Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos,
    Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v5 13/15] sched/core: uclamp: map TG's clamp values into CPU's clamp groups
Date: Mon, 29 Oct 2018 18:33:08 +0000
Message-Id: <20181029183311.29175-15-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20181029183311.29175-1-patrick.bellasi@arm.com>
References: <20181029183311.29175-1-patrick.bellasi@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Utilization clamping requires mapping each different clamp value into
one of the available clamp groups used in the scheduler fast-path to
account for RUNNABLE tasks. Thus, each time a TG's clamp value sysfs
attribute is updated via:

   cpu_util_{min,max}_write_u64()

we need to update the task group reference to the new value's clamp
group and release the reference to the previous one.

Let's ensure that, whenever a task group is assigned a specific
clamp_value, this is properly translated into a unique clamp group to
be used in the fast-path (i.e. at enqueue/dequeue time). We do that by
slightly refactoring uclamp_group_get() to make the *task_struct
parameter optional. This allows us to reuse the code already available
to support the per-task API.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Suren Baghdasaryan
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Quentin Perret
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
Changes in v5:
 Others:
 - rebased on v4.19

Changes in v4:
 Others:
 - rebased on v4.19-rc1

Changes in v3:
 Message-ID:
 - add explicit calls to uclamp_group_find(), which is no longer part
   of uclamp_group_get()
 Others:
 - rebased on tip/sched/core

Changes in v2:
 - rebased on v4.18-rc4
 - this code has been split from a previous patch to simplify the review
---
 include/linux/sched.h |  7 +++--
 kernel/sched/core.c   | 64 +++++++++++++++++++++++++++++++++++--------
 2 files changed, 56 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d3f6bf62ab3f..7698e7554892 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -601,9 +601,10 @@ struct sched_dl_entity {
  * clamp group index (group_id), i.e.
  * index of the per-cpu RUNNABLE tasks refcounting array
  *
- * The mapped bit is set whenever a task has been mapped on a clamp group for
- * the first time. When this bit is set, any clamp group get (for a new clamp
- * value) will be matches by a clamp group put (for the old clamp value).
+ * The mapped bit is set whenever a scheduling entity has been mapped on a
+ * clamp group for the first time. When this bit is set, any clamp group get
+ * (for a new clamp value) will be matched by a clamp group put (for the old
+ * clamp value).
  *
  * The active bit is set whenever a task has got an effective clamp group
  * and value assigned, which can be different from the user requested ones.
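The get/put discipline documented above (every clamp group get for a new
value matched by a put for the old one) can be modeled outside the kernel
as a small refcounted mapping table. The sketch below is illustrative
only: the names (group_get, group_put, group_update, GROUP_CNT) and the
linear search are assumptions for the example, not the kernel's actual
uclamp mapping implementation.

```c
#include <assert.h>

#define GROUP_CNT 4  /* illustrative; the kernel sizes this via Kconfig */

struct clamp_group {
	int value;    /* clamp value mapped by this group */
	int refcount; /* scheduling entities referencing it */
};

static struct clamp_group groups[GROUP_CNT];

/* Find a group already mapping @value, or claim a free slot for it. */
static int group_get(int value)
{
	int free_id = -1;

	for (int id = 0; id < GROUP_CNT; ++id) {
		if (groups[id].refcount > 0 && groups[id].value == value) {
			groups[id].refcount++;
			return id;
		}
		if (groups[id].refcount == 0 && free_id < 0)
			free_id = id;
	}
	assert(free_id >= 0); /* clamp groups exhausted */
	groups[free_id].value = value;
	groups[free_id].refcount = 1;
	return free_id;
}

/* Release one reference; a zero refcount frees the slot for reuse. */
static void group_put(int group_id)
{
	assert(groups[group_id].refcount > 0);
	groups[group_id].refcount--;
}

/* Re-map an entity to the group for @new_value: get before put. */
static int group_update(int old_group_id, int new_value)
{
	int new_id = group_get(new_value);

	group_put(old_group_id);
	return new_id;
}
```

Two task groups requesting the same clamp value end up sharing one group
(and one slot in the per-CPU refcounting arrays), which is exactly what
lets the fast-path account RUNNABLE tasks per group rather than per
distinct value.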
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cb49bffb3da8..3dcd1c17a244 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1396,9 +1396,9 @@ static void __init init_uclamp(void)
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 		/* Init root TG's clamp group */
 		uc_se = &root_task_group.uclamp[clamp_id];
-		uc_se->value = uclamp_none(clamp_id);
-		uc_se->group_id = 0;
-		uc_se->effective.value = uclamp_none(clamp_id);
+		uclamp_group_get(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX));
+		uc_se->effective.group_id = uc_se->group_id;
+		uc_se->effective.value = uc_se->value;
 #endif
 	}
 }
@@ -6971,6 +6971,22 @@ void ia64_set_curr_task(int cpu, struct task_struct *p)
 static DEFINE_SPINLOCK(task_group_lock);
 
 #ifdef CONFIG_UCLAMP_TASK_GROUP
+/*
+ * free_uclamp_sched_group: release utilization clamp references of a TG
+ * @tg: the task group being removed
+ *
+ * An empty task group can be removed only when it has no more tasks or child
+ * groups. This means that we can also safely release all the reference
+ * counting to clamp groups.
+ */
+static inline void free_uclamp_sched_group(struct task_group *tg)
+{
+	int clamp_id;
+
+	for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id)
+		uclamp_group_put(clamp_id, tg->uclamp[clamp_id].group_id);
+}
+
 /**
  * alloc_uclamp_sched_group: initialize a new TG's for utilization clamping
  * @tg: the newly created task group
@@ -6989,17 +7005,18 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg,
 	int clamp_id;
 
 	for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) {
-		tg->uclamp[clamp_id].value =
-			parent->uclamp[clamp_id].value;
-		tg->uclamp[clamp_id].group_id =
-			parent->uclamp[clamp_id].group_id;
+		uclamp_group_get(NULL, &tg->uclamp[clamp_id], clamp_id,
+				 parent->uclamp[clamp_id].value);
 		tg->uclamp[clamp_id].effective.value =
 			parent->uclamp[clamp_id].effective.value;
+		tg->uclamp[clamp_id].effective.group_id =
+			parent->uclamp[clamp_id].effective.group_id;
 	}
 
 	return 1;
 }
 #else
+static inline void free_uclamp_sched_group(struct task_group *tg) { }
 static inline int alloc_uclamp_sched_group(struct task_group *tg,
 					   struct task_group *parent)
 {
@@ -7009,6 +7026,7 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg,
 
 static void sched_free_group(struct task_group *tg)
 {
+	free_uclamp_sched_group(tg);
 	free_fair_sched_group(tg);
 	free_rt_sched_group(tg);
 	autogroup_free(tg);
@@ -7258,6 +7276,7 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
  * cpu_util_update_hier: propagate effective clamp down the hierarchy
  * @css: the task group to update
  * @clamp_id: the clamp index to update
+ * @group_id: the group index mapping the new task clamp value
  * @value: the new task group clamp value
  *
  * The effective clamp for a TG is expected to track the most restrictive
@@ -7277,9 +7296,13 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
  * be propagated down to all the descendants. When a subgroup is found which
  * has already its effective clamp value matching its clamp value, then we can
  * safely skip all its descendants which are granted to be already in sync.
+ *
+ * The TG's group_id is also updated to ensure it tracks the effective clamp
+ * value.
  */
 static void cpu_util_update_hier(struct cgroup_subsys_state *css,
-				 int clamp_id, unsigned int value)
+				 unsigned int clamp_id, unsigned int group_id,
+				 unsigned int value)
 {
 	struct cgroup_subsys_state *top_css = css;
 	struct uclamp_se *uc_se, *uc_parent;
@@ -7291,8 +7314,10 @@ static void cpu_util_update_hier(struct cgroup_subsys_state *css,
 		 * groups we consider their current value.
 		 */
 		uc_se = &css_tg(css)->uclamp[clamp_id];
-		if (css != top_css)
+		if (css != top_css) {
 			value = uc_se->value;
+			group_id = uc_se->effective.group_id;
+		}
 
 		/*
 		 * Skip the whole subtrees if the current effective clamp is
@@ -7308,12 +7333,15 @@ static void cpu_util_update_hier(struct cgroup_subsys_state *css,
 		}
 
 		/* Propagate the most restrictive effective value */
-		if (uc_parent->effective.value < value)
+		if (uc_parent->effective.value < value) {
 			value = uc_parent->effective.value;
+			group_id = uc_parent->effective.group_id;
+		}
 
 		if (uc_se->effective.value == value)
 			continue;
 
 		uc_se->effective.value = value;
+		uc_se->effective.group_id = group_id;
 	}
 }
 
@@ -7326,6 +7354,7 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css,
 	if (min_value > SCHED_CAPACITY_SCALE)
 		return -ERANGE;
 
+	mutex_lock(&uclamp_mutex);
 	rcu_read_lock();
 
 	tg = css_tg(css);
@@ -7336,11 +7365,16 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css,
 		goto out;
 	}
 
+	/* Update TG's reference count */
+	uclamp_group_get(NULL, &tg->uclamp[UCLAMP_MIN], UCLAMP_MIN, min_value);
+
 	/* Update effective clamps to track the most restrictive value */
-	cpu_util_update_hier(css, UCLAMP_MIN, min_value);
+	cpu_util_update_hier(css, UCLAMP_MIN, tg->uclamp[UCLAMP_MIN].group_id,
+			     min_value);
 
 out:
 	rcu_read_unlock();
+	mutex_unlock(&uclamp_mutex);
 
 	return ret;
 }
@@ -7354,6 +7388,7 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css,
 	if (max_value > SCHED_CAPACITY_SCALE)
 		return -ERANGE;
 
+	mutex_lock(&uclamp_mutex);
 	rcu_read_lock();
 
 	tg = css_tg(css);
@@ -7364,11 +7399,16 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css,
 		goto out;
 	}
 
+	/* Update TG's reference count */
+	uclamp_group_get(NULL, &tg->uclamp[UCLAMP_MAX], UCLAMP_MAX, max_value);
+
 	/* Update effective clamps to track the most restrictive value */
-	cpu_util_update_hier(css, UCLAMP_MAX, max_value);
+	cpu_util_update_hier(css, UCLAMP_MAX, tg->uclamp[UCLAMP_MAX].group_id,
+			     max_value);
 
 out:
 	rcu_read_unlock();
+	mutex_unlock(&uclamp_mutex);
 
 	return ret;
 }
-- 
2.18.0
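The hierarchical propagation performed by cpu_util_update_hier() boils
down to: a group's effective clamp is its own requested value capped by
its parent's effective value, and a subtree whose effective value is
already in sync can be skipped. A minimal user-space sketch of that
rule, with hypothetical names (tg_node, update_hier) standing in for the
kernel's css tree walk:

```c
#include <stddef.h>

/*
 * Illustrative model only: a simple first-child/next-sibling tree in
 * place of the cgroup hierarchy, and one clamp dimension instead of the
 * per-clamp_id uclamp_se arrays.
 */
struct tg_node {
	unsigned int value;      /* requested clamp value */
	unsigned int effective;  /* value actually enforced */
	struct tg_node *parent;
	struct tg_node *child;   /* first child */
	struct tg_node *sibling; /* next sibling */
};

/* Recompute @tg's subtree after its (or an ancestor's) value changed. */
static void update_hier(struct tg_node *tg)
{
	unsigned int value = tg->value;

	/* Propagate the most restrictive effective value */
	if (tg->parent && tg->parent->effective < value)
		value = tg->parent->effective;

	/* Subtree already in sync: safe to skip all descendants */
	if (tg->effective == value)
		return;
	tg->effective = value;

	for (struct tg_node *c = tg->child; c; c = c->sibling)
		update_hier(c);
}
```

With a root at 1024, a child requesting 512 and a grandchild requesting
768, the grandchild's effective value becomes 512: its own request is
overruled by the more restrictive ancestor, mirroring the "most
restrictive value wins" semantics of the patch.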