From: Patrick Bellasi <patrick.bellasi@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    linux-api@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo,
    Rafael J. Wysocki, Vincent Guittot, Viresh Kumar, Paul Turner,
    Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
    Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v6 16/16] sched/core: uclamp: Update CPU's refcount on TG's clamp changes
Date: Tue, 15 Jan 2019 10:15:13 +0000
Message-Id: <20190115101513.2822-17-patrick.bellasi@arm.com>
In-Reply-To: <20190115101513.2822-1-patrick.bellasi@arm.com>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>

On updates of task group (TG) clamp values, ensure that the new values
are enforced on all RUNNABLE tasks of the task group, i.e. all RUNNABLE
tasks are immediately boosted and/or clamped as requested.

Do this by slightly refactoring uclamp_bucket_inc(): an additional
cgroup_subsys_state (css) parameter is used to walk the list of tasks
in the TG and update the RUNNABLE ones. Each task is updated while
holding its rq lock, the same mechanism used for cpu affinity mask
updates.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo

---
Changes in v6:
 Others:
 - wholesale s/group/bucket/
 - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs
 - small documentation updates
---
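Note: the css walk added below relies on uclamp_task_update_active(),
which is introduced earlier in this series and not visible in this
patch. For reference, a minimal sketch of that helper, assuming the
per-rq refcount helpers are named uclamp_cpu_dec_id() and
uclamp_cpu_inc_id() as in previous postings of the series:

	/* Sketch only, not part of this patch */
	static void uclamp_task_update_active(struct task_struct *p,
					      unsigned int clamp_id)
	{
		struct rq_flags rf;
		struct rq *rq;

		/* Serialize against concurrent enqueue/dequeue of p */
		rq = task_rq_lock(p, &rf);

		/*
		 * If p is not refcounted in a bucket for this clamp
		 * index, the new value is applied at its next enqueue.
		 * Otherwise, release the old bucket's refcount and
		 * charge the (updated) one.
		 */
		if (p->uclamp[clamp_id].active) {
			uclamp_cpu_dec_id(p, rq, clamp_id);
			uclamp_cpu_inc_id(p, rq, clamp_id);
		}

		task_rq_unlock(rq, p, &rf);
	}

This is also why the css walk can run safely while tasks are being
enqueued and dequeued concurrently: each per-task update is serialized
by task_rq_lock(), as noted in the changelog above.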
 kernel/sched/core.c | 56 +++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c8d1fc9880ff..36866a1b9f9d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1111,7 +1111,22 @@ static void uclamp_bucket_dec(unsigned int clamp_id, unsigned int bucket_id)
 		       &uc_map_old.data, uc_map_new.data));
 }
 
-static void uclamp_bucket_inc(struct task_struct *p, struct uclamp_se *uc_se,
+static inline void uclamp_bucket_inc_tg(struct cgroup_subsys_state *css,
+					int clamp_id, unsigned int bucket_id)
+{
+	struct css_task_iter it;
+	struct task_struct *p;
+
+	/* Update clamp buckets for RUNNABLE tasks in this TG */
+	css_task_iter_start(css, 0, &it);
+	while ((p = css_task_iter_next(&it)))
+		uclamp_task_update_active(p, clamp_id);
+	css_task_iter_end(&it);
+}
+
+static void uclamp_bucket_inc(struct task_struct *p,
+			      struct cgroup_subsys_state *css,
+			      struct uclamp_se *uc_se,
 			      unsigned int clamp_id, unsigned int clamp_value)
 {
 	union uclamp_map *uc_maps = &uclamp_maps[clamp_id][0];
@@ -1183,6 +1198,9 @@ static void uclamp_bucket_inc(struct task_struct *p, struct uclamp_se *uc_se,
 	uc_se->value = clamp_value;
 	uc_se->bucket_id = bucket_id;
 
+	if (css)
+		uclamp_bucket_inc_tg(css, clamp_id, bucket_id);
+
 	if (p)
 		uclamp_task_update_active(p, clamp_id);
 
@@ -1221,11 +1239,11 @@ int sched_uclamp_handler(struct ctl_table *table, int write,
 	}
 
 	if (old_min != sysctl_sched_uclamp_util_min) {
-		uclamp_bucket_inc(NULL, &uclamp_default[UCLAMP_MIN],
+		uclamp_bucket_inc(NULL, NULL, &uclamp_default[UCLAMP_MIN],
 				  UCLAMP_MIN, sysctl_sched_uclamp_util_min);
 	}
 	if (old_max != sysctl_sched_uclamp_util_max) {
-		uclamp_bucket_inc(NULL, &uclamp_default[UCLAMP_MAX],
+		uclamp_bucket_inc(NULL, NULL, &uclamp_default[UCLAMP_MAX],
 				  UCLAMP_MAX, sysctl_sched_uclamp_util_max);
 	}
 	goto done;
@@ -1260,12 +1278,12 @@ static int __setscheduler_uclamp(struct task_struct *p,
 	mutex_lock(&uclamp_mutex);
 	if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) {
 		p->uclamp[UCLAMP_MIN].user_defined = true;
-		uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MIN],
+		uclamp_bucket_inc(p, NULL, &p->uclamp[UCLAMP_MIN],
 				  UCLAMP_MIN, lower_bound);
 	}
 	if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) {
 		p->uclamp[UCLAMP_MAX].user_defined = true;
-		uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MAX],
+		uclamp_bucket_inc(p, NULL, &p->uclamp[UCLAMP_MAX],
 				  UCLAMP_MAX, upper_bound);
 	}
 	mutex_unlock(&uclamp_mutex);
@@ -1304,7 +1322,7 @@ static void uclamp_fork(struct task_struct *p, bool reset)
 		p->uclamp[clamp_id].mapped = false;
 		p->uclamp[clamp_id].active = false;
-		uclamp_bucket_inc(NULL, &p->uclamp[clamp_id],
+		uclamp_bucket_inc(NULL, NULL, &p->uclamp[clamp_id],
 				  clamp_id, clamp_value);
 	}
 }
@@ -1326,19 +1344,23 @@ static void __init init_uclamp(void)
 	memset(uclamp_maps, 0, sizeof(uclamp_maps));
 	for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) {
 		uc_se = &init_task.uclamp[clamp_id];
-		uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id));
+		uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id,
+				  uclamp_none(clamp_id));
 
 		uc_se = &uclamp_default[clamp_id];
-		uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(clamp_id));
+		uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id,
+				  uclamp_none(clamp_id));
 
 		/* RT tasks by default will go to max frequency */
 		uc_se = &uclamp_default_perf[clamp_id];
-		uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX));
+		uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id,
+				  uclamp_none(UCLAMP_MAX));
 
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 		/* Init root TG's clamp bucket */
 		uc_se = &root_task_group.uclamp[clamp_id];
-		uclamp_bucket_inc(NULL, uc_se, clamp_id, uclamp_none(UCLAMP_MAX));
+		uclamp_bucket_inc(NULL, NULL, uc_se, clamp_id,
+				  uclamp_none(UCLAMP_MAX));
 		uc_se->effective.bucket_id = uc_se->bucket_id;
 		uc_se->effective.value = uc_se->value;
 #endif
@@ -6937,8 +6959,8 @@ static inline int alloc_uclamp_sched_group(struct task_group *tg,
 	int clamp_id;
 
 	for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) {
-		uclamp_bucket_inc(NULL, &tg->uclamp[clamp_id], clamp_id,
-				  parent->uclamp[clamp_id].value);
+		uclamp_bucket_inc(NULL, NULL, &tg->uclamp[clamp_id],
+				  clamp_id, parent->uclamp[clamp_id].value);
 		tg->uclamp[clamp_id].effective.value =
 			parent->uclamp[clamp_id].effective.value;
 		tg->uclamp[clamp_id].effective.bucket_id =
@@ -7239,6 +7261,10 @@ static void cpu_util_update_hier(struct cgroup_subsys_state *css,
 
 		uc_se->effective.value = value;
 		uc_se->effective.bucket_id = bucket_id;
+
+		/* Immediately update descendants' active tasks */
+		if (css != top_css)
+			uclamp_bucket_inc_tg(css, clamp_id, bucket_id);
 	}
 }
 
@@ -7263,7 +7289,8 @@ static int cpu_util_min_write_u64(struct cgroup_subsys_state *css,
 	}
 
 	/* Update TG's reference count */
-	uclamp_bucket_inc(NULL, &tg->uclamp[UCLAMP_MIN], UCLAMP_MIN, min_value);
+	uclamp_bucket_inc(NULL, css, &tg->uclamp[UCLAMP_MIN],
+			  UCLAMP_MIN, min_value);
 
 	/* Update effective clamps to track the most restrictive value */
 	cpu_util_update_hier(css, UCLAMP_MIN, tg->uclamp[UCLAMP_MIN].bucket_id,
@@ -7297,7 +7324,8 @@ static int cpu_util_max_write_u64(struct cgroup_subsys_state *css,
 	}
 
 	/* Update TG's reference count */
-	uclamp_bucket_inc(NULL, &tg->uclamp[UCLAMP_MAX], UCLAMP_MAX, max_value);
+	uclamp_bucket_inc(NULL, css, &tg->uclamp[UCLAMP_MAX],
+			  UCLAMP_MAX, max_value);
 
 	/* Update effective clamps to track the most restrictive value */
 	cpu_util_update_hier(css, UCLAMP_MAX, tg->uclamp[UCLAMP_MAX].bucket_id,
-- 
2.19.2
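
The new TG update path can be exercised from userspace. Below is a
minimal test sketch, assuming the clamps are exposed as cpu.util.min
and cpu.util.max attributes of a cgroup v2 hierarchy mounted at
/sys/fs/cgroup, as in previous postings of this series; the group name
tg0 is made up for the example. With RUNNABLE tasks inside tg0, the
write should now re-clamp them immediately rather than taking effect
only at their next enqueue:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical cgroup and attribute path, see note above */
		const char *attr = "/sys/fs/cgroup/tg0/cpu.util.min";
		const char *val = "512\n";
		int fd;

		fd = open(attr, O_WRONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Triggers cpu_util_min_write_u64() -> uclamp_bucket_inc() */
		if (write(fd, val, strlen(val)) < 0)
			perror("write");
		close(fd);
		return 0;
	}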