From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki",
    Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret,
    Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos,
    Joel Fernandes, Steve Muckle, Suren Baghdasaryan, Alessio Balsini
Subject: [PATCH v11 5/5] sched/core: uclamp: Update CPU's refcount on TG's clamp changes
Date: Mon, 8 Jul 2019 09:43:57 +0100
Message-Id: <20190708084357.12944-6-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190708084357.12944-1-patrick.bellasi@arm.com>
References: <20190708084357.12944-1-patrick.bellasi@arm.com>

On updates of task group (TG) clamp values, ensure that these new values
are enforced on all RUNNABLE tasks of the task group, i.e. all RUNNABLE
tasks are immediately boosted and/or capped as requested.

Do that each time we update effective clamps from cpu_util_update_eff().
Use the *cgroup_subsys_state (css) to walk the list of tasks in each
affected TG and update their RUNNABLE tasks. Update each task by using
the same mechanism used for cpu affinity masks updates, i.e. by taking
the rq lock.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo

---
Changes in v11:
 Message-ID: <20190624174607.GQ657710@devbig004.ftw2.facebook.com>
 - Ensure group limits always clamps group protection
---
 kernel/sched/core.c | 58 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 57 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2591a70c85cf..ddc5fcd4b9cf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1043,6 +1043,57 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
 		uclamp_rq_dec_id(rq, p, clamp_id);
 }
 
+static inline void
+uclamp_update_active(struct task_struct *p, unsigned int clamp_id)
+{
+	struct rq_flags rf;
+	struct rq *rq;
+
+	/*
+	 * Lock the task and the rq where the task is (or was) queued.
+	 *
+	 * We might lock the (previous) rq of a !RUNNABLE task, but that's the
+	 * price to pay to safely serialize util_{min,max} updates with
+	 * enqueues, dequeues and migration operations.
+	 * This is the same locking schema used by __set_cpus_allowed_ptr().
+	 */
+	rq = task_rq_lock(p, &rf);
+
+	/*
+	 * Setting the clamp bucket is serialized by task_rq_lock().
+	 * If the task is not yet RUNNABLE and its task_struct is not
+	 * affecting a valid clamp bucket, the next time it's enqueued,
+	 * it will already see the updated clamp bucket value.
+	 */
+	if (!p->uclamp[clamp_id].active)
+		goto done;
+
+	uclamp_rq_dec_id(rq, p, clamp_id);
+	uclamp_rq_inc_id(rq, p, clamp_id);
+
+done:
+
+	task_rq_unlock(rq, p, &rf);
+}
+
+static inline void
+uclamp_update_active_tasks(struct cgroup_subsys_state *css,
+			   unsigned int clamps)
+{
+	struct css_task_iter it;
+	struct task_struct *p;
+	unsigned int clamp_id;
+
+	css_task_iter_start(css, 0, &it);
+	while ((p = css_task_iter_next(&it))) {
+		for_each_clamp_id(clamp_id) {
+			if ((0x1 << clamp_id) & clamps)
+				uclamp_update_active(p, clamp_id);
+		}
+	}
+	css_task_iter_end(&it);
+}
+
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 static void cpu_util_update_eff(struct cgroup_subsys_state *css);
 static void uclamp_update_root_tg(void)
@@ -7087,8 +7138,13 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
 			uc_se[clamp_id].bucket_id = uclamp_bucket_id(eff[clamp_id]);
 			clamps |= (0x1 << clamp_id);
 		}
-		if (!clamps)
+		if (!clamps) {
 			css = css_rightmost_descendant(css);
+			continue;
+		}
+
+		/* Immediately update descendants RUNNABLE tasks */
+		uclamp_update_active_tasks(css, clamps);
 	}
 }
 
-- 
2.21.0
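
For illustration only (not part of the patch): the refcount swap that
uclamp_update_active() performs under the rq lock can be modeled in plain
userspace C. The toy_* names, the pthread mutex standing in for
task_rq_lock(), and the bucket array below are all invented for this
sketch; it only mirrors the dec-old-bucket / inc-new-bucket step applied
to each task that is currently refcounting a clamp bucket.

/*
 * Toy userspace model of the pattern in uclamp_update_active():
 * when a task's clamp changes while it may be enqueued, drop the
 * old bucket's refcount and take the new one under the rq lock.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_BUCKETS 5

struct toy_rq {
	pthread_mutex_t lock;           /* stand-in for task_rq_lock() */
	int bucket_tasks[NR_BUCKETS];   /* stand-in for per-rq bucket refcounts */
};

struct toy_task {
	int active;     /* non-zero if currently refcounting a bucket */
	int bucket_id;  /* bucket currently refcounted, if active */
};

/* Mirror of the dec-then-inc step: move an active task to a new bucket. */
static void toy_update_active(struct toy_rq *rq, struct toy_task *p, int new_bucket)
{
	pthread_mutex_lock(&rq->lock);
	if (p->active) {
		rq->bucket_tasks[p->bucket_id]--;   /* like uclamp_rq_dec_id() */
		p->bucket_id = new_bucket;
		rq->bucket_tasks[p->bucket_id]++;   /* like uclamp_rq_inc_id() */
	} else {
		/* not enqueued: just record the value, enqueue refcounts it later */
		p->bucket_id = new_bucket;
	}
	pthread_mutex_unlock(&rq->lock);
}

int main(void)
{
	struct toy_rq rq = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct toy_task p = { .active = 1, .bucket_id = 0 };

	rq.bucket_tasks[0] = 1;           /* p is enqueued in bucket 0 */
	toy_update_active(&rq, &p, 3);    /* a clamp change moves it to bucket 3 */

	printf("bucket 0: %d, bucket 3: %d\n", rq.bucket_tasks[0], rq.bucket_tasks[3]);
	return 0;
}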