From: Patrick Bellasi <patrick.bellasi@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki",
    Viresh Kumar, Vincent Guittot, Paul Turner, Dietmar Eggemann,
    Morten Rasmussen, Juri Lelli, Joel Fernandes, Steve Muckle
Wysocki" , Viresh Kumar , Vincent Guittot , Paul Turner , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Joel Fernandes , Steve Muckle Subject: [PATCH 5/7] sched/core: uclamp: use TG clamps to restrict TASK clamps Date: Mon, 9 Apr 2018 17:56:13 +0100 Message-Id: <20180409165615.2326-6-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.15.1 In-Reply-To: <20180409165615.2326-1-patrick.bellasi@arm.com> References: <20180409165615.2326-1-patrick.bellasi@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org When a task's util_clamp value is configured via sched_setattr, this value has to be properly accounted in the corresponding clamp group every time the task is enqueue and dequeued. When cgroups are also in use, per-task clamp values have to be aggregated to those of the CPU's controller's CGroup in which the task is currently living. Let's update uclamp_cpu_get() to provide an aggregation between the task and the TG clamp values. Every time a task is enqueued, it will be accounted in the clamp_group which defines the smaller clamp value between the task and the TG's ones. This mimics what already happen for a task's CPU affinity mask when the task is also living in a cpuset. The overall idea is that: CGroups attributes are always used to restrict the per-task attributes. For consistency purposes, as well as to properly inform userspace, the sched_getattr call is updated to always return the properly aggregated constrains as described above. This will also make sched_getattr a convenient userpace API to know the utilization constraints enforced on a task by the CGroups's CPU controller. Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Tejun Heo Cc: Paul Turner Cc: Joel Fernandes Cc: Steve Muckle Cc: Juri Lelli Cc: Dietmar Eggemann Cc: Morten Rasmussen Cc: linux-kernel@vger.kernel.org Cc: linux-pm@vger.kernel.org --- kernel/sched/core.c | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index b8299a4f03e7..592de8d32427 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -966,9 +966,18 @@ static inline void uclamp_cpu_get(struct task_struct *p, int cpu, int clamp_id) clamp_value = p->uclamp[clamp_id].value; group_id = p->uclamp[clamp_id].group_id; +#ifdef CONFIG_UCLAMP_TASK_GROUP + /* Use TG's clamp value to limit task specific values */ + if (group_id == UCLAMP_NONE || + clamp_value >= task_group(p)->uclamp[clamp_id].value) { + clamp_value = task_group(p)->uclamp[clamp_id].value; + group_id = task_group(p)->uclamp[clamp_id].group_id; + } +#else /* No task specific clamp values: nothing to do */ if (group_id == UCLAMP_NONE) return; +#endif /* Increment the current group_id */ uc_cpu->group[group_id].tasks += 1; @@ -5401,6 +5410,12 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, #ifdef CONFIG_UCLAMP_TASK attr.sched_util_min = p->uclamp[UCLAMP_MIN].value; attr.sched_util_max = p->uclamp[UCLAMP_MAX].value; +#ifdef CONFIG_UCLAMP_TASK_GROUP + if (task_group(p)->uclamp[UCLAMP_MIN].value < attr.sched_util_min) + attr.sched_util_min = task_group(p)->uclamp[UCLAMP_MIN].value; + if (task_group(p)->uclamp[UCLAMP_MAX].value < attr.sched_util_max) + attr.sched_util_max = task_group(p)->uclamp[UCLAMP_MAX].value; +#endif #endif rcu_read_unlock(); -- 2.15.1