From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, linux-api@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki", Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: [PATCH v9 02/16] sched/core: uclamp: Add bucket local max tracking
Date: Wed, 15 May 2019 10:44:45 +0100
Message-Id: <20190515094459.10317-3-patrick.bellasi@arm.com>
In-Reply-To: <20190515094459.10317-1-patrick.bellasi@arm.com>
References: <20190515094459.10317-1-patrick.bellasi@arm.com>

Because of bucketization, different task-specific clamp values are
tracked in the same bucket. For example, with 20% bucket size and
assuming to have:

  Task1: util_min=25%
  Task2: util_min=35%

both tasks will be refcounted in the [20..39]% bucket and always boosted
only up to 20%, thus implementing a simple floor aggregation normally
used in histograms.

In systems with only a few, well-defined clamp values, it would be
useful to track the exact clamp value required by a task whenever
possible. For example, if a system requires only 23% and 47% boost
values, then it's possible to track the exact boost required by each
task using only 3 buckets of ~33% size each.

Introduce a mechanism to max aggregate the requested clamp values of
RUNNABLE tasks in the same bucket. Keep it simple by resetting the
bucket value to its base value only when a bucket becomes inactive.
Allow a limited and controlled overboosting margin for tasks refcounted
in the same bucket.

In systems where the boost values are not known in advance, it is still
possible to control the maximum acceptable overboosting margin by tuning
the number of clamp groups. For example, 20 groups ensure a 5% maximum
overboost.
Remove the rq bucket initialization code since a correct bucket value
is now computed when a task is refcounted into a CPU's rq.

Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra

---
Changes in v9
Message-ID: <20190415144930.pntid6evu6r67l4o@e110439-lin>
 - fix "max local update" by moving into uclamp_rq_inc_id()
---
 kernel/sched/core.c | 43 +++++++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 29c0d465fd9e..79b57cbbe032 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -773,6 +773,11 @@ static inline unsigned int uclamp_bucket_id(unsigned int clamp_value)
 	return clamp_value / UCLAMP_BUCKET_DELTA;
 }
 
+static inline unsigned int uclamp_bucket_base_value(unsigned int clamp_value)
+{
+	return UCLAMP_BUCKET_DELTA * uclamp_bucket_id(clamp_value);
+}
+
 static inline unsigned int uclamp_none(int clamp_id)
 {
 	if (clamp_id == UCLAMP_MIN)
@@ -810,6 +815,11 @@ unsigned int uclamp_rq_max_value(struct rq *rq, unsigned int clamp_id)
  * When a task is enqueued on a rq, the clamp bucket currently defined by the
  * task's uclamp::bucket_id is refcounted on that rq. This also immediately
  * updates the rq's clamp value if required.
+ *
+ * Tasks can have a task-specific value requested from user-space, track
+ * within each bucket the maximum value for tasks refcounted in it.
+ * This "local max aggregation" allows to track the exact "requested" value
+ * for each bucket when all its RUNNABLE tasks require the same clamp.
  */
 static inline void uclamp_rq_inc_id(struct rq *rq, struct task_struct *p,
 				    unsigned int clamp_id)
@@ -823,8 +833,15 @@ static inline void uclamp_rq_inc_id(struct rq *rq, struct task_struct *p,
 	bucket = &uc_rq->bucket[uc_se->bucket_id];
 	bucket->tasks++;
 
+	/*
+	 * Local max aggregation: rq buckets always track the max
+	 * "requested" clamp value of its RUNNABLE tasks.
+	 */
+	if (bucket->tasks == 1 || uc_se->value > bucket->value)
+		bucket->value = uc_se->value;
+
 	if (uc_se->value > READ_ONCE(uc_rq->value))
-		WRITE_ONCE(uc_rq->value, bucket->value);
+		WRITE_ONCE(uc_rq->value, uc_se->value);
 }
 
 /*
@@ -851,6 +868,12 @@ static inline void uclamp_rq_dec_id(struct rq *rq, struct task_struct *p,
 	if (likely(bucket->tasks))
 		bucket->tasks--;
 
+	/*
+	 * Keep "local max aggregation" simple and accept to (possibly)
+	 * overboost some RUNNABLE tasks in the same bucket.
+	 * The rq clamp bucket value is reset to its base value whenever
+	 * there are no more RUNNABLE tasks refcounting it.
+	 */
 	if (likely(bucket->tasks))
 		return;
 
@@ -891,25 +914,9 @@ static void __init init_uclamp(void)
 	unsigned int clamp_id;
 	int cpu;
 
-	for_each_possible_cpu(cpu) {
-		struct uclamp_bucket *bucket;
-		struct uclamp_rq *uc_rq;
-		unsigned int bucket_id;
-
+	for_each_possible_cpu(cpu)
 		memset(&cpu_rq(cpu)->uclamp, 0, sizeof(struct uclamp_rq));
 
-		for_each_clamp_id(clamp_id) {
-			uc_rq = &cpu_rq(cpu)->uclamp[clamp_id];
-
-			bucket_id = 1;
-			while (bucket_id < UCLAMP_BUCKETS) {
-				bucket = &uc_rq->bucket[bucket_id];
-				bucket->value = bucket_id * UCLAMP_BUCKET_DELTA;
-				++bucket_id;
-			}
-		}
-	}
-
 	for_each_clamp_id(clamp_id) {
 		uclamp_se_set(&init_task.uclamp[clamp_id],
 			      uclamp_none(clamp_id));
-- 
2.21.0