Date: Tue, 22 Jan 2019 11:02:05 +0000
From: Patrick Bellasi
To: "Rafael J. Wysocki"
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    linux-api@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Tejun Heo,
    "Rafael J. Wysocki", Vincent Guittot, Viresh Kumar, Paul Turner,
    Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
    Todd Kjos, Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: Re: [PATCH v6 08/16] sched/cpufreq: uclamp: Add utilization clamping for FAIR tasks
Message-ID: <20190122110205.lpk6o2jdfe3mttjr@e110439-lin>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>
 <20190115101513.2822-9-patrick.bellasi@arm.com>
 <3006911.57lVBuUGX3@aspire.rjw.lan>
In-Reply-To: <3006911.57lVBuUGX3@aspire.rjw.lan>

On 22-Jan 11:37, Rafael J. Wysocki wrote:
> On Tuesday, January 15, 2019 11:15:05 AM CET Patrick Bellasi wrote:
> > Each time a frequency update is required via schedutil, a frequency is
> > selected to (possibly) satisfy the utilization reported by each
> > scheduling class. However, when utilization clamping is in use, the
> > frequency selection should consider userspace utilization clamping
> > hints. This makes it possible, for example, to:
> >
> >  - boost tasks which are directly affecting the user experience
> >    by running them at least at a minimum "requested" frequency
> >
> >  - cap low priority tasks not directly affecting the user experience
> >    by running them only up to a maximum "allowed" frequency
> >
> > These constraints are meant to support per-task tuning of the
> > frequency selection, thus supporting a fine-grained definition of
> > performance boosting vs energy saving strategies in kernel space.
> >
> > Add support to clamp the utilization and IOWait boost of RUNNABLE FAIR
> > tasks within the boundaries defined by their aggregated utilization
> > clamp constraints.
> > Based on the max(min_util, max_util) of each task, max-aggregate the
> > CPU clamp value in a way to give the boosted tasks the performance they
> > need when they happen to be co-scheduled with other capped tasks.
> >
> > Signed-off-by: Patrick Bellasi
> > Cc: Ingo Molnar
> > Cc: Peter Zijlstra
> > Cc: Rafael J. Wysocki
> >
> > ---
> > Changes in v6:
> >  Message-ID: <20181107113849.GC14309@e110439-lin>
> >  - sanity check util_max >= util_min
> >  Others:
> >  - wholesale s/group/bucket/
> >  - wholesale s/_{get,put}/_{inc,dec}/ to match refcount APIs
> > ---
> >  kernel/sched/cpufreq_schedutil.c | 27 ++++++++++++++++++++++++---
> >  kernel/sched/sched.h             | 23 +++++++++++++++++++++++
> >  2 files changed, 47 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 033ec7c45f13..520ee2b785e7 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -218,8 +218,15 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> >  	 * CFS tasks and we use the same metric to track the effective
> >  	 * utilization (PELT windows are synchronized) we can directly add them
> >  	 * to obtain the CPU's actual utilization.
> > +	 *
> > +	 * CFS utilization can be boosted or capped, depending on utilization
> > +	 * clamp constraints requested by currently RUNNABLE tasks.
> > +	 * When there are no CFS RUNNABLE tasks, clamps are released and
> > +	 * frequency will be gracefully reduced with the utilization decay.
> >  	 */
> > -	util = util_cfs;
> > +	util = (type == ENERGY_UTIL)
> > +		? util_cfs
> > +		: uclamp_util(rq, util_cfs);
> >  	util += cpu_util_rt(rq);
> >
> >  	dl_util = cpu_util_dl(rq);
> > @@ -327,6 +334,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> >  			       unsigned int flags)
> >  {
> >  	bool set_iowait_boost = flags & SCHED_CPUFREQ_IOWAIT;
> > +	unsigned int max_boost;
> >
> >  	/* Reset boost if the CPU appears to have been idle enough */
> >  	if (sg_cpu->iowait_boost &&
> > @@ -342,11 +350,24 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
> >  		return;
> >  	sg_cpu->iowait_boost_pending = true;
> >
> > +	/*
> > +	 * Boost FAIR tasks only up to the CPU clamped utilization.
> > +	 *
> > +	 * Since DL tasks have a much more advanced bandwidth control, it's
> > +	 * safe to assume that IO boost does not apply to those tasks.
> > +	 * Instead, since RT tasks are not utilization clamped, we don't want
> > +	 * to apply clamping on IO boost while there is blocked RT
> > +	 * utilization.
> > +	 */
> > +	max_boost = sg_cpu->iowait_boost_max;
> > +	if (!cpu_util_rt(cpu_rq(sg_cpu->cpu)))
> > +		max_boost = uclamp_util(cpu_rq(sg_cpu->cpu), max_boost);
> > +
> >  	/* Double the boost at each request */
> >  	if (sg_cpu->iowait_boost) {
> >  		sg_cpu->iowait_boost <<= 1;
> > -		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
> > -			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
> > +		if (sg_cpu->iowait_boost > max_boost)
> > +			sg_cpu->iowait_boost = max_boost;
> >  		return;
> >  	}
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index b7f3ee8ba164..95d62a2a0b44 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2267,6 +2267,29 @@ static inline unsigned int uclamp_none(int clamp_id)
> >  	return SCHED_CAPACITY_SCALE;
> >  }
> >
> > +#ifdef CONFIG_UCLAMP_TASK
> > +static inline unsigned int uclamp_util(struct rq *rq, unsigned int util)
> > +{
> > +	unsigned int min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
> > +	unsigned int max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
> > +
> > +	/*
> > +	 * Since CPU's {min,max}_util clamps are MAX aggregated considering
> > +	 * RUNNABLE tasks with _different_ clamps, we can end up with an
> > +	 * inversion, which we can fix at usage time.
> > +	 */
> > +	if (unlikely(min_util >= max_util))
> > +		return min_util;
> > +
> > +	return clamp(util, min_util, max_util);
> > +}
> > +#else /* CONFIG_UCLAMP_TASK */
> > +static inline unsigned int uclamp_util(struct rq *rq, unsigned int util)
> > +{
> > +	return util;
> > +}
> > +#endif /* CONFIG_UCLAMP_TASK */
> > +
> >  #ifdef arch_scale_freq_capacity
> >  # ifndef arch_scale_freq_invariant
> >  # define arch_scale_freq_invariant() true
> >
>
> IMO it would be better to combine this patch with the next one.

The main reason was to better document in the changelog what we do for
the two different classes...

> At least some things in it I was about to ask about would go away
> then. :-)

... but if it creates confusion I can certainly merge them.

Or maybe I can clarify in this patch what's not clear: may I ask what
your questions were?

> Besides, I don't really see a reason for the split here.

It was mainly to make the changes required for RT more self-contained.
For that class only, not for FAIR, the following patch adds
uclamp_default_perf, a set of system defaults used to track and account
tasks requesting the maximum frequency.

Again, I can either clarify the above patch further or just merge the
two together: which do you prefer?

--
#include

Patrick Bellasi
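[Editor's illustration: a minimal, standalone C sketch of the clamping behaviour
discussed in the patch above. This is not kernel code: the rq buckets, READ_ONCE()
and refcounting are omitted, the helper signatures differ from the kernel ones, and
the task clamp values are made-up examples. It only mirrors the MAX aggregation of
per-task clamps and the min/max resolution done by uclamp_util(), to show which
utilization schedutil would see when a boosted task is co-scheduled with a capped one.]

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024

struct task_clamp {
	unsigned int util_min;	/* requested minimum "boost" */
	unsigned int util_max;	/* allowed maximum "cap" */
};

static unsigned int clamp_val(unsigned int val, unsigned int lo, unsigned int hi)
{
	if (val < lo)
		return lo;
	if (val > hi)
		return hi;
	return val;
}

static unsigned int max_val(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

/*
 * Sketch of the uclamp_util() helper: if the max-aggregated CPU clamps
 * end up inverted (min >= max), the min clamp wins; otherwise the
 * utilization is clamped into [cpu_min, cpu_max].
 */
static unsigned int uclamp_util(unsigned int util, unsigned int cpu_min,
				unsigned int cpu_max)
{
	if (cpu_min >= cpu_max)
		return cpu_min;
	return clamp_val(util, cpu_min, cpu_max);
}

int main(void)
{
	/* Two co-scheduled RUNNABLE tasks: one boosted, one capped (example values). */
	struct task_clamp boosted = { .util_min = 512, .util_max = SCHED_CAPACITY_SCALE };
	struct task_clamp capped  = { .util_min = 0,   .util_max = 200 };

	/* CPU clamps are the MAX aggregation of the per-task requests. */
	unsigned int cpu_min = max_val(boosted.util_min, capped.util_min);	/* 512 */
	unsigned int cpu_max = max_val(boosted.util_max, capped.util_max);	/* 1024 */

	unsigned int util_cfs = 100;	/* raw CFS utilization */

	/* Prints 512: the boosted task still gets its requested minimum. */
	printf("clamped util = %u\n", uclamp_util(util_cfs, cpu_min, cpu_max));

	return 0;
}

[If only the capped task were runnable, the aggregated clamps would be 0/200 and the
same helper would cap a larger raw utilization down to 200 instead of boosting it.]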