Date: Tue, 13 Nov 2018 07:29:48 -0800
From: Patrick Bellasi
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ingo Molnar,
    Tejun Heo, "Rafael J. Wysocki", Vincent Guittot, Viresh Kumar,
    Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen,
    Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle,
    Suren Baghdasaryan
Subject: Re: [PATCH v5 07/15] sched/core: uclamp: add clamp group bucketing support
Message-ID: <20181113152948.GC7681@darkstar>
References: <20181029183311.29175-1-patrick.bellasi@arm.com>
 <20181029183311.29175-9-patrick.bellasi@arm.com>
 <20181112000910.GC3038@worktop>
In-Reply-To: <20181112000910.GC3038@worktop>

On 12-Nov 01:09, Peter Zijlstra wrote:
> On Mon, Oct 29, 2018 at 06:33:02PM +0000, Patrick Bellasi wrote:
> > The number of clamp groups configured at compile time defines the range
> > of utilization clamp values tracked by each CPU clamp group.
> > For example, with the default configuration:
> >    CONFIG_UCLAMP_GROUPS_COUNT 5
> > we will have 5 clamp groups tracking 20% utilization each. In this case,
> > a task with util_min=25% will have group_id=1.
>
> OK I suppose; but should we not do a wholesale s/group/bucket/ at this
> point?

Yes, if bucketization is acceptable, we should probably rename.

The question is: are you ok with a renaming in this patch, or would you
prefer I use that naming from the beginning?

If we want to use "bucket" from the beginning, then we should probably
also squash the entire patch into the previous ones and drop this one.

I personally prefer to keep this concept in a separate patch, but at
the same time I don't much like the idea of a massive renaming in this
patch.

> We should probably raise the minimum number of buckets from 1 though :-)

Mmm... the default is already set to what fits into a single cache
line... perhaps we can use that as the minimum too?

But, technically, we can (partially) track different clamp values even
with just one bucket... (explanation in the comment below).

> > +/*
> > + * uclamp_group_value: get the "group value" for a given "clamp value"
> > + * @value: the utiliation "clamp value" to translate
> > + *
> > + * The number of clamp group, which is defined at compile time, allows to
> > + * track a finite number of different clamp values. Thus clamp values are
> > + * grouped into bins each one representing a different "group value".
> > + * This method returns the "group value" corresponding to the specified
> > + * "clamp value".
> > + */
> > +static inline unsigned int uclamp_group_value(unsigned int clamp_value)
> > +{
> > +#define UCLAMP_GROUP_DELTA (SCHED_CAPACITY_SCALE / CONFIG_UCLAMP_GROUPS_COUNT)
> > +#define UCLAMP_GROUP_UPPER (UCLAMP_GROUP_DELTA * CONFIG_UCLAMP_GROUPS_COUNT)
> > +
> > +	if (clamp_value >= UCLAMP_GROUP_UPPER)
> > +		return SCHED_CAPACITY_SCALE;
> > +
> > +	return UCLAMP_GROUP_DELTA * (clamp_value / UCLAMP_GROUP_DELTA);
> > +}
>
> Can't we further simplify; I mean, at this point all we really need to
> know is the rq's highest group_id that is in use. We don't need to
> actually track the value anymore.

That would force us to represent each clamp value with its nominal
bucket value only.

Instead, by tracking the actual clamp value within a bucket, we get the
chance to update the bucket value to the actual (max) clamp value of
the RUNNABLE tasks in that bucket.

In a properly configured system, this allows us to track exact clamp
values with a minimal number of buckets.

--
#include
Patrick Bellasi