Date: Thu, 14 Mar 2019 12:22:56 +0000
From: Patrick Bellasi
To: Suren Baghdasaryan
Cc: Peter Zijlstra, LKML, linux-pm@vger.kernel.org, linux-api@vger.kernel.org,
	Ingo Molnar, Tejun Heo, "Rafael J. Wysocki", Vincent Guittot,
	Viresh Kumar, Paul Turner, Quentin Perret, Dietmar Eggemann,
	Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle
Subject: Re: [PATCH v7 01/15] sched/core: uclamp: Add CPU's clamp buckets refcounting
Message-ID: <20190314122256.7wb3ydswpkfmntvf@e110439-lin>
References: <20190208100554.32196-1-patrick.bellasi@arm.com>
	<20190208100554.32196-2-patrick.bellasi@arm.com>
	<20190313140925.GE5922@hirez.programming.kicks-ass.net>
	<20190313152359.lkyzel7fakjoi5hu@e110439-lin>
	<20190313194619.GR2482@worktop.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: NeoMutt/20180716
X-Mailing-List: linux-kernel@vger.kernel.org

On 13-Mar 14:08, Suren Baghdasaryan wrote:
> On Wed, Mar 13, 2019 at 12:46 PM Peter Zijlstra wrote:
> >
> > On Wed, Mar 13, 2019 at 03:23:59PM +0000, Patrick Bellasi wrote:
> > > On 13-Mar 15:09, Peter Zijlstra wrote:
> > > > On Fri, Feb 08, 2019 at 10:05:40AM +0000, Patrick Bellasi wrote:
> > > >
> > > > > +static inline void uclamp_rq_update(struct rq *rq, unsigned int clamp_id)
> > > > > +{
> > > > > +	struct uclamp_bucket *bucket = rq->uclamp[clamp_id].bucket;
> > > > > +	unsigned int max_value = uclamp_none(clamp_id);
> > > >
> > > > That's 1024 for uclamp_max
> > > >
> > > > > +	unsigned int bucket_id;
> > > > > +
> > > > > +	/*
> > > > > +	 * Both min and max clamps are MAX aggregated, thus the topmost
> > > > > +	 * bucket with some tasks defines the rq's clamp value.
> > > > > +	 */
> > > > > +	bucket_id = UCLAMP_BUCKETS;
> > > > > +	do {
> > > > > +		--bucket_id;
> > > > > +		if (!rq->uclamp[clamp_id].bucket[bucket_id].tasks)
> > > > > +			continue;
> > > > > +		max_value = bucket[bucket_id].value;
> > > >
> > > > but this will then _lower_ it. That's not a MAX aggregate.
> > > For uclamp_max we want max_value=1024 when there are no active tasks,
> > > which means: no max clamp enforced on CFS/RT "idle" cpus.
> > >
> > > If instead there are active RT/CFS tasks then we want the clamp value
> > > of the max group, which means: MAX aggregate active clamps.
> > >
> > > That's what the code above does and the comment says.
> >
> > That's (obviously) not how I read it... maybe something like:
> >
> > static inline unsigned int uclamp_rq_update(struct rq *rq, unsigned int clamp_id)
> > {
> > 	struct uclamp_bucket *bucket = rq->uclamp[clamp_id].bucket;
> > 	int i;
> >
> > 	/*
> > 	 * Since both min and max clamps are max aggregated, find the
> > 	 * top most bucket with tasks in.
> > 	 */
> > 	for (i = UCLAMP_BUCKETS-1; i >= 0; i--) {
> > 		if (!bucket[i].tasks)
> > 			continue;
> > 		return bucket[i].value;
> > 	}
> >
> > 	/* No tasks -- default clamp values */
> > 	return uclamp_none(clamp_id);
> > }
> >
> > would make it clearer?
>
> This way it's also more readable/obvious when it's used inside
> uclamp_rq_dec_id, assuming uclamp_rq_update is renamed into smth like
> get_max_rq_uclamp.

Right, I now have something like this:

---8<---
static inline unsigned int uclamp_rq_max_value(struct rq *rq, unsigned int clamp_id)
{
	struct uclamp_bucket *bucket = rq->uclamp[clamp_id].bucket;
	int bucket_id;

	/*
	 * Since both min and max clamps are max aggregated, find the
	 * top most bucket with tasks in.
	 */
	for (bucket_id = UCLAMP_BUCKETS-1; bucket_id >= 0; bucket_id--) {
		if (!bucket[bucket_id].tasks)
			continue;
		return bucket[bucket_id].value;
	}

	/* No tasks -- default clamp value */
	return uclamp_none(clamp_id);
}

static inline void uclamp_rq_dec_id(struct task_struct *p, struct rq *rq,
				    unsigned int clamp_id)
{
	//...
	if (bucket->value >= rq_clamp) {
		/*
		 * Reset rq's clamp bucket value to its nominal value whenever
		 * there are no more RUNNABLE tasks refcounting it.
		 */
		bucket->value = uclamp_bucket_nominal_value(rq_clamp);
		WRITE_ONCE(uc_rq->value, uclamp_rq_max_value(rq, clamp_id));
	}
}
---8<---

--
#include <best/regards.h>

Patrick Bellasi