Date: Fri, 14 Sep 2018 10:07:51 +0100
From: Patrick Bellasi
To: Peter Zijlstra
Wysocki" , Viresh Kumar , Vincent Guittot , Paul Turner , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle , Suren Baghdasaryan Subject: Re: [PATCH v4 03/16] sched/core: uclamp: add CPU's clamp groups accounting Message-ID: <20180914090751.GN1413@e110439-lin> References: <20180828135324.21976-1-patrick.bellasi@arm.com> <20180828135324.21976-4-patrick.bellasi@arm.com> <20180913191209.GY24082@hirez.programming.kicks-ass.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180913191209.GY24082@hirez.programming.kicks-ass.net> User-Agent: Mutt/1.5.24 (2015-08-30) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 13-Sep 21:12, Peter Zijlstra wrote: > On Tue, Aug 28, 2018 at 02:53:11PM +0100, Patrick Bellasi wrote: > > +static inline void uclamp_cpu_get_id(struct task_struct *p, > > + struct rq *rq, int clamp_id) > > +{ > > + struct uclamp_group *uc_grp; > > + struct uclamp_cpu *uc_cpu; > > + int clamp_value; > > + int group_id; > > + > > + /* Every task must reference a clamp group */ > > + group_id = p->uclamp[clamp_id].group_id; > > > +} > > + > > +static inline void uclamp_cpu_put_id(struct task_struct *p, > > + struct rq *rq, int clamp_id) > > +{ > > + struct uclamp_group *uc_grp; > > + struct uclamp_cpu *uc_cpu; > > + unsigned int clamp_value; > > + int group_id; > > + > > + /* New tasks don't have a previous clamp group */ > > + group_id = p->uclamp[clamp_id].group_id; > > + if (group_id == UCLAMP_NOT_VALID) > > + return; > > *confused*, so on enqueue they must have a group_id, but then on dequeue > they might no longer have? Why not ? Tasks always have a (task-specific) group_id, once set the first time. IOW, init_task::group_id is UCLAMP_NOT_VALID, as well as all the tasks forked under reset_on_fork, otherwise the get the group_id of the parent. Actually, I just noted that the reset_on_fork path is now setting p::group_id=0, which is semantically equivalent to UCLAMP_NOT_VALID... but will update that assignment for consistency in v5. The only way to set a !UCLAMP_NOT_VALID value for p::group_id is via the syscall, which will either fails or set a new valid group_id. Thus, if we have a valid p::group_id @enqueue time, we will have one @dequeue time too. Eventually it could be a different one, because while RUNNABLE we do a syscall... but this case is addressed by the following patch: [PATCH v4 04/16] sched/core: uclamp: update CPU's refcount on clamp changes https://lore.kernel.org/lkml/20180828135324.21976-5-patrick.bellasi@arm.com/ Does that makes sense ? > > +} > > > @@ -1110,6 +1313,7 @@ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags) > > if (!(flags & ENQUEUE_RESTORE)) > > sched_info_queued(rq, p); > > > > + uclamp_cpu_get(rq, p); > > p->sched_class->enqueue_task(rq, p, flags); > > } > > > > @@ -1121,6 +1325,7 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags) > > if (!(flags & DEQUEUE_SAVE)) > > sched_info_dequeued(rq, p); > > > > + uclamp_cpu_put(rq, p); > > p->sched_class->dequeue_task(rq, p, flags); > > } > > The ordering, is that right? We get while the task isn't enqueued yet, > which would suggest we put when the task is dequeued. That's the "usual trick" required for correct schedutil updates. 
> > +}
>
> > @@ -1110,6 +1313,7 @@ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
> >          if (!(flags & ENQUEUE_RESTORE))
> >                  sched_info_queued(rq, p);
> >
> > +        uclamp_cpu_get(rq, p);
> >          p->sched_class->enqueue_task(rq, p, flags);
> >  }
> >
> > @@ -1121,6 +1325,7 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
> >          if (!(flags & DEQUEUE_SAVE))
> >                  sched_info_dequeued(rq, p);
> >
> > +        uclamp_cpu_put(rq, p);
> >          p->sched_class->dequeue_task(rq, p, flags);
> >  }
>
> The ordering, is that right? We get while the task isn't enqueued yet,
> which would suggest we put when the task is dequeued.

That's the "usual trick" required for correct schedutil updates: the
scheduler class code will likely poke schedutil, and thus we want to be
sure the CPU clamp values are already updated by the time we compute
the next OPP.

Cheers,
Patrick

-- 
#include <best/regards.h>

Patrick Bellasi