Date: Thu, 13 Sep 2018 21:12:09 +0200
From: Peter Zijlstra
To: Patrick Bellasi
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ingo Molnar,
	Tejun Heo, "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot,
	Paul Turner, Quentin Perret, Dietmar Eggemann, Morten Rasmussen,
	Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle,
	Suren Baghdasaryan
Subject: Re: [PATCH v4 03/16] sched/core: uclamp: add CPU's clamp groups accounting
Message-ID: <20180913191209.GY24082@hirez.programming.kicks-ass.net>
References: <20180828135324.21976-1-patrick.bellasi@arm.com>
	<20180828135324.21976-4-patrick.bellasi@arm.com>
In-Reply-To: <20180828135324.21976-4-patrick.bellasi@arm.com>

On Tue, Aug 28, 2018 at 02:53:11PM +0100, Patrick Bellasi wrote:

> +static inline void uclamp_cpu_get_id(struct task_struct *p,
> +				     struct rq *rq, int clamp_id)
> +{
> +	struct uclamp_group *uc_grp;
> +	struct uclamp_cpu *uc_cpu;
> +	int clamp_value;
> +	int group_id;
> +
> +	/* Every task must reference a clamp group */
> +	group_id = p->uclamp[clamp_id].group_id;
> +}
> +
> +static inline void uclamp_cpu_put_id(struct task_struct *p,
> +				     struct rq *rq, int clamp_id)
> +{
> +	struct uclamp_group *uc_grp;
> +	struct uclamp_cpu *uc_cpu;
> +	unsigned int clamp_value;
> +	int group_id;
> +
> +	/* New tasks don't have a previous clamp group */
> +	group_id = p->uclamp[clamp_id].group_id;
> +	if (group_id == UCLAMP_NOT_VALID)
> +		return;

*confused*, so on enqueue a task must have a group_id, but then on
dequeue it might no longer have one?

> +}

> @@ -1110,6 +1313,7 @@ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
> 	if (!(flags & ENQUEUE_RESTORE))
> 		sched_info_queued(rq, p);
>
> +	uclamp_cpu_get(rq, p);
> 	p->sched_class->enqueue_task(rq, p, flags);
> }
>
> @@ -1121,6 +1325,7 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
> 	if (!(flags & DEQUEUE_SAVE))
> 		sched_info_dequeued(rq, p);
>
> +	uclamp_cpu_put(rq, p);
> 	p->sched_class->dequeue_task(rq, p, flags);
> }

Is that ordering right? We do the get while the task isn't enqueued yet,
which would suggest we should do the put after the task has been dequeued.
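
For illustration only, here is a minimal userspace sketch of the symmetric
ordering being asked about: the clamp-group reference is taken before the
class enqueue, so the matching release would go after the class dequeue.
This is a toy model, not the patch's code; all names (toy_rq, toy_task,
clamp_group_refs, uclamp_get/put) are made up for the example.

	/*
	 * Toy model (assumed names, not kernel code): a per-rq refcount per
	 * clamp group, taken before "enqueue" and dropped after "dequeue".
	 */
	#include <assert.h>
	#include <stdio.h>

	#define NR_CLAMP_GROUPS 4

	struct toy_task { int group_id; int on_rq; };
	struct toy_rq   { int clamp_group_refs[NR_CLAMP_GROUPS]; };

	static void uclamp_get(struct toy_rq *rq, struct toy_task *p)
	{
		assert(!p->on_rq);	/* called while not yet enqueued */
		rq->clamp_group_refs[p->group_id]++;
	}

	static void uclamp_put(struct toy_rq *rq, struct toy_task *p)
	{
		assert(!p->on_rq);	/* symmetric: called after dequeue */
		rq->clamp_group_refs[p->group_id]--;
	}

	static void enqueue_task(struct toy_rq *rq, struct toy_task *p)
	{
		uclamp_get(rq, p);	/* before the class enqueue */
		p->on_rq = 1;		/* stands in for ->enqueue_task() */
	}

	static void dequeue_task(struct toy_rq *rq, struct toy_task *p)
	{
		p->on_rq = 0;		/* stands in for ->dequeue_task() */
		uclamp_put(rq, p);	/* after the class dequeue */
	}

	int main(void)
	{
		struct toy_rq rq = { { 0 } };
		struct toy_task p = { .group_id = 1, .on_rq = 0 };

		enqueue_task(&rq, &p);
		dequeue_task(&rq, &p);
		/* refcount returns to 0 after a matched enqueue/dequeue */
		printf("group 1 refs: %d\n", rq.clamp_group_refs[1]);
		return 0;
	}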