Date: Fri, 13 Apr 2018 12:47:45 +0100
From: Patrick Bellasi
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
        Ingo Molnar, Tejun Heo, Rafael J. Wysocki,
Wysocki" , Viresh Kumar , Vincent Guittot , Paul Turner , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Joel Fernandes , Steve Muckle Subject: Re: [PATCH 1/7] sched/core: uclamp: add CPU clamp groups accounting Message-ID: <20180413114745.GV14248@e110439-lin> References: <20180409165615.2326-1-patrick.bellasi@arm.com> <20180409165615.2326-2-patrick.bellasi@arm.com> <20180413084302.GR4043@hirez.programming.kicks-ass.net> <20180413111510.GS14248@e110439-lin> <20180413113650.GR4064@hirez.programming.kicks-ass.net> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180413113650.GR4064@hirez.programming.kicks-ass.net> User-Agent: Mutt/1.5.24 (2015-08-30) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 13-Apr 13:36, Peter Zijlstra wrote: > On Fri, Apr 13, 2018 at 12:15:10PM +0100, Patrick Bellasi wrote: > > On 13-Apr 10:43, Peter Zijlstra wrote: > > > On Mon, Apr 09, 2018 at 05:56:09PM +0100, Patrick Bellasi wrote: > > > > +static inline void uclamp_task_update(struct rq *rq, struct task_struct *p) > > > > +{ > > > > + int cpu = cpu_of(rq); > > > > + int clamp_id; > > > > + > > > > + /* The idle task does not affect CPU's clamps */ > > > > + if (unlikely(p->sched_class == &idle_sched_class)) > > > > + return; > > > > + /* DEADLINE tasks do not affect CPU's clamps */ > > > > + if (unlikely(p->sched_class == &dl_sched_class)) > > > > + return; > > > > + > > > > + for (clamp_id = 0; clamp_id < UCLAMP_CNT; ++clamp_id) { > > > > + if (uclamp_task_affects(p, clamp_id)) > > > > + uclamp_cpu_put(p, cpu, clamp_id); > > > > + else > > > > + uclamp_cpu_get(p, cpu, clamp_id); > > > > + } > > > > +} > > > > > > Is that uclamp_task_affects() thing there to fix up the fact you failed > > > to propagate the calling context (enqueue/dequeue) ? > > > > Not really, it's intended by design: we back annotate the clamp_group > > a task has been refcounted in. > > > > The uclamp_task_affects() tells if we are refcounted now and then we > > know from the back-annotation from which refcounter we need to remove > > the task. > > > > I found this solution much less racy and effective in avoiding to > > screw up the refcounter whenever we look at a task at either > > dequeue/migration time and these operations can overlaps with the > > slow-path. Meaning, when we change the task specific clamp_group > > either via syscall or cgroups attributes. > > > > IOW, the back annotation allows to decouple refcounting from > > clamp_group configuration in a lockless way. > > But it adds extra state and logic, to a fastpath, for no reason. > > I suspect you messed up the cgroup side; because the syscall should > already have done task_rq_lock() and hold both p->pi_lock and rq->lock > and have dequeued the task when changing the attribute. Yes, actually I'm using task_rq_lock() from the cgroup callback to update each task already queued. And I do the same from the sched_setattr syscall... > It is actually really hard to make the syscall do it wrong. ... thus, I'll look better into this. Not sure now if there was some other corner-case. In the past I remember some funny dance in cgroup callbacks when a task was terminating (like being moved in the root-rq just before exiting). But, as you say, if we always have the task_rq_lock we should be safe. -- #include Patrick Bellasi