Date: Tue, 22 Jan 2019 16:13:17 +0100 (CET)
From: Peter Zijlstra
To: Patrick Bellasi
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    linux-api@vger.kernel.org, Ingo Molnar, Tejun Heo, Rafael J. Wysocki,
    Vincent Guittot, Viresh Kumar, Paul Turner, Quentin Perret,
    Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos,
    Joel Fernandes, Steve Muckle, Suren Baghdasaryan
Subject: Re: [PATCH v6 07/16] sched/core: uclamp: Add system default clamps
Message-ID: <20190122151317.GH13777@hirez.programming.kicks-ass.net>
References: <20190115101513.2822-1-patrick.bellasi@arm.com>
 <20190115101513.2822-8-patrick.bellasi@arm.com>
 <20190122135644.GP27931@hirez.programming.kicks-ass.net>
 <20190122144329.ziimv6fejwvky7yb@e110439-lin>
In-Reply-To: <20190122144329.ziimv6fejwvky7yb@e110439-lin>

On Tue, Jan 22, 2019 at 02:43:29PM +0000, Patrick Bellasi wrote:
> On 22-Jan 14:56, Peter Zijlstra wrote:
> > On Tue, Jan 15, 2019 at 10:15:04AM +0000, Patrick Bellasi wrote:
> >
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index 84294925d006..c8f391d1cdc5 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -625,6 +625,11 @@ struct uclamp_se {
> > >  	unsigned int bucket_id : bits_per(UCLAMP_BUCKETS);
> > >  	unsigned int mapped : 1;
> > >  	unsigned int active : 1;
> > > +	/* Clamp bucket and value actually used by a RUNNABLE task */
> > > +	struct {
> > > +		unsigned int value : bits_per(SCHED_CAPACITY_SCALE);
> > > +		unsigned int bucket_id : bits_per(UCLAMP_BUCKETS);
> > > +	} effective;
> >
> > I am confuzled by this thing..
> > so uclamp_se already has a value/bucket, which per the prior code is
> > the effective one.
> >
> > Now; I think I see why you want another value; you need the second to
> > store the original value for when the system limits change and we must
> > re-evaluate.
>
> Yes, that's one reason; the other one being to properly support
> cgroups when we add them in the following patches.
>
> Effective will always track the value/bucket in which the task has
> been refcounted at enqueue time, and it depends on the aggregated
> value.

> > Should you not update all tasks?
>
> That's true, but that's also an expensive operation; that's why now
> I'm doing only lazy updates at next enqueue time.

Aaah, so you refcount on the original value, which allows you to skip
fixing up all tasks. I missed that bit.

> Do you think that could be acceptable?

Think so; it's a sysctl poke, 'nobody' ever does that.