Date: Mon, 5 Oct 2020 18:15:00 +0100
From: Qais Yousef
To: Patrick Bellasi
Cc: Yun Hsiang, Dietmar Eggemann, peterz@infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/1] sched/uclamp: release per-task uclamp control if user set to default value
Message-ID: <20201005171500.eztpptd76fotkwa6@e107158-lin.cambridge.arm.com>
References: <20200928082643.133257-1-hsiang023167@gmail.com>
 <8272de8d-9868-d419-e2bb-d5e2c0614b63@arm.com>
 <20201002053812.GA176142@ubuntu>
 <57e6b3d3-22cd-0533-cfe7-e689c7983fcc@arm.com>
 <87o8lg7gpi.derkling@matbug.net>
In-Reply-To: <87o8lg7gpi.derkling@matbug.net>

On 10/05/20 18:58, Patrick Bellasi wrote:

[...]

> >> it can not go back to the initial state to let the module(group) control.
> >
> > In case A changes its values e.g. from 3a to 3b it will go back to be
> > controlled by /TG again (like it was when it had no user defined
> > values).
>
> True, however it's also true that strictly speaking once a task has
> defined a per-task value, we will always aggregate/clamp that value wrt
> to TG and SystemWide value.
>
> >> But the other tasks in the group will be affected by the group.
>
> This is not clear to me.
>
> All tasks in a group will be treated independently. All the tasks are
> subject to the same _individual_ aggregation/clamping policy.

I think the confusing bit is this check in uclamp_tg_restrict():

	1085	uc_max = task_group(p)->uclamp[clamp_id];
	1086	if (uc_req.value > uc_max.value || !uc_req.user_defined)
	1087		return uc_max;

If a task is !user_defined then it'll *inherit* the TG value. So you can
end up with 2 different behaviors based on that flag. I.e. if 2 tasks have
their util_min=0, but one is user_defined while the other isn't, the
effective uclamp value will look different for the 2 tasks.

IIUC, Yun wants to be able to reset this user_defined flag to re-enable
this inheritance behavior for a task. Which, I agree with you, seems a
sensible thing to allow (via a new sched_setattr() flag, of course).

Thanks

--
Qais Yousef
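
P.S. For anyone following along, here is a rough user-space sketch of what
setting and then releasing a per-task clamp could look like from the
caller's side. This is only an illustration, not part of Yun's patch: the
SCHED_FLAG_UTIL_CLAMP_RESET name and its 0x80 value are hypothetical
placeholders for whatever new sched_setattr() flag ends up being proposed,
and current kernels would reject the unknown flag with -EINVAL.

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>	/* SCHED_FLAG_UTIL_CLAMP_MIN, SCHED_FLAG_KEEP_ALL */
#include <linux/sched/types.h>	/* struct sched_attr */

/* Hypothetical flag -- not in the mainline uapi headers at time of writing. */
#ifndef SCHED_FLAG_UTIL_CLAMP_RESET
#define SCHED_FLAG_UTIL_CLAMP_RESET	0x80
#endif

/* Give @pid a user-defined util_min clamp; this sets uc_req.user_defined. */
static int uclamp_set_min(pid_t pid, uint32_t min)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	/* KEEP_ALL leaves policy/priority untouched, only the clamp changes. */
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = min;

	return syscall(SYS_sched_setattr, pid, &attr, 0);
}

/*
 * Drop the per-task request again: clear user_defined so the task goes back
 * to inheriting the task group / system-wide clamp, as discussed above.
 */
static int uclamp_reset(pid_t pid)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP_RESET;

	return syscall(SYS_sched_setattr, pid, &attr, 0);
}

The point of a dedicated reset flag (rather than writing 0) is exactly the
asymmetry described above: util_min=0 with user_defined set still bypasses
the TG inheritance, so userspace needs an explicit way to clear the flag.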