Date: Tue, 8 Jun 2021 15:25:54 +0100
From: Qais Yousef
To: Xuewen Yan
Cc: Quentin Perret, Ingo Molnar, Peter Zijlstra, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Benjamin Segall,
    Mel Gorman, Daniel Bristot de Oliveira, linux-kernel, Chunyan Zhang,
    Ryan Y, Patrick Bellasi, tj@kernel.org
Subject: Re: [PATCH] sched/uclamp: Avoid setting cpu.uclamp.min bigger than cpu.uclamp.max
Message-ID: <20210608142554.tbiu2s5qnwflmk27@e107158-lin.cambridge.arm.com>
References: <20210602123803.15738-1-xuewen.yan94@gmail.com>
 <20210604160839.2op4ak75vle3gmt3@e107158-lin.cambridge.arm.com>
 <20210605114908.mqfsip5pskamls6k@e107158-lin.cambridge.arm.com>
 <20210607134902.nlgvqtzj35rhjg7x@e107158-lin.cambridge.arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/08/21 19:45, Xuewen Yan wrote:
> > Looking at this again now, I better understand what you were trying to say.
> > I got confused that you were still arguing about cgroup inverted
> > cpu.uclamp.min/max, but you're actually talking about something else.
>
> Generally speaking, this kind of situation basically does not exist,
> but I just wanted to consider all the situations that can occur when
> users use it.

+1

> > It would be a lot easier to not cross-talk threads and reply to my patch
> > directly with this remark.
>
> Sorry for the trouble caused by my unfamiliarity with the mailing
> list; I will pay attention next time :)

Not really a problem, it was just a bit confusing to get the right context :)

> > +	uc_min = task_group(p)->uclamp[UCLAMP_MIN].value;
> > +	uc_max = task_group(p)->uclamp[UCLAMP_MAX].value;
> > +	val = uc_req.value;
> > +	uc_req.value = clamp(val, uc_min, uc_max);
>
> This is not a good solution, because it just clamps uc_req.value, but
> uc_req.bucket_id is not changed.

This is what I actually have now. I did move to using uclamp_se_set()
(see the short note after the diff below).

I also needed to modify uclamp_update_active_tasks() so that both
uclamp_min and uclamp_max are updated unconditionally.

I still need to sleep on it to make sure I haven't missed something
else, but it looks fine so far.

Thanks!

--
Qais Yousef

--->8---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9e9a5be35cde..1d2d3e6648a6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1403,38 +1403,28 @@ static void uclamp_sync_util_min_rt_default(void)
 static inline struct uclamp_se
 uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
 {
-	struct uclamp_se uc_req = p->uclamp_req[clamp_id];
+	/* Copy by value as we could modify it */
+	struct uclamp_se uc_eff = p->uclamp_req[clamp_id];
 #ifdef CONFIG_UCLAMP_TASK_GROUP
+	unsigned long tg_min, tg_max, value;
+
 	/*
 	 * Tasks in autogroups or root task group will be
 	 * restricted by system defaults.
 	 */
 	if (task_group_is_autogroup(task_group(p)))
-		return uc_req;
+		return uc_eff;
 	if (task_group(p) == &root_task_group)
-		return uc_req;
+		return uc_eff;
 
-	switch (clamp_id) {
-	case UCLAMP_MIN: {
-		struct uclamp_se uc_min = task_group(p)->uclamp[clamp_id];
-		if (uc_req.value < uc_min.value)
-			return uc_min;
-		break;
-	}
-	case UCLAMP_MAX: {
-		struct uclamp_se uc_max = task_group(p)->uclamp[clamp_id];
-		if (uc_req.value > uc_max.value)
-			return uc_max;
-		break;
-	}
-	default:
-		WARN_ON_ONCE(1);
-		break;
-	}
+	tg_min = task_group(p)->uclamp[UCLAMP_MIN].value;
+	tg_max = task_group(p)->uclamp[UCLAMP_MAX].value;
+	value = uc_eff.value;
+	value = clamp(value, tg_min, tg_max);
+	uclamp_se_set(&uc_eff, value, false);
 #endif
 
-	return uc_req;
+	return uc_eff;
 }
 
 /*
@@ -1661,8 +1651,7 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
 
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 static inline void
-uclamp_update_active_tasks(struct cgroup_subsys_state *css,
-			   unsigned int clamps)
+uclamp_update_active_tasks(struct cgroup_subsys_state *css)
 {
 	enum uclamp_id clamp_id;
 	struct css_task_iter it;
@@ -1670,10 +1659,8 @@ uclamp_update_active_tasks(struct cgroup_subsys_state *css,
 
 	css_task_iter_start(css, 0, &it);
 	while ((p = css_task_iter_next(&it))) {
-		for_each_clamp_id(clamp_id) {
-			if ((0x1 << clamp_id) & clamps)
-				uclamp_update_active(p, clamp_id);
-		}
+		for_each_clamp_id(clamp_id)
+			uclamp_update_active(p, clamp_id);
 	}
 	css_task_iter_end(&it);
 }
@@ -9626,7 +9613,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
 		}
 
 		/* Immediately update descendants RUNNABLE tasks */
-		uclamp_update_active_tasks(css, clamps);
+		uclamp_update_active_tasks(css);
 	}
 }
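
A short note on the uclamp_se_set() point above: struct uclamp_se caches
a bucket_id derived from value, so clamping .value alone (as in the
earlier snippet) leaves the cached bucket stale. Going through
uclamp_se_set() re-derives the bucket as well. A sketch of the helper's
shape, paraphrased from kernel/sched/core.c of this era (simplified, not
verbatim):

static inline void uclamp_se_set(struct uclamp_se *uc_se,
				 unsigned int value, bool user_defined)
{
	uc_se->value = value;
	/* Keep the cached bucket in sync with the new value */
	uc_se->bucket_id = uclamp_bucket_id(value);
	uc_se->user_defined = user_defined;
}

This is also why uclamp_update_active_tasks() has to refresh both clamp
ids unconditionally now: since the effective value is clamp(value,
tg_min, tg_max), a change to either of the group's limits can move a
task's effective value for either clamp id, so the old per-clamp-id
'clamps' mask is no longer a safe filter.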