Date: Thu, 17 Jun 2021 10:02:19 +0100
From: Qais Yousef
To: Dietmar Eggemann
Cc: "Peter Zijlstra (Intel)", Ingo Molnar, Vincent Guittot,
        Patrick Bellasi, Tejun Heo, Quentin Perret, Wei Wang,
        Yun Hsiang, Xuewen Yan, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] sched/uclamp: Fix uclamp_tg_restrict()
Message-ID: <20210617090219.6s5zxbvr7n4yr3wa@e107158-lin.cambridge.arm.com>
References: <20210611122246.3475897-1-qais.yousef@arm.com>
 <0b47fb7f-c96b-c2d6-e5e4-9a63683d6d56@arm.com>
In-Reply-To: <0b47fb7f-c96b-c2d6-e5e4-9a63683d6d56@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
On 06/16/21 19:09, Dietmar Eggemann wrote:
> On 11/06/2021 14:22, Qais Yousef wrote:
> > Now cpu.uclamp.min acts as a protection, we need to make sure that the
> > uclamp request of the task is within the allowed range of the cgroup,
> > that is it is clamp()'ed correctly by tg->uclamp[UCLAMP_MIN] and
> > tg->uclamp[UCLAMP_MAX].
> >
> > As reported by Xuewen [1] we can have some corner cases where there's
> > inverstion between uclamp requested by task (p) and the uclamp values of
>
> s/inverstion/inversion

Fixed.

>
> [...]
>
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 9e9a5be35cde..0318b00baa97 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1403,38 +1403,28 @@ static void uclamp_sync_util_min_rt_default(void)
> >  static inline struct uclamp_se
> >  uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
> >  {
> > -	struct uclamp_se uc_req = p->uclamp_req[clamp_id];
> > +	/* Copy by value as we could modify it */
> > +	struct uclamp_se uc_eff = p->uclamp_req[clamp_id];
> >  #ifdef CONFIG_UCLAMP_TASK_GROUP
> > +	unsigned int tg_min, tg_max, value;
> >
> >  	/*
> >  	 * Tasks in autogroups or root task group will be
> >  	 * restricted by system defaults.
> >  	 */
> >  	if (task_group_is_autogroup(task_group(p)))
> > -		return uc_req;
> > +		return uc_eff;
> >  	if (task_group(p) == &root_task_group)
> > -		return uc_req;
> > +		return uc_eff;
> >
> > -	switch (clamp_id) {
> > -	case UCLAMP_MIN: {
> > -		struct uclamp_se uc_min = task_group(p)->uclamp[clamp_id];
> > -		if (uc_req.value < uc_min.value)
> > -			return uc_min;
> > -		break;
> > -	}
> > -	case UCLAMP_MAX: {
> > -		struct uclamp_se uc_max = task_group(p)->uclamp[clamp_id];
> > -		if (uc_req.value > uc_max.value)
> > -			return uc_max;
> > -		break;
> > -	}
> > -	default:
> > -		WARN_ON_ONCE(1);
> > -		break;
> > -	}
> > +	tg_min = task_group(p)->uclamp[UCLAMP_MIN].value;
> > +	tg_max = task_group(p)->uclamp[UCLAMP_MAX].value;
> > +	value = uc_eff.value;
> > +	value = clamp(value, tg_min, tg_max);
> > +	uclamp_se_set(&uc_eff, value, false);
> >  #endif
> >
> > -	return uc_req;
> > +	return uc_eff;
> >  }
>
> I got confused by the renaming uc_req -> uc_eff.
>
> We have:
>
> uclamp_eff_value()                                  (1)
>
> uclamp_se uc_eff = uclamp_eff_get(p, clamp_id);     (2)
>
> uclamp_se uc_req = uclamp_tg_restrict(p, clamp_id)  (3)
>
>   struct uclamp_se uc_eff = p->uclamp_req[clamp_id];
>   ....
>
> (3) is now calling it uc_eff where (2) still uses uc_req for the return
> of (3). IMHO uc_*eff* was used after the system level (
> uclamp_default) have been applied.

Renamed it back to uc_req.

>
> [...]
>
> > @@ -1670,10 +1659,8 @@ uclamp_update_active_tasks(struct cgroup_subsys_state *css,
> >
> >  	css_task_iter_start(css, 0, &it);
> >  	while ((p = css_task_iter_next(&it))) {
> > -		for_each_clamp_id(clamp_id) {
> > -			if ((0x1 << clamp_id) & clamps)
> > -				uclamp_update_active(p, clamp_id);
> > -		}
> > +		for_each_clamp_id(clamp_id)
> > +			uclamp_update_active(p, clamp_id);
> >  	}
> >  	css_task_iter_end(&it);
> >  }
> > @@ -9626,7 +9613,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
> >  	}
> >
> >  	/* Immediately update descendants RUNNABLE tasks */
> > -	uclamp_update_active_tasks(css, clamps);
> > +	uclamp_update_active_tasks(css);
>
> Since we now always have to update both clamp_id's, can you not update
> both under the same task_rq_lock() (in uclamp_update_active())?

Good idea.
Done this

--->8---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b4e856a4335d..fdb9a109fd68 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1620,8 +1620,9 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
 }
 
 static inline void
-uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
+uclamp_update_active(struct task_struct *p)
 {
+	enum uclamp_id clamp_id;
 	struct rq_flags rf;
 	struct rq *rq;
 
@@ -1641,9 +1642,11 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
 	 * affecting a valid clamp bucket, the next time it's enqueued,
 	 * it will already see the updated clamp bucket value.
 	 */
-	if (p->uclamp[clamp_id].active) {
-		uclamp_rq_dec_id(rq, p, clamp_id);
-		uclamp_rq_inc_id(rq, p, clamp_id);
+	for_each_clamp_id(clamp_id) {
+		if (p->uclamp[clamp_id].active) {
+			uclamp_rq_dec_id(rq, p, clamp_id);
+			uclamp_rq_inc_id(rq, p, clamp_id);
+		}
 	}
 
 	task_rq_unlock(rq, p, &rf);
@@ -1653,15 +1656,12 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
 static inline void
 uclamp_update_active_tasks(struct cgroup_subsys_state *css)
 {
-	enum uclamp_id clamp_id;
 	struct css_task_iter it;
 	struct task_struct *p;
 
 	css_task_iter_start(css, 0, &it);
-	while ((p = css_task_iter_next(&it))) {
-		for_each_clamp_id(clamp_id)
-			uclamp_update_active(p, clamp_id);
-	}
+	while ((p = css_task_iter_next(&it)))
+		uclamp_update_active(p);
 	css_task_iter_end(&it);
 }

--->8---

Thanks!

--
Qais Yousef
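
As an aside for readers following the thread, the inversion being fixed is
easy to reproduce in isolation. Below is a minimal userspace sketch of the
clamp()-based restriction that uclamp_tg_restrict() now performs; it is an
illustration only, not kernel code: clamp_val() stands in for the kernel's
clamp() macro, and the group range [0, 512] and the requested value 800 are
made-up example numbers.

#include <stdio.h>

/* Userspace stand-in for the kernel's clamp() macro. */
static unsigned int clamp_val(unsigned int val, unsigned int lo,
			      unsigned int hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

int main(void)
{
	/* Hypothetical task group range, playing the role of
	 * tg->uclamp[UCLAMP_MIN].value and tg->uclamp[UCLAMP_MAX].value. */
	unsigned int tg_min = 0, tg_max = 512;

	/* A task requests uclamp.min = 800, above the group's max: the
	 * inversion Xuewen reported. The old per-clamp_id switch only
	 * raised a UCLAMP_MIN request up to tg_min and never capped it by
	 * tg_max, so 800 leaked through. Clamping into [tg_min, tg_max]
	 * yields an effective value of 512 instead. */
	unsigned int requested = 800;

	printf("effective uclamp.min = %u\n",
	       clamp_val(requested, tg_min, tg_max)); /* prints 512 */

	return 0;
}

The same single clamp() covers both directions, which is why the patched
uclamp_tg_restrict() no longer needs to dispatch on clamp_id at all.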