Date: Tue, 8 Jun 2021 19:21:57 +0100
From: Qais Yousef
To: Xuewen Yan
Cc: Quentin Perret, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Benjamin Segall,
	Mel Gorman, Daniel Bristot de Oliveira, linux-kernel,
	Chunyan Zhang, Ryan Y, Patrick Bellasi, tj@kernel.org
Subject: Re: [PATCH] sched/uclamp: Avoid setting cpu.uclamp.min bigger than cpu.uclamp.max
Message-ID: <20210608182157.lx6ypd6sm6gwxkt4@e107158-lin.cambridge.arm.com>
References: <20210604160839.2op4ak75vle3gmt3@e107158-lin.cambridge.arm.com>
 <20210605114908.mqfsip5pskamls6k@e107158-lin.cambridge.arm.com>
 <20210607134902.nlgvqtzj35rhjg7x@e107158-lin.cambridge.arm.com>
 <20210608142554.tbiu2s5qnwflmk27@e107158-lin.cambridge.arm.com>
On 06/08/21 23:01, Xuewen Yan wrote:
> Hi
>
> On Tue, Jun 8, 2021 at 10:25 PM Qais Yousef wrote:
> >
> > --->8---
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 9e9a5be35cde..1d2d3e6648a6 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1403,38 +1403,28 @@ static void uclamp_sync_util_min_rt_default(void)
> >  static inline struct uclamp_se
> >  uclamp_tg_restrict(struct task_struct *p, enum uclamp_id clamp_id)
> >  {
> > -	struct uclamp_se uc_req = p->uclamp_req[clamp_id];
> > +	/* Copy by value as we could modify it */
> > +	struct uclamp_se uc_eff = p->uclamp_req[clamp_id];
> >  #ifdef CONFIG_UCLAMP_TASK_GROUP
> > +	unsigned long tg_min, tg_max, value;
> >
> >  	/*
> >  	 * Tasks in autogroups or root task group will be
> >  	 * restricted by system defaults.
> >  	 */
> >  	if (task_group_is_autogroup(task_group(p)))
> > -		return uc_req;
> > +		return uc_eff;
> >  	if (task_group(p) == &root_task_group)
> > -		return uc_req;
> > +		return uc_eff;
> >
> > -	switch (clamp_id) {
> > -	case UCLAMP_MIN: {
> > -		struct uclamp_se uc_min = task_group(p)->uclamp[clamp_id];
> > -		if (uc_req.value < uc_min.value)
> > -			return uc_min;
> > -		break;
> > -	}
> > -	case UCLAMP_MAX: {
> > -		struct uclamp_se uc_max = task_group(p)->uclamp[clamp_id];
> > -		if (uc_req.value > uc_max.value)
> > -			return uc_max;
> > -		break;
> > -	}
> > -	default:
> > -		WARN_ON_ONCE(1);
> > -		break;
> > -	}
> > +	tg_min = task_group(p)->uclamp[UCLAMP_MIN].value;
> > +	tg_max = task_group(p)->uclamp[UCLAMP_MAX].value;
> > +	value = uc_eff.value;
> > +	value = clamp(value, tg_min, tg_max);
> > +	uclamp_se_set(&uc_eff, value, false);
>
> Is it reasonable to set user_defined to be false here?

Yep, it doesn't really matter for the effective value. It matters for the
actual task request.

> >  #endif
> >
> > -	return uc_req;
> > +	return uc_eff;
> >  }
> >
> >  /*
> > @@ -1661,8 +1651,7 @@ uclamp_update_active(struct task_struct *p, enum uclamp_id clamp_id)
> >
> >  #ifdef CONFIG_UCLAMP_TASK_GROUP
> >  static inline void
> > -uclamp_update_active_tasks(struct cgroup_subsys_state *css,
> > -			   unsigned int clamps)
> > +uclamp_update_active_tasks(struct cgroup_subsys_state *css)
> >  {
> >  	enum uclamp_id clamp_id;
> >  	struct css_task_iter it;
> > @@ -1670,10 +1659,8 @@ uclamp_update_active_tasks(struct cgroup_subsys_state *css,
> >
> >  	css_task_iter_start(css, 0, &it);
> >  	while ((p = css_task_iter_next(&it))) {
> > -		for_each_clamp_id(clamp_id) {
> > -			if ((0x1 << clamp_id) & clamps)
> > -				uclamp_update_active(p, clamp_id);
> > -		}
> > +		for_each_clamp_id(clamp_id)
> > +			uclamp_update_active(p, clamp_id);
> >  	}
> >  	css_task_iter_end(&it);
> >  }
> > @@ -9626,7 +9613,7 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
> >  	}
> >
> >  	/* Immediately update descendants RUNNABLE tasks */
> > -	uclamp_update_active_tasks(css, clamps);
> > +	uclamp_update_active_tasks(css);
> >  }
> >  }
>
> Would you resend another email? maybe it would be better to resend an
> email with a new subject?

Yeah I will do a proper posting. But I need to stare at this a bit more then
will do.

Thanks

--
Qais Yousef