Date: Fri, 21 Jan 2022 10:17:20 +0000
From: Vincent Donnefort
To: Chitti Babu Theegala
Cc: mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, joel@joelfernandes.org,
	linux-arm-msm@vger.kernel.org, quic_lingutla@quicinc.com,
	linux-kernel@vger.kernel.org, quic_rjendra@quicinc.com
Subject: Re: [PATCH] sched/fair: Prefer small idle cores for forkees
References: <20220112143902.13239-1-quic_ctheegal@quicinc.com>

On Thu, Jan 20, 2022 at 10:15:07PM +0530, Chitti Babu Theegala wrote:
> 
> 
> On 1/13/2022 10:05 PM, Vincent Donnefort wrote:
> > On Wed, Jan 12, 2022 at 08:09:02PM +0530, Chitti Babu Theegala wrote:
> > > Newly forked threads don't have any useful utilization data yet and
> > > it's not possible to forecast their impact on energy consumption.
> > > These forkees (though very small, most times) end up waking big
> > > cores from deep sleep for those very small durations.
> > >
> > > Bias all forkees to small cores to prevent waking big cores from deep
> > > sleep to save power.
> >
> > This bias might be interesting for some workloads, but what about the
> > others? (see find_energy_efficient_cpu() comment, which discusses forkees).
> 
> Yes, I agree with the find_energy_efficient_cpu() comment that we don't have
> any useful utilization data yet and hence it is not possible to forecast.
> However, I don't see any point in penalizing power by waking up bigger cores
> which are in a deep sleep state for very small workloads.
> 
> This patch helps lighter workloads during idle conditions from a power POV.
> For active (interactive or heavier) workloads, on most big.LITTLE systems
> these foreground tasks get pulled into gold-affined cpu-sets, where this
> patch would not play any spoilsport. Even for systems with such cpu-sets not
> defined, heavy workloads might need just another 1 or 2 scheduling windows
> for ramping to a better freq or core.

Scheduling windows? I suppose you do not refer to PELT here, so I'm not sure
this argument applies. Besides, CFS always biases toward performance (except
feec(), which does so to a lesser extent).
> > >
> > > Signed-off-by: Chitti Babu Theegala
> > > ---
> > >  kernel/sched/fair.c | 16 +++++++++++-----
> > >  1 file changed, 11 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 6e476f6..d407bbc 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -5976,7 +5976,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
> > >  }
> > >
> > >  static struct sched_group *
> > > -find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu);
> > > +find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu, int sd_flag);
> > >
> > >  /*
> > >   * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
> > > @@ -6063,7 +6063,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> > >  			continue;
> > >  		}
> > >
> > > -		group = find_idlest_group(sd, p, cpu);
> > > +		group = find_idlest_group(sd, p, cpu, sd_flag);
> > >  		if (!group) {
> > >  			sd = sd->child;
> > >  			continue;
> > > @@ -8997,7 +8997,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> > >  static bool update_pick_idlest(struct sched_group *idlest,
> > >  			       struct sg_lb_stats *idlest_sgs,
> > >  			       struct sched_group *group,
> > > -			       struct sg_lb_stats *sgs)
> > > +			       struct sg_lb_stats *sgs,
> > > +			       int sd_flag)
> > >  {
> > >  	if (sgs->group_type < idlest_sgs->group_type)
> > >  		return true;
> > >
> > > @@ -9034,6 +9035,11 @@ static bool update_pick_idlest(struct sched_group *idlest,
> > >  	if (idlest_sgs->idle_cpus > sgs->idle_cpus)
> > >  		return false;
> > >
> > > +	/* Select smaller cpu group for newly woken up forkees */
> > > +	if ((sd_flag & SD_BALANCE_FORK) && (idlest_sgs->idle_cpus &&
> > > +	    !capacity_greater(idlest->sgc->max_capacity, group->sgc->max_capacity)))
> > > +		return false;
> > > +
> >
> > Energy biased placement should probably be applied only when EAS is enabled.
> > It's especially true here: if all CPUs have the same capacity, capacity_greater
> > would always be false. So unless I missed something, we wouldn't let the
> > group_util evaluation happen, would we?
> 
> True. I am uploading a new version of the patch with an EAS enablement check
> in place.

[...]