Date: Wed, 5 Jun 2019 10:16:46 +0100
From: Quentin Perret
To: Viresh Kumar
Cc: Ingo Molnar, Peter Zijlstra, linux-kernel@vger.kernel.org, Vincent Guittot
Subject: Re: [PATCH] sched/fair: Introduce fits_capacity()
Message-ID: <20190605091644.w3g7hc7r3eiscz4f@queper01-lin>

Hi Viresh,

On Tuesday 04 Jun 2019 at 12:31:52 (+0530), Viresh Kumar wrote:
> The same formula to check
> utilization against capacity (after considering capacity_margin) is
> already used at 5 different locations.
>
> This patch creates a new macro, fits_capacity(), which can be used from
> all these locations without exposing the details of it and hence
> simplify code.
>
> All the 5 code locations are updated as well to use it.
>
> Signed-off-by: Viresh Kumar
> ---
>  kernel/sched/fair.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 7f8d477f90fe..db3a218b7928 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -102,6 +102,8 @@ int __weak arch_asym_cpu_priority(int cpu)
>   * (default: ~20%)
>   */
>  static unsigned int capacity_margin = 1280;
> +
> +#define fits_capacity(cap, max)	((cap) * capacity_margin < (max) * 1024)
>  #endif
>
>  #ifdef CONFIG_CFS_BANDWIDTH
> @@ -3727,7 +3729,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
>
>  static inline int task_fits_capacity(struct task_struct *p, long capacity)
>  {
> -	return capacity * 1024 > task_util_est(p) * capacity_margin;
> +	return fits_capacity(task_util_est(p), capacity);
>  }
>
>  static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
> @@ -5143,7 +5145,7 @@ static inline unsigned long cpu_util(int cpu);
>
>  static inline bool cpu_overutilized(int cpu)
>  {
> -	return (capacity_of(cpu) * 1024) < (cpu_util(cpu) * capacity_margin);
> +	return !fits_capacity(cpu_util(cpu), capacity_of(cpu));

This ...

>  }
>
>  static inline void update_overutilized_status(struct rq *rq)
> @@ -6304,7 +6306,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>  		/* Skip CPUs that will be overutilized. */
>  		util = cpu_util_next(cpu, p, cpu);
>  		cpu_cap = capacity_of(cpu);
> -		if (cpu_cap * 1024 < util * capacity_margin)
> +		if (!fits_capacity(util, cpu_cap))

... and this isn't _strictly_ equivalent to the existing code (the boundary
case where util * capacity_margin == capacity * 1024 now counts as
overutilized, where it previously didn't), but I guess we can live with the
difference :-)

>  			continue;
>
>  		/* Always use prev_cpu as a candidate. */
> @@ -7853,8 +7855,7 @@ group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
>  static inline bool
>  group_smaller_min_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
>  {
> -	return sg->sgc->min_capacity * capacity_margin <
> -				ref->sgc->min_capacity * 1024;
> +	return fits_capacity(sg->sgc->min_capacity, ref->sgc->min_capacity);
>  }
>
>  /*
> @@ -7864,8 +7865,7 @@ group_smaller_min_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
>  static inline bool
>  group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
>  {
> -	return sg->sgc->max_capacity * capacity_margin <
> -				ref->sgc->max_capacity * 1024;
> +	return fits_capacity(sg->sgc->max_capacity, ref->sgc->max_capacity);
>  }
>
>  static inline enum
> --
> 2.21.0.rc0.269.g1a574e7a288b

Also, since we're talking about making the capacity_margin code more
consistent, one small thing I had in mind: we have a capacity margin in
sugov too, which happens to be 1.25 as well (see map_util_freq()).

Conceptually, capacity_margin in fair.c and the sugov margin are both about
answering "do I have enough CPU capacity to serve X of util, or do I need
more?". So perhaps we should factor out the capacity_margin code some more
and use it in both places in a consistent way? This could be done in a
separate patch, though.

Thanks,
Quentin
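
Below is a minimal standalone sketch (plain userspace C, not part of the
patch; the capacity and utilization values are made up for the example) of
the non-equivalence noted above: negating fits_capacity() also treats the
exact boundary, util * capacity_margin == capacity * 1024, as overutilized,
while the original strict comparison did not.

#include <stdio.h>

/*
 * Standalone illustration (not kernel code) of the boundary case where
 * the old cpu_overutilized() expression and !fits_capacity() disagree.
 */
#define capacity_margin	1280UL
#define fits_capacity(cap, max)	((cap) * capacity_margin < (max) * 1024UL)

int main(void)
{
	unsigned long capacity = 640;	/* stand-in for capacity_of(cpu) */
	unsigned long util = 512;	/* stand-in for cpu_util(cpu); exactly 80% of 640 */

	/* Old check: overutilized only when strictly above the threshold. */
	int old_check = (capacity * 1024UL) < (util * capacity_margin);

	/* New check: the negation also catches the equality case. */
	int new_check = !fits_capacity(util, capacity);

	printf("old=%d new=%d\n", old_check, new_check);	/* prints old=0 new=1 */
	return 0;
}

Compiled and run, this prints old=0 new=1, i.e. the two expressions only
disagree exactly at that boundary.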
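
And a rough sketch of the overlap with sugov that the last paragraphs refer
to, assuming map_util_freq() still has the freq + (freq >> 2) form (mirrored
here as a local map_util_freq_like() helper; the utilization, capacity and
frequency numbers are hypothetical): both the 1280/1024 ratio in fair.c and
the freq + freq/4 scaling in schedutil apply the same 25% headroom on top of
the measured utilization.

#include <stdio.h>

/*
 * Userspace approximation of the schedutil scaling: a 1.25x margin,
 * the same ratio as capacity_margin = 1280 over 1024.
 */
static unsigned long map_util_freq_like(unsigned long util,
					unsigned long freq, unsigned long max)
{
	return (freq + (freq >> 2)) * util / max;
}

int main(void)
{
	unsigned long util = 800;		/* hypothetical utilization */
	unsigned long max_cap = 1024;		/* hypothetical max capacity */
	unsigned long max_freq = 2000000;	/* hypothetical max freq, kHz */

	/* fair.c view: capacity needed before util stops "fitting". */
	printf("capacity threshold: %lu\n", util * 1280UL / 1024UL);

	/* sugov view: frequency requested for the same util. */
	printf("frequency request : %lu kHz\n",
	       map_util_freq_like(util, max_freq, max_cap));
	return 0;
}

With the numbers above, both paths scale 800 units of util by 1.25 (a
capacity threshold of 1000 and a request of 1953125 kHz), which is the
duplication a shared helper could remove.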