From: Vincent Guittot
Date: Tue, 19 May 2020 17:41:19 +0200
Subject: Re: [PATCH v2] sched/pelt: sync util/runnable_sum with PELT window when propagating
To: Dietmar Eggemann
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel

On Tue, 19 May 2020 at 12:28, Dietmar Eggemann wrote:
>
> On 06/05/2020 17:53, Vincent Guittot wrote:
>
> [...]
>
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 02f323b85b6d..df3923a65162 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -3441,52 +3441,46 @@ static inline void
> > update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
> > {
> > 	long delta = gcfs_rq->avg.util_avg - se->avg.util_avg;
> > +	/*
> > +	 * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.
> > +	 * See ___update_load_avg() for details.
> > +	 */
> > +	u32 divider = LOAD_AVG_MAX - 1024 + cfs_rq->avg.period_contrib;
>
> Why not do the assignment (like in update_tg_cfs_load()) after the
> next condition?
> Same question for update_tg_cfs_runnable().

In fact, I expect the compiler to be smart enough to do this in the best
place.

> [...]
>
> > static inline void
> > update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
> > {
> > 	long delta = gcfs_rq->avg.runnable_avg - se->avg.runnable_avg;
> > +	/*
> > +	 * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.
> > +	 * See ___update_load_avg() for details.
> > +	 */
> > +	u32 divider = LOAD_AVG_MAX - 1024 + cfs_rq->avg.period_contrib;
>
> We now have 6 assignments like this in fair.c and 1 in pelt.c. Could
> this not be refactored by using something like this in pelt.h:
>
> +static inline u32 get_divider(struct sched_avg *avg)

That's a good point. I would add "pelt" to the name, like:

static inline u32 get_pelt_divider(struct sched_avg *avg)

> +{
> +	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
> +}
>
> [...]
>
> > diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> > index b647d04d9c8b..1feff80e7e45 100644
> > --- a/kernel/sched/pelt.c
> > +++ b/kernel/sched/pelt.c
> > @@ -237,6 +237,30 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
> > 	return 1;
> > }
> >
> > +/*
> > + * When syncing *_avg with *_sum, we must take into account the current
> > + * position in the PELT segment otherwise the remaining part of the segment
> > + * will be considered as idle time whereas it's not yet elapsed and this will
> > + * generate unwanted oscillation in the range [1002..1024[.
> > + *
> > + * The max value of *_sum varies with the position in the time segment and is
> > + * equals to :
> > + *
> > + *	LOAD_AVG_MAX*y + sa->period_contrib
> > + *
> > + * which can be simplified into:
> > + *
> > + *	LOAD_AVG_MAX - 1024 + sa->period_contrib
> > + *
> > + * because LOAD_AVG_MAX*y == LOAD_AVG_MAX-1024
>
> Isn't this rather '~' instead of '==', even for y^32 = 0.5 ?
>
> 47742 * 0.5^(1/32) ~ 47742 - 1024

With integer precision and the runnable_avg_yN_inv array, you get exactly
1024.

>
> Apart from that, LGTM
>
> Reviewed-by: Dietmar Eggemann