Date: Thu, 1 Mar 2018 17:42:42 +0000
From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Rafael J. Wysocki, Viresh Kumar,
	Vincent Guittot, Paul Turner, Dietmar Eggemann, Morten Rasmussen,
	Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle
Subject: Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT
Message-ID: <20180301174231.GA26235@e110439-lin>
References: <20180222170153.673-1-patrick.bellasi@arm.com>
 <20180222170153.673-2-patrick.bellasi@arm.com>
In-Reply-To: <20180222170153.673-2-patrick.bellasi@arm.com>

This is missing the #ifdef guards added below; noting them here for the
next respin on the list.

On 22-Feb 17:01, Patrick Bellasi wrote:

[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e1febd252a84..c8526687f107 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5205,6 +5205,23 @@ static inline void hrtick_update(struct rq *rq)
>  }
>  #endif
> 

#ifdef CONFIG_SMP

> +static inline unsigned long task_util(struct task_struct *p);
> +static inline unsigned long _task_util_est(struct task_struct *p);
> +
> +static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> +				    struct task_struct *p)
> +{
> +	unsigned int enqueued;
> +
> +	if (!sched_feat(UTIL_EST))
> +		return;
> +
> +	/* Update root cfs_rq's estimated utilization */
> +	enqueued = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> +	enqueued += _task_util_est(p);
> +	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
> +}
> +

#else
static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
				    struct task_struct *p)
{
}
#endif /* CONFIG_SMP */

>  /*
>   * The enqueue_task method is called before nr_running is
>   * increased. Here we update the fair scheduling stats and
> @@ -5257,9 +5274,86 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  	if (!se)
>  		add_nr_running(rq, 1);
> 
> +	util_est_enqueue(&rq->cfs, p);
>  	hrtick_update(rq);
>  }
> 
> +/*
> + * Check if a (signed) value is within a specified (unsigned) margin,
> + * based on the observation that:
> + *	abs(x) < y := (unsigned)(x + y - 1) < (2 * y - 1)
> + *
> + * NOTE: this only works when value + margin < INT_MAX.
> + */
> +static inline bool within_margin(int value, int margin)
> +{
> +	return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
> +}
> +
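
As an aside, for anyone puzzled by the single unsigned compare above: it
folds the two-sided check -margin < value < margin into one comparison.
A quick user-space sketch (my own illustration, not part of the patch)
brute-forces it against a plain abs() test:

/* margin_test.c: verify within_margin() against abs(value) < margin */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

static inline bool within_margin(int value, int margin)
{
	return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
}

int main(void)
{
	/* margin 10 matches SCHED_CAPACITY_SCALE / 100 for a 1024 scale */
	int margin = 10;
	int value;

	for (value = -1000; value <= 1000; value++) {
		if (within_margin(value, margin) != (abs(value) < margin)) {
			printf("mismatch at value=%d\n", value);
			return 1;
		}
	}
	printf("OK: single compare == abs(value) < %d\n", margin);
	return 0;
}

The trick relies on unsigned wrap-around: any value below -(margin - 1)
wraps to a huge unsigned number and fails the compare, which is also why
value + margin must stay below INT_MAX, as the NOTE says.
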
Wysocki" , Viresh Kumar , Vincent Guittot , Paul Turner , Dietmar Eggemann , Morten Rasmussen , Juri Lelli , Todd Kjos , Joel Fernandes , Steve Muckle Subject: Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT Message-ID: <20180301174231.GA26235@e110439-lin> References: <20180222170153.673-1-patrick.bellasi@arm.com> <20180222170153.673-2-patrick.bellasi@arm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180222170153.673-2-patrick.bellasi@arm.com> User-Agent: Mutt/1.5.24 (2015-08-30) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org This is missing the below #ifdef guards, adding here has a note for the next resping on list. On 22-Feb 17:01, Patrick Bellasi wrote: [...] > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c > index e1febd252a84..c8526687f107 100644 > --- a/kernel/sched/fair.c > +++ b/kernel/sched/fair.c > @@ -5205,6 +5205,23 @@ static inline void hrtick_update(struct rq *rq) > } > #endif > #ifdef CONFIG_SMP > +static inline unsigned long task_util(struct task_struct *p); > +static inline unsigned long _task_util_est(struct task_struct *p); > + > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq, > + struct task_struct *p) > +{ > + unsigned int enqueued; > + > + if (!sched_feat(UTIL_EST)) > + return; > + > + /* Update root cfs_rq's estimated utilization */ > + enqueued = READ_ONCE(cfs_rq->avg.util_est.enqueued); > + enqueued += _task_util_est(p); > + WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued); > +} > + #else static inline void util_est_enqueue(struct cfs_rq *cfs_rq struct task_struct *p) { } #endif /* CONFIG_SMP */ > /* > * The enqueue_task method is called before nr_running is > * increased. Here we update the fair scheduling stats and > @@ -5257,9 +5274,86 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) > if (!se) > add_nr_running(rq, 1); > > + util_est_enqueue(&rq->cfs, p); > hrtick_update(rq); > } > > +/* > + * Check if a (signed) value is within a specified (unsigned) margin, > + * based on the observation that: > + * abs(x) < y := (unsigned)(x + y - 1) < (2 * y - 1) > + * > + * NOTE: this only works when value + maring < INT_MAX. > + */ > +static inline bool within_margin(int value, int margin) > +{ > + return ((unsigned int)(value + margin - 1) < (2 * margin - 1)); > +} > + > +static inline void util_est_dequeue(struct cfs_rq *cfs_rq, > + struct task_struct *p, > + bool task_sleep) > +{ #ifdef CONFIG_SMP > + long last_ewma_diff; > + struct util_est ue; > + > + if (!sched_feat(UTIL_EST)) > + return; > + > + /* > + * Update root cfs_rq's estimated utilization > + * > + * If *p is the last task then the root cfs_rq's estimated utilization > + * of a CPU is 0 by definition. > + */ > + ue.enqueued = 0; > + if (cfs_rq->nr_running) { > + ue.enqueued = READ_ONCE(cfs_rq->avg.util_est.enqueued); > + ue.enqueued -= min_t(unsigned int, ue.enqueued, > + _task_util_est(p)); > + } > + WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued); > + > + /* > + * Skip update of task's estimated utilization when the task has not > + * yet completed an activation, e.g. being migrated. > + */ > + if (!task_sleep) > + return; > + > + /* > + * Skip update of task's estimated utilization when its EWMA is > + * already ~1% close to its last activation value. 
>  static void set_next_buddy(struct sched_entity *se);
> 
>  /*
> @@ -5316,6 +5410,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  	if (!se)
>  		sub_nr_running(rq, 1);
> 
> +	util_est_dequeue(&rq->cfs, p, task_sleep);
>  	hrtick_update(rq);
>  }
> 

-- 
#include <best/regards.h>

Patrick Bellasi