Date: Wed, 7 Mar 2018 11:38:52 +0100
From: Peter Zijlstra
To: Patrick Bellasi
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ingo Molnar,
    "Rafael J. Wysocki", Viresh Kumar, Vincent Guittot, Paul Turner,
    Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos,
    Joel Fernandes, Steve Muckle
Subject: Re: [PATCH v5 4/4] sched/fair: update util_est only on util_avg updates
Message-ID: <20180307103852.GJ25201@hirez.programming.kicks-ass.net>
References: <20180222170153.673-1-patrick.bellasi@arm.com>
 <20180222170153.673-5-patrick.bellasi@arm.com>
In-Reply-To: <20180222170153.673-5-patrick.bellasi@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Feb 22, 2018 at 05:01:53PM +0000, Patrick Bellasi wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8364771f7301..1bf9a86ebc39 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3047,6 +3047,29 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
> 	}
> }
> 
> +/*
> + * When a task is dequeued, its estimated utilization should not be updated if
> + * its util_avg has not been updated at least once.
> + * This flag is used to synchronize util_avg updates with util_est updates.
> + * We map this information into the LSB bit of the utilization saved at
> + * dequeue time (i.e. util_est.dequeued).
> + */
> +#define UTIL_EST_NEED_UPDATE_FLAG 0x1
> +
> +static inline void cfs_se_util_change(struct sched_avg *avg)
> +{

	if (!sched_feat(UTIL_EST))
		return;

> +	if (sched_feat(UTIL_EST)) {
> +		struct util_est ue = READ_ONCE(avg->util_est);
> +
> +		if (!(ue.enqueued & UTIL_EST_NEED_UPDATE_FLAG))
> +			return;
> +
> +		/* Reset flag to report util_avg has been updated */
> +		ue.enqueued &= ~UTIL_EST_NEED_UPDATE_FLAG;
> +		WRITE_ONCE(avg->util_est, ue);
> +	}

and lose the indent. Also, since we only update the enqueued value, we
don't need to load/store the entire util_est thing here.

> +}
> +
> #ifdef CONFIG_SMP
> /*
> * Approximate:
> @@ -3308,6 +3331,7 @@ __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se)
> 			cfs_rq->curr == se)) {
> 
> 		___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
> +		cfs_se_util_change(&se->avg);
> 		return 1;
> 	}
> 

So we only clear the bit for @se updates.

> @@ -5218,7 +5242,7 @@ static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> 
> 	/* Update root cfs_rq's estimated utilization */
> 	enqueued = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> -	enqueued += _task_util_est(p);
> +	enqueued += (_task_util_est(p) | 0x1);

UTIL_EST_NEED_UPDATE_FLAG, although I do agree that 0x1 is much easier
to type ;-)

But you set it for the cfs_rq value ?! That doesn't seem right.

> 	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
> }
> 
> @@ -5310,7 +5334,7 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
> 	if (cfs_rq->nr_running) {
> 		ue.enqueued = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> 		ue.enqueued -= min_t(unsigned int, ue.enqueued,
> -				     _task_util_est(p));
> +				     (_task_util_est(p) | UTIL_EST_NEED_UPDATE_FLAG));
> 	}
> 	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued);
> 

Would it be really horrible if you separate the value and flag using a
bitfield/shifts?
> @@ -5321,12 +5345,19 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
> 	if (!task_sleep)
> 		return;
> 
> +	/*
> +	 * Skip update of task's estimated utilization if the PELT signal has
> +	 * never been updated (at least once) since last enqueue time.
> +	 */
> +	ue = READ_ONCE(p->se.avg.util_est);
> +	if (ue.enqueued & UTIL_EST_NEED_UPDATE_FLAG)
> +		return;
> +
> 	/*
> 	 * Skip update of task's estimated utilization when its EWMA is
> 	 * already ~1% close to its last activation value.
> 	 */
> +	ue.enqueued = (task_util(p) | UTIL_EST_NEED_UPDATE_FLAG);
> 	last_ewma_diff = ue.enqueued - ue.ewma;
> 	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
> 		return;

I see what you do, but Yuck! that's really nasty. Then again, I've not
actually got a better suggestion.