From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, "Rafael J. Wysocki", Viresh Kumar,
    Vincent Guittot, Dietmar Eggemann, Morten Rasmussen, Juri Lelli,
    Joel Fernandes, Steve Muckle, Todd Kjos
Subject: [PATCH 2/2] sched/fair: util_est: add running_sum tracking
Date: Mon, 4 Jun 2018 17:06:00 +0100
Message-Id: <20180604160600.22052-3-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180604160600.22052-1-patrick.bellasi@arm.com>
References: <20180604160600.22052-1-patrick.bellasi@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

The estimated utilization of a task is affected by the task being
preempted, either by another FAIR task or by a task of a higher-priority
class (i.e. RT or DL). Indeed, when a preemption happens, the PELT
utilization of the preempted task is decayed a bit. That is correct for
utilization, whose goal is to measure the actual CPU bandwidth consumed
by a task.

However, this behavior does not tell us exactly how much utilization a
task "would have used" had it been running without being preempted. This
reduces the effectiveness of util_est, because it does not always allow
us to predict how much CPU a task is likely to require.

Let's improve the estimated utilization by adding a new "sort-of" PELT
signal, defined explicitly only for sched entities (SEs), with the
following behavior:

 a) at each enqueue time of a task, its value is the (already decayed)
    util_avg of the task being enqueued

 b) it's updated at each update_load_avg()

 c) it can only increase, while the task is actually RUNNING on a CPU,
    and it is kept stable while the task is RUNNABLE but not actively
    consuming CPU bandwidth

Such a signal is exactly equivalent to util_avg for a task running alone
on a CPU while, in case the task is preempted, it tells us at dequeue
time how much the task's utilization would have been had it been running
alone on that CPU.

This new signal is named "running_avg", since it tracks the actual
RUNNING time of a task while ignoring any form of preemption.

From an implementation standpoint, since struct sched_avg should fit
into a single cache line, we save space by tracking only a new running
sum:

   p->se.avg.running_sum

while the conversion into a running_avg is done on demand, whenever we
need it, which is at task dequeue time when a new util_est sample has to
be collected.

The conversion from "running_sum" to "running_avg" is done with a single
division by LOAD_AVG_MAX, which introduces a small error since the
division does not consider the (sa->period_contrib - 1024) compensation
factor used in ___update_load_avg(). However:

 a) this error is expected to be limited (~2-3%)

 b) it can safely be ignored, since the estimated utilization is the
    only consumer and it is already subject to small estimation errors

The corresponding benefit is that, at run time, we pay only the cost of
an additional add and multiply, while the more expensive division is
required only at dequeue time.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Vincent Guittot
Cc: Juri Lelli
Cc: Todd Kjos
Cc: Joel Fernandes
Cc: Steve Muckle
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 16 ++++++++++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9d8732dab264..2bd5f1c68da9 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -399,6 +399,7 @@ struct sched_avg {
 	u64			load_sum;
 	u64			runnable_load_sum;
 	u32			util_sum;
+	u32			running_sum;
 	u32			period_contrib;
 	unsigned long		load_avg;
 	unsigned long		runnable_load_avg;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f74441be3f44..5d54d6a4c31f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3161,6 +3161,8 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
 		sa->runnable_load_sum =
 			decay_load(sa->runnable_load_sum, periods);
 		sa->util_sum = decay_load((u64)(sa->util_sum), periods);
+		if (running)
+			sa->running_sum = decay_load(sa->running_sum, periods);
 
 		/*
 		 * Step 2
@@ -3176,8 +3178,10 @@ accumulate_sum(u64 delta, int cpu, struct sched_avg *sa,
 		sa->load_sum += load * contrib;
 	if (runnable)
 		sa->runnable_load_sum += runnable * contrib;
-	if (running)
+	if (running) {
 		sa->util_sum += contrib * scale_cpu;
+		sa->running_sum += contrib * scale_cpu;
+	}
 
 	return periods;
 }
@@ -3963,6 +3967,12 @@ static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
 	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
 }
 
+static inline void util_est_enqueue_running(struct task_struct *p)
+{
+	/* Initialize the (non-preempted) utilization */
+	p->se.avg.running_sum = p->se.avg.util_sum;
+}
+
 /*
  * Check if a (signed) value is within a specified (unsigned) margin,
  * based on the observation that:
@@ -4018,7 +4028,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
 	 * Skip update of task's estimated utilization when its EWMA is
 	 * already ~1% close to its last activation value.
 	 */
-	ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
+	ue.enqueued = p->se.avg.running_sum / LOAD_AVG_MAX;
 	last_ewma_diff = ue.enqueued - ue.ewma;
 	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
 		return;
@@ -5437,6 +5447,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!se)
 		add_nr_running(rq, 1);
 
+	util_est_enqueue_running(p);
+
 	hrtick_update(rq);
 }
-- 
2.15.1