From: Patrick Bellasi
To: linux-kernel@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra,
Wysocki" , Vincent Guittot , Viresh Kumar , Douglas Raillard , Quentin Perret , Dietmar Eggemann , Morten Rasmussen , Juri Lelli Subject: [PATCH] sched/fair: util_est: fast ramp-up EWMA on utilization increases Date: Thu, 20 Jun 2019 16:05:55 +0100 Message-Id: <20190620150555.15717-1-patrick.bellasi@arm.com> X-Mailer: git-send-email 2.21.0 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The estimated utilization for a task is currently defined based on: - enqueued: the utilization value at the end of the last activation - ewma: an exponential moving average which samples are the enqueued values According to this definition, when a task suddenly change it's bandwidth requirements from small to big, the EWMA will need to collect multiple samples before converging up to track the new big utilization. Moreover, after the PELT scale invariance update [1], in the above scenario we can see that the utilization of the task has a significant drop from the first big activation to the following one. That's implied by the new "time-scaling" mechanisms instead of the previous "delta-scaling" approach. Unfortunately, these drops cannot be fully absorbed by the current util_est implementation. Indeed, the low-frequency filtering introduced by the "ewma" is entirely useless while converging up and it does not help in stabilizing sooner the PELT signal. To make util_est do better service in the above scenario, do change its definition to slow down only utilization decreases. Do that by resetting the "ewma" every time the last collected sample increases. This change makes also the default util_est implementation more aligned with the major scheduler behavior, which is to optimize for performance. In the future, this implementation can be further refined to consider task specific hints. [1] sched/fair: Update scale invariance of PELT Message-ID: Signed-off-by: Patrick Bellasi Cc: Ingo Molnar Cc: Peter Zijlstra --- kernel/sched/fair.c | 14 +++++++++++++- kernel/sched/features.h | 1 + 2 files changed, 14 insertions(+), 1 deletion(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 3c11dcdedcbc..27b33caaaaf4 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -3685,11 +3685,22 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep) if (ue.enqueued & UTIL_AVG_UNCHANGED) return; + /* + * Reset EWMA on utilization increases, the moving average is used only + * to smooth utilization decreases. + */ + ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED); + if (sched_feat(UTIL_EST_FASTUP)) { + if (ue.ewma < ue.enqueued) { + ue.ewma = ue.enqueued; + goto done; + } + } + /* * Skip update of task's estimated utilization when its EWMA is * already ~1% close to its last activation value. */ - ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED); last_ewma_diff = ue.enqueued - ue.ewma; if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100))) return; @@ -3722,6 +3733,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep) ue.ewma <<= UTIL_EST_WEIGHT_SHIFT; ue.ewma += last_ewma_diff; ue.ewma >>= UTIL_EST_WEIGHT_SHIFT; +done: WRITE_ONCE(p->se.avg.util_est, ue); } diff --git a/kernel/sched/features.h b/kernel/sched/features.h index 2410db5e9a35..7481cd96f391 100644 --- a/kernel/sched/features.h +++ b/kernel/sched/features.h @@ -89,3 +89,4 @@ SCHED_FEAT(WA_BIAS, true) * UtilEstimation. Use estimated CPU utilization. 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3c11dcdedcbc..27b33caaaaf4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3685,11 +3685,22 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
 	if (ue.enqueued & UTIL_AVG_UNCHANGED)
 		return;
 
+	/*
+	 * Reset EWMA on utilization increases, the moving average is used only
+	 * to smooth utilization decreases.
+	 */
+	ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
+	if (sched_feat(UTIL_EST_FASTUP)) {
+		if (ue.ewma < ue.enqueued) {
+			ue.ewma = ue.enqueued;
+			goto done;
+		}
+	}
+
 	/*
 	 * Skip update of task's estimated utilization when its EWMA is
 	 * already ~1% close to its last activation value.
 	 */
-	ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
 	last_ewma_diff = ue.enqueued - ue.ewma;
 	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
 		return;
@@ -3722,6 +3733,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
 	ue.ewma <<= UTIL_EST_WEIGHT_SHIFT;
 	ue.ewma += last_ewma_diff;
 	ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
+done:
 	WRITE_ONCE(p->se.avg.util_est, ue);
 }
 
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 2410db5e9a35..7481cd96f391 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -89,3 +89,4 @@ SCHED_FEAT(WA_BIAS, true)
  * UtilEstimation. Use estimated CPU utilization.
  */
 SCHED_FEAT(UTIL_EST, true)
+SCHED_FEAT(UTIL_EST_FASTUP, true)
-- 
2.21.0