From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vincent Donnefort,
 "Peter Zijlstra (Intel)", Ingo Molnar, Dietmar Eggemann,
 Vincent Guittot, Sasha Levin
Subject: [PATCH 5.11 147/342] sched/pelt: Fix task util_est update filtering
Date: Mon, 10 May 2021 12:18:57 +0200
Message-Id: <20210510102014.933276318@linuxfoundation.org>
In-Reply-To: <20210510102010.096403571@linuxfoundation.org>
References: <20210510102010.096403571@linuxfoundation.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Vincent Donnefort

[ Upstream commit b89997aa88f0b07d8a6414c908af75062103b8c9 ]

Being called for each dequeue, util_est reduces the number of its updates
by filtering out the case where the EWMA signal differs from the task
util_avg by less than 1%. This is a problem for a sudden util_avg ramp-up:
due to the decay from a previous high util_avg, the EWMA might now be
close enough to the new util_avg. No update would then happen, leaving
ue.enqueued with an out-of-date value.

Taking both util_est members, EWMA and enqueued, into account for the
filtering ensures an up-to-date value for both.

For now this is only an issue for the trace probe, which might return the
stale value. Functionally it isn't a problem, as the value is always
accessed through max(enqueued, ewma).

This problem has been observed using LISA's UtilConvergence:test_means on
the sd845c board.

No regression observed with Hackbench on sd845c and Perf-bench sched pipe
on hikey/hikey960.
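To make the failure mode easier to see, here is a stand-alone user-space
model of the filter described above. The struct layout, helper names and
the 60/300/305 numbers are invented for the example; this is not the
kernel implementation (which, with this patch, also has a partial-update
path via "goto done" rather than a plain yes/no decision):

/*
 * Illustrative sketch only: model of util_est update filtering.
 * Build with any C compiler; none of this is kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define SCHED_CAPACITY_SCALE	1024
#define UTIL_EST_MARGIN		(SCHED_CAPACITY_SCALE / 100)	/* ~1% */

struct util_est {
	unsigned int enqueued;
	unsigned int ewma;
};

/* Old behaviour: skip the update when only the EWMA is within ~1%. */
static bool skip_update_old(struct util_est ue, unsigned int new_util)
{
	return labs((long)new_util - (long)ue.ewma) < UTIL_EST_MARGIN;
}

/* Patched behaviour: skip only when both members are close to the new sample. */
static bool skip_update_new(struct util_est ue, unsigned int new_util)
{
	return labs((long)new_util - (long)ue.ewma) < UTIL_EST_MARGIN &&
	       labs((long)new_util - (long)ue.enqueued) < UTIL_EST_MARGIN;
}

int main(void)
{
	/* EWMA decayed from an earlier high utilization; enqueued is low. */
	struct util_est ue = { .enqueued = 60, .ewma = 300 };
	unsigned int new_util = 305;	/* sudden ramp-up, lands near the EWMA */

	printf("old filter skips: %d\n", skip_update_old(ue, new_util)); /* 1: enqueued stays stale */
	printf("new filter skips: %d\n", skip_update_new(ue, new_util)); /* 0: enqueued is refreshed */
	return 0;
}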
Signed-off-by: Vincent Donnefort
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Reviewed-by: Dietmar Eggemann
Reviewed-by: Vincent Guittot
Link: https://lkml.kernel.org/r/20210225165820.1377125-1-vincent.donnefort@arm.com
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bbc78794224a..dfb65140eb2d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3959,6 +3959,8 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
 	trace_sched_util_est_cfs_tp(cfs_rq);
 }
 
+#define UTIL_EST_MARGIN (SCHED_CAPACITY_SCALE / 100)
+
 /*
  * Check if a (signed) value is within a specified (unsigned) margin,
  * based on the observation that:
@@ -3976,7 +3978,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 				   struct task_struct *p,
 				   bool task_sleep)
 {
-	long last_ewma_diff;
+	long last_ewma_diff, last_enqueued_diff;
 	struct util_est ue;
 
 	if (!sched_feat(UTIL_EST))
@@ -3997,6 +3999,8 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	if (ue.enqueued & UTIL_AVG_UNCHANGED)
 		return;
 
+	last_enqueued_diff = ue.enqueued;
+
 	/*
 	 * Reset EWMA on utilization increases, the moving average is used only
 	 * to smooth utilization decreases.
@@ -4010,12 +4014,17 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	}
 
 	/*
-	 * Skip update of task's estimated utilization when its EWMA is
+	 * Skip update of task's estimated utilization when its members are
 	 * already ~1% close to its last activation value.
	 */
 	last_ewma_diff = ue.enqueued - ue.ewma;
-	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
+	last_enqueued_diff -= ue.enqueued;
+	if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
+		if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
+			goto done;
+
 		return;
+	}
 
 	/*
 	 * To avoid overestimation of actual task utilization, skip updates if
--
2.30.2
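As an aside for reviewers: the within_margin() calls in the hunks above
rely on folding a signed abs() comparison into a single unsigned compare
(the comment introducing it is cut off by the hunk context). A small
stand-alone check of that identity, assuming the usual form of the trick;
this is an illustrative sketch, not the in-tree helper:

/*
 * Sketch only: brute-force check that, for margin > 0 and no overflow,
 *     abs(value) < margin  <=>  (unsigned)(value + margin - 1) < (2 * margin - 1)
 */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static bool within_margin(int value, int margin)
{
	return (unsigned int)(value + margin - 1) < (unsigned int)(2 * margin - 1);
}

int main(void)
{
	for (int margin = 1; margin <= 64; margin++)
		for (int value = -200; value <= 200; value++)
			assert(within_margin(value, margin) == (abs(value) < margin));
	return 0;
}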