From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Quentin Perret,
    "Peter Zijlstra (Intel)", Vincent Guittot, Linus Torvalds,
    Thomas Gleixner, dietmar.eggemann@arm.com, morten.rasmussen@arm.com,
    patrick.bellasi@arm.com, Ingo Molnar, Sasha Levin
Subject: [PATCH 4.14 137/173] sched/fair: Fix util_avg of new tasks for asymmetric systems
Date: Mon, 24 Sep 2018 13:52:51 +0200
Message-Id: <20180924113125.732139449@linuxfoundation.org>
X-Mailer: git-send-email 2.19.0
In-Reply-To: <20180924113114.334025954@linuxfoundation.org>
References: <20180924113114.334025954@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Quentin Perret

[ Upstream commit 8fe5c5a937d0f4e84221631833a2718afde52285 ]

When a new task wakes up for the first time, its initial utilization
is set to half of the spare capacity of its CPU. The current
implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
directly as a capacity reference. As a result, on a big.LITTLE system, a
new task waking up on an idle little CPU will be given ~512 of util_avg,
even if the CPU's capacity is significantly less than that.

Fix this by computing the spare capacity with arch_scale_cpu_capacity().

Signed-off-by: Quentin Perret
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Vincent Guittot
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: dietmar.eggemann@arm.com
Cc: morten.rasmussen@arm.com
Cc: patrick.bellasi@arm.com
Link: http://lkml.kernel.org/r/20180612112215.25448-1-quentin.perret@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 kernel/sched/fair.c |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -757,11 +757,12 @@ static void attach_entity_cfs_rq(struct
  * To solve this problem, we also cap the util_avg of successive tasks to
  * only 1/2 of the left utilization budget:
  *
- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
+ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
  *
- * where n denotes the nth task.
+ * where n denotes the nth task and cpu_scale the CPU capacity.
  *
- * For example, a simplest series from the beginning would be like:
+ * For example, for a CPU with 1024 of capacity, a simplest series from
+ * the beginning would be like:
  *
  *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
  * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
@@ -773,7 +774,8 @@ void post_init_entity_util_avg(struct sc
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
+	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
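
For reference, the arithmetic change can be reproduced outside the kernel.
The stand-alone C sketch below mirrors the old and new initial-utilization
computation under stated assumptions: the capacity value 446 for a little
CPU and the helper example_cpu_capacity() are made up for illustration and
are not the kernel's arch_scale_cpu_capacity() implementation.

#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024L

/*
 * Hypothetical stand-in for arch_scale_cpu_capacity(): returns a
 * normalized CPU capacity. 446 is an assumed value for a little core
 * on a big.LITTLE system; real values come from firmware/devicetree.
 */
static long example_cpu_capacity(int cpu)
{
	return (cpu < 4) ? 446L : SCHED_CAPACITY_SCALE; /* CPUs 0-3 little */
}

int main(void)
{
	long cfs_rq_util_avg = 0;	/* idle CPU: no utilization contributed yet */
	int cpu = 0;			/* a little CPU */

	/* Before the patch: the reference is always SCHED_CAPACITY_SCALE. */
	long cap_old = (SCHED_CAPACITY_SCALE - cfs_rq_util_avg) / 2;

	/* After the patch: the reference is the CPU's own capacity. */
	long cap_new = (example_cpu_capacity(cpu) - cfs_rq_util_avg) / 2;

	printf("initial util_avg before the fix: %ld\n", cap_old); /* 512 */
	printf("initial util_avg after the fix:  %ld\n", cap_new); /* 223 */
	return 0;
}

With these assumptions, the first task waking on the idle little CPU is
given a util_avg of 223 rather than 512, i.e. roughly half of what that
CPU can actually provide instead of half of the full capacity scale.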