From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Quentin Perret, "Peter Zijlstra (Intel)", Vincent Guittot, Linus Torvalds, Thomas Gleixner, dietmar.eggemann@arm.com, morten.rasmussen@arm.com, patrick.bellasi@arm.com, Ingo Molnar, Sasha Levin
Subject: [PATCH 4.18 189/235] sched/fair: Fix util_avg of new tasks for asymmetric systems
Date: Mon, 24 Sep 2018 13:52:55 +0200
Message-Id: <20180924113123.349498903@linuxfoundation.org>
In-Reply-To: <20180924113103.999624566@linuxfoundation.org>
References: <20180924113103.999624566@linuxfoundation.org>
User-Agent: quilt/0.65
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

4.18-stable review patch.
If anyone has any objections, please let me know.

------------------

From: Quentin Perret

[ Upstream commit 8fe5c5a937d0f4e84221631833a2718afde52285 ]

When a new task wakes up for the first time, its initial utilization is set
to half of the spare capacity of its CPU. The current implementation of
post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE directly as a capacity
reference. As a result, on a big.LITTLE system, a new task waking up on an
idle little CPU will be given ~512 of util_avg, even if the CPU's capacity
is significantly less than that.

Fix this by computing the spare capacity with arch_scale_cpu_capacity().

Signed-off-by: Quentin Perret
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Vincent Guittot
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: dietmar.eggemann@arm.com
Cc: morten.rasmussen@arm.com
Cc: patrick.bellasi@arm.com
Link: http://lkml.kernel.org/r/20180612112215.25448-1-quentin.perret@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 kernel/sched/fair.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct
  * To solve this problem, we also cap the util_avg of successive tasks to
  * only 1/2 of the left utilization budget:
  *
- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
+ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
  *
- * where n denotes the nth task.
+ * where n denotes the nth task and cpu_scale the CPU capacity.
  *
- * For example, a simplest series from the beginning would be like:
+ * For example, for a CPU with 1024 of capacity, a simplest series from
+ * the beginning would be like:
  *
  *   task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
  * cfs_rq util_avg:  512, 768, 896, 960, 992, 1008, 1016, ...
@@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sc
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
+	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {