Date: Mon, 5 Nov 2018 14:58:54 +0000
From: Morten Rasmussen
To: Vincent Guittot
Cc: Dietmar Eggemann, Peter Zijlstra, Ingo Molnar, linux-kernel,
    "Rafael J. Wysocki", Patrick Bellasi, Paul Turner, Ben Segall,
    Thara Gopinath, pkondeti@codeaurora.org
Subject: Re: [PATCH v5 2/2] sched/fair: update scale invariance of PELT
Message-ID: <20181105145854.GA6401@e105550-lin.cambridge.arm.com>
References: <1540570303-6097-1-git-send-email-vincent.guittot@linaro.org>
 <1540570303-6097-3-git-send-email-vincent.guittot@linaro.org>
In-Reply-To:
User-Agent: Mutt/1.5.24 (2015-08-30)
List-ID: linux-kernel@vger.kernel.org

On Mon, Nov 05, 2018 at 10:10:34AM +0100, Vincent Guittot wrote:
> On Fri, 2 Nov 2018 at 16:36, Dietmar Eggemann wrote:
> >
> > On 10/26/18 6:11 PM, Vincent Guittot wrote:
> > > The current implementation of load tracking invariance scales the
> > > contribution with the current frequency and uarch performance
> > > (only for utilization) of the CPU. One main result of this formula
> > > is that the figures are capped by the current capacity of the CPU.
> > > Another one is that the load_avg is not invariant because it is
> > > not scaled with uarch.
> > >
> > > The util_avg of a periodic task that runs r time slots every p
> > > time slots varies in the range:
> > >
> > >   U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)
> > >
> > > where U is the max util_avg value = SCHED_CAPACITY_SCALE.
> > >
> > > At a lower capacity, the range becomes:
> > >
> > >   U * C * (1-y^r')/(1-y^p) * y^i' < Utilization < U * C * (1-y^r')/(1-y^p)
> > >
> > > with C reflecting the compute capacity ratio between the current
> > > capacity and the max capacity.
> > >
> > > So C tries to compensate for changes in (1-y^r'), but it can't be
> > > accurate.
> > >
> > > Instead of scaling the contribution value of the PELT algorithm,
> > > we should scale the running time.
> > > The PELT signal aims to track the amount of computation done by
> > > tasks and/or rqs, so it seems more correct to scale the running
> > > time to reflect the effective amount of computation done since the
> > > last update.
> > >
> > > In order to be fully invariant, we need to apply the same amount
> > > of running time and idle time whatever the current capacity.
> > > Because running at lower capacity implies that the task will run
> > > longer, we have to ensure that the same amount of idle time will
> > > be applied when the system becomes idle and no idle time has been
> > > "stolen". But reaching the maximum utilization value
> > > (SCHED_CAPACITY_SCALE) means that the task is seen as an
> > > always-running task whatever the capacity of the CPU (even at max
> > > compute capacity). In this case, we can discard these "stolen"
> > > idle times, which become meaningless.
> > >
> > > In order to achieve this time scaling, a new clock_pelt is created
> > > per rq. The increase of this clock scales with the current
> > > capacity when something is running on the rq and synchronizes with
> > > clock_task when the rq is idle. With this mechanism, we ensure the
> > > same running and idle time whatever the current capacity.
> >
> > Thinking about this new approach on a big.LITTLE platform:
> >
> > CPU capacities big: 1024, LITTLE: 512, performance CPUfreq governor
> >
> > A 50% (runtime/period) task on a big CPU will become an
> > always-running task on the little CPU. The utilization signal of the
> > task and of the cfs_rq of the little CPU converges to 1024.
> >
> > With contrib scaling, the utilization signal of the 50% task
> > converges to 512 on the little CPU, even though it is always running
> > on it, and so does the one of the cfs_rq.
> >
> > Two 25% tasks on a big CPU will become two 50% tasks on a little
> > CPU. The utilization signal of the tasks converges to 512 and the
> > one of the cfs_rq of the little CPU converges to 1024.
> >
> > With contrib scaling, the utilization signal of the 25% tasks
> > converges to 256 on the little CPU, even though they each run 50% on
> > it, and the one of the cfs_rq converges to 512.
> >
> > So what do we consider system-wide invariance? I thought that e.g. a
> > 25% task should have a utilization value of 256 no matter on which
> > CPU it is running?
> >
> > In both cases, the little CPU is not going idle whereas the big CPU
> > does.
>
> IMO, the key point here is that there is no idle time. As soon as
> there is no idle time, you don't know if a task has enough compute
> capacity, so you can't tell the difference between a 50% running task
> and an always-running task on the little core.
> It's also interesting to notice that the task will reach the
> always-running state after more than 600ms on the little core with
> utilization starting from 0.
>
> Then, considering system-wide invariance, the tasks are not really
> invariant. If we take a 50% running task that runs 40ms in a period of
> 80ms, the max utilization of the task will be 721 on the big core and
> 512 on the little core.
> Then, if you take a 39ms running task instead, the utilization on the
> big core will reach 709 but it will be 507 on the little core. So the
> utilization depends on the current capacity.
> With the new proposal, the max utilization will be 709 on both big and
> little cores for the 39ms running task. For the 40ms running task, the
> utilization will be 721 on the big core. Then, if the task moves to
> the little core, it will reach the value 721 after 80ms, then 900
> after more than 160ms, and 1000 after 320ms.

It has always been debatable what to do with utilization when there are
no spare cycles. In Dietmar's example, where two 25% tasks are put on a
512 (50%) capacity CPU, we add just enough utilization to have no spare
cycles left. One could argue that 25% is still the correct utilization
for those tasks.
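Vincent's 721 and 709 figures can be reproduced from the closed-form bound quoted at the top of the thread, U * (1-y^r)/(1-y^p). Here is a minimal Python sketch of that arithmetic; the constants U = SCHED_CAPACITY_SCALE = 1024 and y = 2^(-1/32) (a 32ms half-life over 1ms PELT windows) match the kernel's PELT implementation, while the helper name is invented for illustration:

```python
# Peak PELT utilization of a periodic task running r ms out of every p ms:
#   U * (1 - y^r) / (1 - y^p)
# U = SCHED_CAPACITY_SCALE; y is chosen so that y^32 = 0.5 (32 ms half-life).
U = 1024
y = 2 ** (-1 / 32)

def peak_util(r_ms, p_ms):
    """Upper bound of util_avg for a task running r_ms out of every p_ms."""
    return U * (1 - y ** r_ms) / (1 - y ** p_ms)

print(round(peak_util(40, 80)))  # 40 ms / 80 ms task -> 721
print(round(peak_util(39, 80)))  # 39 ms / 80 ms task -> 709
```

The little-core value of 507 for the 39ms task follows from the lower-capacity form of the same bound, C * U * (1-y^r')/(1-y^p) with C = 0.5 and the running time stretched to r' = 78ms.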
However, we only know their true utilization because they just ran
unconstrained on a higher-capacity CPU. Once they are on the 512
capacity CPU, we wouldn't know if the tasks grew in utilization, as
there are no spare cycles to use.

As I see it, the most fundamental difference between scaling
contribution and scaling time for PELT is the characteristics when CPUs
are over-utilized.

With contribution scaling, the PELT utilization of a task is a
_minimum_ utilization. Regardless of where the task is or was running
(and provided that it doesn't change behaviour), its PELT utilization
will approximate its _minimum_ utilization on an idle 1024 capacity
CPU.

With time scaling, the PELT utilization doesn't really have a meaning
on its own. It has to be compared to the capacity of the CPU where it
is or was running to know what its current PELT utilization means. When
the utilization overshoots the capacity, its value no longer represents
utilization; it just means that the task has a higher compute demand
than is offered on its current CPU, and a higher value means that it
has been suffering longer. It can't be used to predict the actual
utilization on an idle 1024 capacity CPU any better than
contribution-scaled PELT utilization can.

This change might not be a showstopper, but it is something to be aware
of and to take into account wherever PELT utilization is used.

Morten
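The contrast drawn above between the two scalings can be sketched with a toy per-millisecond simulation. This is an editor's sketch under simplifying assumptions, not the kernel's implementation: the function names are invented, and the time-scaled update uses the closed-form decay for an always-busy rq with the PELT clock advancing at cap/1024 of wall time:

```python
Y = 2 ** (-1 / 32)   # PELT decay per 1 ms window (32 ms half-life)
SCALE = 1024         # SCHED_CAPACITY_SCALE

def contrib_scaled(wall_ms, cap, util=0.0):
    # Contribution scaling: every running ms contributes cap * (1 - Y),
    # so an always-running task saturates at the CPU's capacity.
    for _ in range(wall_ms):
        util = util * Y + cap * (1 - Y)
    return util

def time_scaled(wall_ms, cap, util=0.0):
    # Time scaling: the per-rq PELT clock advances at cap/SCALE of wall
    # time while busy, but the contribution itself is unscaled, so with
    # no idle time the utilization keeps converging toward SCALE.
    scaled_ms = wall_ms * cap / SCALE
    return SCALE - (SCALE - util) * Y ** scaled_ms

# Always-running task on a 512-capacity little CPU, starting from 0:
print(round(contrib_scaled(320, 512)))  # capped near 512
print(round(time_scaled(320, 512)))     # well past 512, heading to 1024
```

This is the asymmetry described above: the contribution-scaled value reads directly as a (minimum) utilization on the 1024 scale, while the time-scaled value is only meaningful relative to the 512 capacity it was accumulated against.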