Date: Thu, 5 Apr 2018 16:46:30 +0100
From: Morten Rasmussen
To: Vincent Guittot
Cc: Valentin Schneider, Catalin Marinas, Will Deacon, LAK, linux-kernel,
 Peter Zijlstra, Dietmar Eggemann, Chris Redpath
Subject: Re: [PATCH] sched: support dynamiQ cluster
Message-ID: <20180405154630.GS4589@e105550-lin.cambridge.arm.com>
References: <1522223215-23524-1-git-send-email-vincent.guittot@linaro.org>
 <20180329125324.GR4589@e105550-lin.cambridge.arm.com>
 <74865492-d9a6-649d-d37c-a5a6a8c28f23@arm.com>

On Wed, Apr 04, 2018 at 03:43:17PM +0200, Vincent Guittot wrote:
> On 4 April 2018 at 12:44, Valentin Schneider wrote:
> > Hi,
> >
> > On 03/04/18 13:17, Vincent Guittot wrote:
> >> Hi Valentin,
> >>
> > [...]
> >>>
> >>> I believe ASYM_PACKING behaves better here because the workload is only
> >>> sysbench threads. As stated above, since task utilization is disregarded, I
> >>
> >> It behaves better because it doesn't wait for the task's utilization
> >> to reach a level before assuming the task needs high compute capacity.
> >> The utilization gives an idea of the running time of the task not the
> >> performance level that is needed
> >>
> >
> > That's my point actually. ASYM_PACKING disregards utilization and moves those
> > threads to the big cores ASAP, which is good here because it's just sysbench
> > threads.
> >
> > What I meant was that if the task composition changes, IOW we mix "small"
> > tasks (e.g. periodic stuff) and "big" tasks (performance-sensitive stuff like
> > sysbench threads), we shouldn't assume all of those require to run on a big
> > CPU. The thing is, ASYM_PACKING can't make the difference between those, so
>
> That's the 1st point where I tend to disagree: why big cores are only
> for long running task and periodic stuff can't need to run on big
> cores to get max compute capacity ?
> You make the assumption that only long running tasks need high compute
> capacity. This patch wants to always provide max compute capacity to
> the system and not only long running task

There is no way we can tell whether a periodic or short-running task
requires the compute capacity of a big core based on utilization alone.
The utilization can only tell us if a task could potentially use more
compute capacity, i.e. the utilization approaches the compute capacity
of its current cpu.

How we handle low utilization tasks comes down to how we define
"performance" and whether we care about the cost of "performance" (e.g.
energy consumption).

Placing a low utilization task on a little cpu should always be fine
from a _throughput_ point of view. As long as the cpu has spare cycles
it means that work isn't piling up faster than it can be processed.
However, from a _latency_ (completion time) point of view it might be a
problem, and for latency sensitive tasks I can agree that going for max
capacity might be the better choice.

The misfit patches place tasks based on utilization to ensure that
tasks get the _throughput_ they need if possible. This is in line with
the placement policy we already have in select_task_rq_fair().

We shouldn't forget that what we are discussing here is the default
behaviour when we don't have sufficient knowledge about the tasks in
the scheduler. So we are looking for a reasonable middle-of-the-road
policy that doesn't kill your performance or the battery. If user-space
has its own opinion about performance requirements it is free to use
task affinity to control which cpu the task ends up on and ensure that
the task always gets max capacity.

On top of that we have had interfaces in Android for years to specify
performance requirements for tasks (and task groups) to allow small
tasks to be placed on big cpus and big tasks to be placed on little
cpus depending on their requirements. It is even tied into cpufreq as
well. A lot of effort has gone into Android to get this balance right,
and Patrick is working hard on upstreaming some of those features.
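To make the utilization argument concrete, below is a minimal,
self-contained userspace sketch of the kind of fits-capacity test the
misfit approach relies on. The helper name, the capacity scale and the
~80% margin are illustrative assumptions for this sketch, not the exact
kernel implementation:

#include <stdbool.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024	/* capacity of the biggest cpu */
#define CAPACITY_MARGIN		1280	/* ~80%: util must fit within this share */

static bool task_fits_capacity(unsigned long task_util,
			       unsigned long cpu_capacity)
{
	/*
	 * The task "fits" if its utilization stays below roughly 80% of
	 * the cpu's capacity; otherwise it is a misfit candidate that
	 * should be migrated to a bigger cpu.
	 */
	return cpu_capacity * SCHED_CAPACITY_SCALE >
	       task_util * CAPACITY_MARGIN;
}

int main(void)
{
	/* a little cpu at capacity 446 and a task with utilization 400 */
	printf("fits little: %d\n", task_fits_capacity(400, 446));	/* 0: misfit */
	printf("fits big:    %d\n", task_fits_capacity(400, 1024));	/* 1: fits */
	return 0;
}

Note how the decision needs the task's utilization to have ramped up
before it triggers, which is exactly the behaviour being debated above.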
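And for completeness, pinning a latency sensitive task from user-space
needs nothing beyond the existing affinity syscalls. A minimal sketch,
assuming (purely for illustration) that cpus 4-7 are the big cluster on
the system in question:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(4, &mask);	/* assumption: cpus 4-7 are the big cluster */
	CPU_SET(5, &mask);
	CPU_SET(6, &mask);
	CPU_SET(7, &mask);

	/* pid 0 == the calling task; restrict it to the big cpus only */
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity");

	/* ... run the latency sensitive work here ... */
	return 0;
}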
In the bigger picture, always going for max capacity is not desirable
for a well-configured big.LITTLE system. You would never exploit the
advantage of the little cpus, as you always use big first and only use
little when the bigs are overloaded, at which point having little cpus
at all makes little sense. Vendors build big.LITTLE systems because
they want a better performance/energy trade-off; if they wanted max
capacity always, they would just build big-only systems. If we were
that concerned about latency, DVFS would be a problem too and we would
use nothing but the performance governor.

So seen in the bigger picture I have to disagree that blindly going for
max capacity is the right default policy for big.LITTLE. As soon as we
involve an energy model in the task placement decisions, it definitely
isn't.

Morten