From: Vincent Guittot
Date: Wed, 30 Oct 2019 09:04:44 +0100
Subject: Re: [PATCH v2] sched: rt: Make RT capacity aware
To: Patrick Bellasi
Cc: Qais Yousef, Ingo Molnar, Peter Zijlstra, Steven Rostedt, Juri Lelli,
    Dietmar Eggemann, Ben Segall, Mel Gorman, linux-kernel
In-Reply-To: <20191029203619.GA7607@darkstar>
References: <20191009104611.15363-1-qais.yousef@arm.com>
            <20191029110224.awoi37pdquachqtd@e107158-lin.cambridge.arm.com>
            <20191029114824.2kb4fygxxx72r3in@e107158-lin.cambridge.arm.com>
            <20191029124630.ivfbpenue3fw33qt@e107158-lin.cambridge.arm.com>
            <20191029203619.GA7607@darkstar>

On Tue, 29 Oct 2019 at 21:36, Patrick Bellasi wrote:
>
> Hi Vincent, Qais,
>
> On 29-Oct 13:54, Vincent Guittot wrote:
> > On Tue, 29 Oct 2019 at 13:46, Qais Yousef wrote:
> > > On 10/29/19 13:20, Vincent Guittot wrote:
> > > > > > Making big cores the default CPUs for all RT tasks is not a
> > > > > > minor change, and IMO locality should stay the default
> > > > > > behavior when there is no uclamp constraint.
>
> The default value for the system-wide uclamp.min is 1024 because by
> default _we want_ RT tasks running at MAX_OPP. This means that by
> default RT tasks are running _under constraints_.
>
> The "no uclamp constraints" case you mention requires that you set
> uclamp.min=0 for that task. In that case, this patch will do exactly
> what you ask for: locality is preserved.
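For reference, a per-task uclamp.min can be set with sched_setattr(); below is
a minimal sketch, assuming a kernel built with CONFIG_UCLAMP_TASK and UAPI
headers that already expose the sched_util_* fields (glibc has no wrapper for
this syscall, so it is called directly, and exact header paths vary).

/*
 * Minimal sketch (not part of the patch under discussion): give the
 * calling task an RT policy with uclamp.min = 0 so that CPU selection
 * is not biased towards high-capacity CPUs. SCHED_FIFO needs the usual
 * privileges.
 */
#define _GNU_SOURCE
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/sched.h>        /* SCHED_FIFO, SCHED_FLAG_UTIL_CLAMP_MIN */
#include <linux/sched/types.h>  /* struct sched_attr */

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size           = sizeof(attr);
	attr.sched_policy   = SCHED_FIFO;
	attr.sched_priority = 10;
	attr.sched_flags    = SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = 0;  /* "no uclamp constraint": keep wake-up locality */

	/* pid 0 means the calling task; the last 0 is the syscall's own flags */
	return syscall(SYS_sched_setattr, 0, &attr, 0);
}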
> > > > > How is this affecting locality? The task will always go to the
> > > > > big core, so it should be local.
> > > >
> > > > Local with the waker. You will force the RT task to run on the big
> > > > cluster although the waker, the data and the interrupts can be on a
> > > > little one. So making big cores the default is far from always
> > > > being the best choice.
> > >
> > > This is loaded with assumptions IMO. AFAICT we don't know what's the
> > > best choice.
>
> Agree... more on that after...
>
> > > First, the value of uclamp.min is outside the scope of this patch.
> > > Unless what you're saying is that when uclamp.min is 1024 we should
> > > NOT choose a big CPU, there's no disagreement about what this patch
> > > does. If that's what you're objecting to, please be more specific
> > > about how you see this working instead.
> >
> > My point is that this patch makes the big cores the default CPUs for
> > RT tasks, which is far from being a minor change and far from being
> > an obvious good default choice.
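For context, the CPU-selection change being debated boils down to a
capacity-fitness test along the lines sketched here; this is reconstructed
from the thread rather than copied from the patch, and the function name is
only illustrative.

/*
 * Illustrative sketch, not the patch under review: a wake-up CPU is
 * considered a fit for an RT task when its capacity covers the task's
 * effective uclamp.min.
 */
static bool rt_task_fits_cpu_sketch(unsigned int task_uclamp_min,
				    unsigned int cpu_capacity)
{
	/*
	 * With the default RT uclamp.min of 1024 only the biggest CPUs
	 * pass, so they effectively become the default target; with
	 * uclamp.min = 0 every CPU fits and wake-up locality is kept.
	 */
	return cpu_capacity >= task_uclamp_min;
}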
> Some time ago we agreed that going to MAX_OPP for RT tasks was
> "mandatory". That was definitively a big change, likely much more
> impactful than the one proposed by this patch.
>
> On many mobile devices we ended up pinning RT tasks on LITTLE cores
> (mainly) to save quite a lot of energy by avoiding the case of big
> CPUs randomly spiking to MAX_OPP just because of a small RT task
> waking up on them. We also added some heuristic in schedutil as a
> "band aid" for the effects of the aforementioned choice.
>
> By running RT on LITTLEs there could also be some wakeup latency
> improvement? Yes, maybe... it would be interesting to have some real
> HW *and* SW use-case on hand to compare.
>
> However, we know that RT is all about "latency", but what is a bit
> more fuzzy is the definition of "latency":
>
>  A) wakeup-latency
>     From a scheduler standpoint it's quite often considered as the
>     time it takes to "wake up" a task and actually start executing
>     its instructions.
>
>  B) completion-time
>     From an app standpoint, what quite often matters is the time to
>     complete the task activation and go back to sleep.
>
> Running at MAX_OPP looks much more related to the need to complete
> fast than to wake up fast, especially considering that that decision

You will wake up faster as well when running at MAX_OPP, because
instructions will run faster, or at least as fast. That being said,
running twice as fast doesn't at all mean waking up twice as fast,
but it will be faster for sure, even if the gain can be really small.
Whereas running on a big core with more capacity doesn't mean that
you will wake up faster, because of uarch differences.

I agree that a "long"-running RT task will most probably benefit from
big cores to complete earlier, but that's no longer obvious for a
short one.

> was taken looking mainly (perhaps only) at SMP systems.
>
> On heterogeneous systems, "wakeup-latency" and "completion-time" are
> two metrics which *maybe* can be better served by different cores.
> However, it's very difficult to argue whether one metric is more
> important than the other. It's even more difficult to quantify it
> because of the multitude of HW and SW combinations.

That's the point of my comment: choosing big cores as the default and
always-best choice is far from being obvious. And this patch changes
the default behavior without a study of the impact, apart from stating
that this should be OK.

> Thus, what's proposed here can be far from being an "obvious good
> choice", but it's also quite difficult to argue and prove that it's
> definitively _not_ a good choice. It's just a default which:
>
>  1) keeps things simple for RT tasks by using the same default
>     policy for both frequency and CPU selection:
>     we always run (when possible) at the highest available capacity
>
>  2) is based on a per-task/system-wide tunable mechanism
>
> Is that not enough to justify it as a default?
>
> Best,
> Patrick
>
> --
> #include <best/regards.h>
>
> Patrick Bellasi
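As a back-of-the-envelope illustration of the short-versus-long-task point
above: the numbers in the following snippet are entirely made up, and only
show how a fixed wake-up cost bounds what extra capacity can buy for a short
activation.

/*
 * Toy model with made-up numbers, only to illustrate the argument:
 * completion ~= wakeup overhead + work / relative capacity, where the
 * wakeup overhead is assumed to be largely insensitive to capacity.
 */
#include <stdio.h>

int main(void)
{
	const double wakeup_us  = 30.0;    /* assumed fixed wake-up cost */
	const double cap_little = 0.4;     /* relative capacity at max OPP */
	const double cap_big    = 1.0;
	const double short_us   = 20.0;    /* work, in us at capacity 1.0 */
	const double long_us    = 2000.0;

	printf("short task: LITTLE %.0fus vs big %.0fus\n",
	       wakeup_us + short_us / cap_little,
	       wakeup_us + short_us / cap_big);
	printf("long task:  LITTLE %.0fus vs big %.0fus\n",
	       wakeup_us + long_us / cap_little,
	       wakeup_us + long_us / cap_big);
	return 0;
}

With these arbitrary figures the long activation completes roughly 2.5x
earlier on the big core, while for the short one the absolute gain stays
small compared to the fixed wake-up cost.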