Subject: Re: [PATCH v3 4/5] sched/pelt: Add a new runnable average signal
From: Vincent Guittot
Date: Fri, 21 Feb 2020 09:56:16 +0100
To: Valentin Schneider
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, linux-kernel, Phil Auld, Parth Shah, Hillf Danton

On Thu, 20 Feb 2020 at 17:11, Valentin Schneider wrote:
>
> On 20/02/2020 14:36, Vincent Guittot wrote:
> > I agree that setting by default to SCHED_CAPACITY_SCALE is too much
> > for little core.
> > The problem for little core can be fixed by using the cpu capacity instead
>
> So that's indeed better for big.LITTLE & co.
> Any reason however for not
> aligning with the initialization of util_avg ?

runnable_avg is the unweighted version of load_avg, so the two should be
in sync at init, and SCHED_CAPACITY_SCALE is in fact the right value.
Using cpu_scale gives the same result for SMP and for big cores, so we
can use it instead.

Then, the initial value of util_avg has never reflected a realistic
utilization for a new task, especially if that task turns out to be a
big one. runnable_avg now balances this effect by saying that we don't
know how the new task will behave: it might end up using all the spare
capacity even though the current utilization is low and the CPU is not
"fully used". In fact, this is exactly the purpose of runnable: to
highlight that there may be no spare capacity even when the CPU's
utilization is low, because of external events like task migrations, or
because of new tasks whose initial utilization is most probably wrong.

That being said, there is a bigger problem with the current version of
this patch: I forgot to use runnable in update_sg_wakeup_stats(). I have
a patch that fixes this.

Also, I have tested both proposals with hackbench on my octo-core, and
using cpu_scale gives slightly better results than util_avg, which
probably reflects the case I mentioned above.

grp   cpu_scale           util_avg            improvement
1     1.191 (+/- 0.77%)   1.204 (+/- 1.16%)   -1.07%
4     1.147 (+/- 1.14%)   1.195 (+/- 0.52%)   -4.21%
8     1.112 (+/- 1.52%)   1.124 (+/- 1.45%)   -1.12%
16    1.163 (+/- 1.72%)   1.169 (+/- 1.58%)   -0.45%

> With the default MC imbalance_pct (117), it takes 875 utilization to make
> a single CPU group (with 1024 capacity) overloaded (group_is_overloaded()).
> For a completely idle CPU, that means forking at least 3 tasks
> (512 + 256 + 128 util_avg).
>
> With your change, it only takes 2 tasks. I know I'm being nitpicky here, but
> I feel like those should be aligned, unless we have a proper argument against

I don't see any reason for keeping them aligned.

> it - in which case this should also appear in the changelog, which so far only
> mentions issues with util_avg migration, not the fork-time initialization.
>
> > @@ -796,6 +794,8 @@ void post_init_entity_util_avg(struct task_struct *p)
> >  		}
> >  	}
> >
> > +	sa->runnable_avg = cpu_scale;
> > +
> >  	if (p->sched_class != &fair_sched_class) {
> >  		/*
> >  		 * For !fair tasks do:
> >>
> >>> 	sa->load_avg = scale_load_down(se->load.weight);
> >>> +	}
> >>>
> >>> 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
> >>> }
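
FWIW, the task-count arithmetic above can be sanity-checked with a toy
user-space snippet. This is a sketch, not kernel code: it open-codes the
two overload conditions as I understand them from this series (the
existing group_is_overloaded() test, capacity * 100 < util *
imbalance_pct, and the runnable-based test added in patch 5/5,
capacity * imbalance_pct < runnable * 100), and it follows Valentin's
512 + 256 + 128 description of fork-time util_avg, i.e. each new task
gets half of the remaining spare capacity. The sum_nr_running >
group_weight precondition is ignored for simplicity.

/*
 * Toy sketch of the overload arithmetic discussed above.
 * Thresholds on a 1024-capacity CPU with imbalance_pct = 117:
 * util > 1024 * 100 / 117 ~= 875, runnable > 1024 * 117 / 100 ~= 1198.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024UL
#define IMBALANCE_PCT		117UL	/* default MC value */

int main(void)
{
	unsigned long cap = SCHED_CAPACITY_SCALE;
	unsigned long util = 0, runnable = 0;
	int nr;

	for (nr = 1; nr <= 3; nr++) {
		/* fork-time util_avg: half of the remaining spare capacity */
		util += (cap - util) / 2;
		/* proposed fork-time runnable_avg: cpu_scale (1024 here) */
		runnable += cap;

		printf("%d task(s): util=%4lu overloaded=%-3s runnable=%4lu overloaded=%s\n",
		       nr, util,
		       (cap * 100 < util * IMBALANCE_PCT) ? "yes" : "no",
		       runnable,
		       (cap * IMBALANCE_PCT < runnable * 100) ? "yes" : "no");
	}
	return 0;
}

With those assumptions it reproduces the difference being discussed: the
util-based check only fires at the third fork (512 + 256 + 128 = 896 >
875), while the runnable-based check already fires at the second
(2048 > 1198).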