From: Vincent Donnefort <vincent.donnefort@arm.com>
To: peterz@infradead.org,
	mingo@redhat.com,
	vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org,
	dietmar.eggemann@arm.com,
	morten.rasmussen@arm.com,
	chris.redpath@arm.com,
	qperret@google.com,
	tao.zhou@linux.dev,
	Vincent Donnefort <vincent.donnefort@arm.com>
Subject: [PATCH v8 7/7] sched/fair: Remove the energy margin in feec()
Date: Fri, 29 Apr 2022 15:11:48 +0100
Message-Id: <20220429141148.181816-8-vincent.donnefort@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220429141148.181816-1-vincent.donnefort@arm.com>
References: <20220429141148.181816-1-vincent.donnefort@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

find_energy_efficient_cpu() integrates a margin to protect tasks from
bouncing back and forth from one CPU to another. This margin is set to 6%
of the total energy currently estimated for the system. This, however,
does not work for two reasons:

1. The energy estimation is not a good absolute value:

   compute_energy(), used in feec(), is a good estimation for task
   placement as it allows comparing the energy with and without a task.
   The computed delta gives a good overview of the cost of a given task
   placement. It does not, however, work as an absolute estimation of the
   total energy of the system. First, it adds the contribution of idle
   CPUs into the energy; second, it mixes util_avg with util_est values.
   util_avg reflects the recent history of a CPU's usage; it does not
   tell what the current utilization is. A system that has been quite
   busy in the recent past will therefore report a very high energy, and
   hence a high margin, preventing any task migration to a lower-capacity
   CPU and wasting energy. It even creates a negative feedback loop: by
   holding the tasks on a less efficient CPU, the margin contributes to
   keeping the energy high.

2. The margin handicaps small tasks:

   On a system where the workload is composed mostly of small tasks
   (which is often the case on Android), the overall energy will be high
   enough to create a margin that none of those tasks can cross. On a
   Pixel4, a small utilization of 5% on all the CPUs creates a global
   estimated energy of 140 joules, as per the Energy Model declaration of
   that same device. This means, after applying the 6% margin, that any
   migration must save more than 8 joules to happen (see the sketch
   below). No task with a utilization lower than 40 would then be able to
   migrate away from the biggest CPU of the system.

The 6% of the overall system energy was introduced by commit eb92692b2544
("sched/fair: Speed-up energy-aware wake-ups"); it was previously 6% of
the prev_cpu energy. Commit 8d4c97c105ca ("sched/fair: Only compute
base_energy_pd if necessary") then made this margin value conditional on
the clusters where the task fits. We could simply revert the margin to
what it was, but the original version didn't have strong grounds either,
and, as demonstrated in (1.), the estimated energy isn't a good absolute
value. Instead, remove it completely.
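To make the numbers above concrete, here is a small standalone sketch (an
illustration only, not part of the patch; the prev_delta/best_delta values
are made up) of the former margin test that the last hunk below removes.
The 6% figure corresponds to the ">> 4" shift (1/16, ~6.25%) in the old
condition, so with a base energy of about 140 J the margin lands just
above 8 J; the prev_delta == ULONG_MAX case (prev_cpu unusable) is left
out for brevity.

#include <stdio.h>

int main(void)
{
	/* Made-up example values, in joules, echoing the Pixel4 case above. */
	unsigned long base_energy = 140;	/* estimated total system energy */
	unsigned long prev_delta  = 10;		/* energy delta of staying on prev_cpu */
	unsigned long best_delta  = 3;		/* energy delta of the best candidate */
	unsigned long margin = (prev_delta + base_energy) >> 4;	/* ~6% */

	/* Old rule: only migrate if the saving beats ~6% of the total energy. */
	if ((prev_delta - best_delta) > margin)
		printf("old rule: migrate (saving %lu J > margin %lu J)\n",
		       prev_delta - best_delta, margin);
	else
		printf("old rule: stay on prev_cpu (saving %lu J <= margin %lu J)\n",
		       prev_delta - best_delta, margin);

	/* New rule: migrate whenever the candidate placement is strictly cheaper. */
	if (best_delta < prev_delta)
		printf("new rule: migrate (saving %lu J)\n", prev_delta - best_delta);

	return 0;
}

Here a 7 J saving is rejected by the old rule (margin of 9 J) but accepted
by the new one, which is the behaviour change this patch is after.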
Removing the margin is made possible by recent changes that improved the
fairness of the energy estimation comparison ("sched/fair: Remove
task_util from effective utilization in feec()", "PM: EM: Increase energy
calculation precision") and stabilized task utilization ("sched/fair:
Decay task util_avg during migration").

Without a margin, we could have feared tasks bouncing between CPUs. But
running LISA's eas_behaviour test coverage on three different platforms
(Hikey960, RB-5 and DB-845) showed no issue.

Removing the energy margin enables more energy-optimized placements for a
more energy-efficient system.

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 83a6eb99d938..97551d7be8a9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6887,9 +6887,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
 	unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
-	int cpu, best_energy_cpu = prev_cpu, target = -1;
 	struct root_domain *rd = this_rq()->rd;
-	unsigned long base_energy = 0;
+	int cpu, best_energy_cpu, target = -1;
 	struct sched_domain *sd;
 	struct perf_domain *pd;
 	struct energy_env eenv;
@@ -6921,8 +6920,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		unsigned long cpu_cap, cpu_thermal_cap, util;
 		unsigned long cur_delta, max_spare_cap = 0;
 		bool compute_prev_delta = false;
-		unsigned long base_energy_pd;
 		int max_spare_cap_cpu = -1;
+		unsigned long base_energy;
 
 		cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
 
@@ -6977,16 +6976,15 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 
 		/* Compute the 'base' energy of the pd, without @p */
 		eenv_pd_busy_time(&eenv, cpus, p);
-		base_energy_pd = compute_energy(&eenv, pd, cpus, p, -1);
-		base_energy += base_energy_pd;
+		base_energy = compute_energy(&eenv, pd, cpus, p, -1);
 
 		/* Evaluate the energy impact of using prev_cpu. */
 		if (compute_prev_delta) {
 			prev_delta = compute_energy(&eenv, pd, cpus, p,
 						    prev_cpu);
-			if (prev_delta < base_energy_pd)
+			if (prev_delta < base_energy)
 				goto unlock;
-			prev_delta -= base_energy_pd;
+			prev_delta -= base_energy;
 			best_delta = min(best_delta, prev_delta);
 		}
 
@@ -6994,9 +6992,9 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		if (max_spare_cap_cpu >= 0) {
 			cur_delta = compute_energy(&eenv, pd, cpus, p,
 						   max_spare_cap_cpu);
-			if (cur_delta < base_energy_pd)
+			if (cur_delta < base_energy)
 				goto unlock;
-			cur_delta -= base_energy_pd;
+			cur_delta -= base_energy;
 			if (cur_delta < best_delta) {
 				best_delta = cur_delta;
 				best_energy_cpu = max_spare_cap_cpu;
@@ -7005,12 +7003,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	}
 	rcu_read_unlock();
 
-	/*
-	 * Pick the best CPU if prev_cpu cannot be used, or if it saves at
-	 * least 6% of the energy used by prev_cpu.
-	 */
-	if ((prev_delta == ULONG_MAX) ||
-	    (prev_delta - best_delta) > ((prev_delta + base_energy) >> 4))
+	if (best_delta < prev_delta)
 		target = best_energy_cpu;
 
 	return target;
-- 
2.25.1