From: Vincent Donnefort
To: peterz@infradead.org, mingo@redhat.com, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, dietmar.eggemann@arm.com, morten.rasmussen@arm.com, chris.redpath@arm.com, qperret@google.com, Vincent Donnefort
Subject: [PATCH v6 7/7] sched/fair:
Remove the energy margin in feec()
Date: Tue, 26 Apr 2022 10:35:06 +0100
Message-Id: <20220426093506.3415588-8-vincent.donnefort@arm.com>
In-Reply-To: <20220426093506.3415588-1-vincent.donnefort@arm.com>
References: <20220426093506.3415588-1-vincent.donnefort@arm.com>

find_energy_efficient_cpu() integrates a margin to protect tasks from bouncing back and forth between CPUs. This margin is set to 6% of the total current estimated energy of the system. This does not work, for two reasons:

1. The energy estimation is not a good absolute value:

compute_energy(), used in feec(), is a good estimation for task placement because it lets us compare the energy with and without a task: the computed delta gives a good overview of the cost of a given placement. It does not, however, work as an absolute estimation of the total energy of the system. First, it adds the contribution of idle CPUs to the energy; second, it mixes util_avg with util_est values. util_avg reflects the recent history of a CPU's usage; it does not tell what the current utilization is. A system that has been quite busy in the near past will therefore report a very high energy, and hence a high margin, preventing any task migration to a lower-capacity CPU and wasting energy. It even creates a negative feedback loop: by holding tasks on a less efficient CPU, the margin helps keep the energy high.

2.
The margin handicaps small tasks:

On a system where the workload is composed mostly of small tasks (often the case on Android), the overall energy will be high enough to create a margin that none of those tasks can cross. On a Pixel4, a small utilization of 5% on all CPUs creates a global estimated energy of 140 joules, as per the Energy Model declaration of that same device. After applying the 6% margin, this means any migration must save more than 8 joules to happen. No task with a utilization lower than 40 would then be able to migrate away from the biggest CPU of the system.

The 6% of the overall system energy was introduced by commit eb92692b2544 ("sched/fair: Speed-up energy-aware wake-ups"); it was previously 6% of the prev_cpu energy. Commit 8d4c97c105ca ("sched/fair: Only compute base_energy_pd if necessary") then made this margin conditional on the clusters where the task fits.

We could simply revert the margin to its previous form, but the original version didn't have strong grounds either, and as demonstrated in (1.) the estimated energy isn't a good absolute value anyway. Instead, remove the margin completely. This is made possible by recent changes that improved the fairness of energy estimation comparisons ("sched/fair: Remove task_util from effective utilization in feec()", "PM: EM: Increase energy calculation precision") and stabilized task utilization ("sched/fair: Decay task util_avg during migration").

Without a margin, one could fear tasks bouncing between CPUs, but running LISA's eas_behaviour test coverage on three different platforms (Hikey960, RB-5 and DB-845) showed no issue. Removing the energy margin enables more energy-optimized placements and thus a more energy-efficient system.
Signed-off-by: Vincent Donnefort

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a9941299547b..49ac5958aa69 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6848,9 +6848,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
 	unsigned long prev_delta = ULONG_MAX, best_delta = ULONG_MAX;
-	int cpu, best_energy_cpu = prev_cpu, target = -1;
 	struct root_domain *rd = this_rq()->rd;
-	unsigned long base_energy = 0;
+	int cpu, best_energy_cpu, target = -1;
 	struct sched_domain *sd;
 	struct perf_domain *pd;
 	struct energy_env eenv;
@@ -6882,8 +6881,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		unsigned long cpu_cap, cpu_thermal_cap, util;
 		unsigned long cur_delta, max_spare_cap = 0;
 		bool compute_prev_delta = false;
-		unsigned long base_energy_pd;
 		int max_spare_cap_cpu = -1;
+		unsigned long base_energy;
 
 		cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
@@ -6938,16 +6937,15 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 
 		/* Compute the 'base' energy of the pd, without @p */
 		eenv_pd_busy_time(&eenv, cpus, p);
-		base_energy_pd = compute_energy(&eenv, pd, cpus, p, -1);
-		base_energy += base_energy_pd;
+		base_energy = compute_energy(&eenv, pd, cpus, p, -1);
 
 		/* Evaluate the energy impact of using prev_cpu. */
 		if (compute_prev_delta) {
 			prev_delta = compute_energy(&eenv, pd, cpus, p,
 						    prev_cpu);
-			if (prev_delta < base_energy_pd)
+			if (prev_delta < base_energy)
 				goto unlock;
-			prev_delta -= base_energy_pd;
+			prev_delta -= base_energy;
 			best_delta = min(best_delta, prev_delta);
 		}
@@ -6955,9 +6953,9 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 		if (max_spare_cap_cpu >= 0) {
 			cur_delta = compute_energy(&eenv, pd, cpus, p,
 						   max_spare_cap_cpu);
-			if (cur_delta < base_energy_pd)
+			if (cur_delta < base_energy)
 				goto unlock;
-			cur_delta -= base_energy_pd;
+			cur_delta -= base_energy;
 			if (cur_delta < best_delta) {
 				best_delta = cur_delta;
 				best_energy_cpu = max_spare_cap_cpu;
@@ -6966,12 +6964,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	}
 	rcu_read_unlock();
 
-	/*
-	 * Pick the best CPU if prev_cpu cannot be used, or if it saves at
-	 * least 6% of the energy used by prev_cpu.
-	 */
-	if ((prev_delta == ULONG_MAX) ||
-	    (prev_delta - best_delta) > ((prev_delta + base_energy) >> 4))
+	if (best_delta < prev_delta)
 		target = best_energy_cpu;
 
 	return target;
-- 
2.25.1