From: Dietmar Eggemann
To: Peter Zijlstra, Ingo Molnar
Cc: Thomas Gleixner, Frederic Weisbecker, Rik van Riel, Vincent Guittot,
	Morten Rasmussen, Quentin Perret, Valentin Schneider, Patrick Bellasi,
	linux-kernel@vger.kernel.org
Subject: [PATCH 7/7] sched/fair: Rename weighted_cpuload() to cpu_load()
Date: Mon, 27 May 2019 07:21:16 +0100
Message-Id: <20190527062116.11512-8-dietmar.eggemann@arm.com>
In-Reply-To: <20190527062116.11512-1-dietmar.eggemann@arm.com>
References: <20190527062116.11512-1-dietmar.eggemann@arm.com>

This is done to align the per cpu (i.e. per rq) load with the util
counterpart (cpu_util(int cpu)). The term 'weighted' is not needed
since there is no 'unweighted' load to distinguish it from.

Signed-off-by: Dietmar Eggemann
---
 kernel/sched/fair.c | 44 ++++++++++++++++++++------------------------
 1 file changed, 20 insertions(+), 24 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a33f196703a7..f6d0aad13090 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1466,7 +1466,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	       group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4;
 }
 
-static unsigned long weighted_cpuload(struct rq *rq);
+static unsigned long cpu_load(int cpu);
 
 /* Cached statistics for all CPUs within a node */
 struct numa_stats {
@@ -1485,9 +1485,7 @@ static void update_numa_stats(struct numa_stats *ns, int nid)
 
 	memset(ns, 0, sizeof(*ns));
 	for_each_cpu(cpu, cpumask_of_node(nid)) {
-		struct rq *rq = cpu_rq(cpu);
-
-		ns->load += weighted_cpuload(rq);
+		ns->load += cpu_load(cpu);
 		ns->compute_capacity += capacity_of(cpu);
 	}
 
@@ -5334,9 +5332,9 @@ static struct {
 
 #endif /* CONFIG_NO_HZ_COMMON */
 
-static unsigned long weighted_cpuload(struct rq *rq)
+static unsigned long cpu_load(int cpu)
 {
-	return cfs_rq_runnable_load_avg(&rq->cfs);
+	return cfs_rq_runnable_load_avg(&cpu_rq(cpu)->cfs);
 }
 
 static unsigned long capacity_of(int cpu)
@@ -5348,7 +5346,7 @@ static unsigned long cpu_avg_load_per_task(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long nr_running = READ_ONCE(rq->cfs.h_nr_running);
-	unsigned long load_avg = weighted_cpuload(rq);
+	unsigned long load_avg = cpu_load(cpu);
 
 	if (nr_running)
 		return load_avg / nr_running;
@@ -5446,7 +5444,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	unsigned long task_load;
 
-	this_eff_load = weighted_cpuload(cpu_rq(this_cpu));
+	this_eff_load = cpu_load(this_cpu);
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5464,7 +5462,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = weighted_cpuload(cpu_rq(prev_cpu));
+	prev_eff_load = cpu_load(prev_cpu);
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
@@ -5552,7 +5550,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = weighted_cpuload(cpu_rq(i));
+			load = cpu_load(i);
 			runnable_load += load;
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
@@ -5688,7 +5686,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				shallowest_idle_cpu = i;
 			}
 		} else if (shallowest_idle_cpu == -1) {
-			load = weighted_cpuload(cpu_rq(i));
+			load = cpu_load(i);
 			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;
 			}
@@ -7259,8 +7257,8 @@ static struct task_struct *detach_one_task(struct lb_env *env)
 static const unsigned int sched_nr_migrate_break = 32;
 
 /*
- * detach_tasks() -- tries to detach up to imbalance weighted load from
- * busiest_rq, as part of a balancing operation within domain "sd".
+ * detach_tasks() -- tries to detach up to imbalance load from busiest_rq,
+ * as part of a balancing operation within domain "sd".
  *
  * Returns number of detached tasks if successful and 0 otherwise.
  */
@@ -7326,8 +7324,7 @@ static int detach_tasks(struct lb_env *env)
 #endif
 
 		/*
-		 * We only want to steal up to the prescribed amount of
-		 * weighted load.
+		 * We only want to steal up to the prescribed amount of load.
 		 */
 		if (env->imbalance <= 0)
 			break;
@@ -7931,7 +7928,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
 			env->flags |= LBF_NOHZ_AGAIN;
 
-		sgs->group_load += weighted_cpuload(rq);
+		sgs->group_load += cpu_load(i);
 		sgs->group_util += cpu_util(i);
 		sgs->sum_nr_running += rq->cfs.h_nr_running;
 
@@ -8385,8 +8382,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  * find_busiest_group - Returns the busiest group within the sched_domain
  * if there is an imbalance.
  *
- * Also calculates the amount of weighted load which should be moved
- * to restore balance.
+ * Also calculates the amount of load which should be moved to restore balance.
  *
  * @env: The load balancing environment.
 *
@@ -8558,11 +8554,11 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		    rq->nr_running == 1)
 			continue;
 
-		wl = weighted_cpuload(rq);
+		wl = cpu_load(i);
 
 		/*
-		 * When comparing with imbalance, use weighted_cpuload()
-		 * which is not scaled with the CPU capacity.
+		 * When comparing with imbalance, use cpu_load() which is not
+		 * scaled with the CPU capacity.
 		 */
 		if (rq->nr_running == 1 && wl > env->imbalance &&
@@ -8571,9 +8567,9 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 		/*
 		 * For the load comparisons with the other CPU's, consider
-		 * the weighted_cpuload() scaled with the CPU capacity, so
-		 * that the load can be moved away from the CPU that is
-		 * potentially running at a lower capacity.
+		 * the cpu_load() scaled with the CPU capacity, so that the
+		 * load can be moved away from the CPU that is potentially
+		 * running at a lower capacity.
 		 *
 		 * Thus we're looking for max(wl_i / capacity_i), crosswise
 		 * multiplication to rid ourselves of the division works out
-- 
2.17.1
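
For illustration only, a minimal stand-alone sketch of what the interface change amounts to for callers; the types, cpu_rq() and cfs_rq_runnable_load_avg() below are simplified stand-ins for the kernel definitions, not the real implementations:

/*
 * Stand-alone sketch, not kernel code: the structs and helpers here only
 * mimic the shape of the kernel interfaces touched by this patch.
 */
#include <stdio.h>

struct cfs_rq { unsigned long runnable_load_avg; };
struct rq { struct cfs_rq cfs; };

static struct rq runqueues[4];	/* stand-in for the per-CPU runqueues */

static struct rq *cpu_rq(int cpu) { return &runqueues[cpu]; }

static unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
{
	return cfs_rq->runnable_load_avg;
}

/* Before: the caller resolves the rq and passes it in. */
static unsigned long weighted_cpuload(struct rq *rq)
{
	return cfs_rq_runnable_load_avg(&rq->cfs);
}

/* After: a CPU id is passed, mirroring cpu_util(int cpu); the rq lookup moves inside. */
static unsigned long cpu_load(int cpu)
{
	return cfs_rq_runnable_load_avg(&cpu_rq(cpu)->cfs);
}

int main(void)
{
	runqueues[1].cfs.runnable_load_avg = 1024;

	printf("old: %lu\n", weighted_cpuload(cpu_rq(1)));	/* caller does the rq lookup */
	printf("new: %lu\n", cpu_load(1));			/* lookup hidden in the helper */
	return 0;
}

Both helpers return the same runnable load; only the call signature changes, which is what the diff above applies across all call sites.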