From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Juri Lelli, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Valentin Schneider, Phil Auld,
    Hillf Danton, LKML, Mel Gorman
Subject: [PATCH 05/13] sched/numa: Replace runnable_load_avg by load_avg
Date: Mon, 24 Feb 2020 09:52:15 +0000
Message-Id: <20200224095223.13361-6-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20200224095223.13361-1-mgorman@techsingularity.net>
References: <20200224095223.13361-1-mgorman@techsingularity.net>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Vincent Guittot

Similarly to what has been done for the normal load balancer, we can
replace runnable_load_avg by load_avg in numa load balancing and track
the other statistics like the utilization and the number of running
tasks to get a better view of the current state of a node.

Signed-off-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
Signed-off-by: Mel Gorman
---
 kernel/sched/fair.c | 102 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 70 insertions(+), 32 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4395951b1530..975d7c1554de 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1473,38 +1473,35 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	       group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4;
 }
 
-static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
-
-static unsigned long cpu_runnable_load(struct rq *rq)
-{
-	return cfs_rq_runnable_load_avg(&rq->cfs);
-}
+/*
+ * 'numa_type' describes the node at the moment of load balancing.
+ */
+enum numa_type {
+	/* The node has spare capacity that can be used to run more tasks. */
+	node_has_spare = 0,
+	/*
+	 * The node is fully used and the tasks don't compete for more CPU
+	 * cycles. Nevertheless, some tasks might wait before running.
+	 */
+	node_fully_busy,
+	/*
+	 * The node is overloaded and can't provide expected CPU cycles to all
+	 * tasks.
+	 */
+	node_overloaded
+};
 
 /* Cached statistics for all CPUs within a node */
 struct numa_stats {
 	unsigned long load;
-
+	unsigned long util;
 	/* Total compute capacity of CPUs on a node */
 	unsigned long compute_capacity;
+	unsigned int nr_running;
+	unsigned int weight;
+	enum numa_type node_type;
 };
 
-/*
- * XXX borrowed from update_sg_lb_stats
- */
-static void update_numa_stats(struct numa_stats *ns, int nid)
-{
-	int cpu;
-
-	memset(ns, 0, sizeof(*ns));
-	for_each_cpu(cpu, cpumask_of_node(nid)) {
-		struct rq *rq = cpu_rq(cpu);
-
-		ns->load += cpu_runnable_load(rq);
-		ns->compute_capacity += capacity_of(cpu);
-	}
-
-}
-
 struct task_numa_env {
 	struct task_struct *p;
 
@@ -1521,6 +1518,47 @@ struct task_numa_env {
 	int best_cpu;
 };
 
+static unsigned long cpu_load(struct rq *rq);
+static unsigned long cpu_util(int cpu);
+
+static inline enum
+numa_type numa_classify(unsigned int imbalance_pct,
+			 struct numa_stats *ns)
+{
+	if ((ns->nr_running > ns->weight) &&
+	    ((ns->compute_capacity * 100) < (ns->util * imbalance_pct)))
+		return node_overloaded;
+
+	if ((ns->nr_running < ns->weight) ||
+	    ((ns->compute_capacity * 100) > (ns->util * imbalance_pct)))
+		return node_has_spare;
+
+	return node_fully_busy;
+}
+
+/*
+ * XXX borrowed from update_sg_lb_stats
+ */
+static void update_numa_stats(struct task_numa_env *env,
+			      struct numa_stats *ns, int nid)
+{
+	int cpu;
+
+	memset(ns, 0, sizeof(*ns));
+	for_each_cpu(cpu, cpumask_of_node(nid)) {
+		struct rq *rq = cpu_rq(cpu);
+
+		ns->load += cpu_load(rq);
+		ns->util += cpu_util(cpu);
+		ns->nr_running += rq->cfs.h_nr_running;
+		ns->compute_capacity += capacity_of(cpu);
+	}
+
+	ns->weight = cpumask_weight(cpumask_of_node(nid));
+
+	ns->node_type = numa_classify(env->imbalance_pct, ns);
+}
+
 static void task_numa_assign(struct task_numa_env *env,
 			     struct task_struct *p, long imp)
 {
@@ -1556,6 +1594,11 @@ static bool load_too_imbalanced(long src_load, long dst_load,
 	long orig_src_load, orig_dst_load;
 	long src_capacity, dst_capacity;
 
+
+	/* If dst node has spare capacity, there is no real load imbalance */
+	if (env->dst_stats.node_type == node_has_spare)
+		return false;
+
 	/*
 	 * The load is corrected for the CPU capacity available on each node.
 	 *
@@ -1788,10 +1831,10 @@ static int task_numa_migrate(struct task_struct *p)
 	dist = env.dist = node_distance(env.src_nid, env.dst_nid);
 	taskweight = task_weight(p, env.src_nid, dist);
 	groupweight = group_weight(p, env.src_nid, dist);
-	update_numa_stats(&env.src_stats, env.src_nid);
+	update_numa_stats(&env, &env.src_stats, env.src_nid);
 	taskimp = task_weight(p, env.dst_nid, dist) - taskweight;
 	groupimp = group_weight(p, env.dst_nid, dist) - groupweight;
-	update_numa_stats(&env.dst_stats, env.dst_nid);
+	update_numa_stats(&env, &env.dst_stats, env.dst_nid);
 
 	/* Try to find a spot on the preferred nid. */
 	task_numa_find_cpu(&env, taskimp, groupimp);
@@ -1824,7 +1867,7 @@ static int task_numa_migrate(struct task_struct *p)
 
 			env.dist = dist;
 			env.dst_nid = nid;
-			update_numa_stats(&env.dst_stats, env.dst_nid);
+			update_numa_stats(&env, &env.dst_stats, env.dst_nid);
 			task_numa_find_cpu(&env, taskimp, groupimp);
 		}
 	}
@@ -3687,11 +3730,6 @@ static void remove_entity_load_avg(struct sched_entity *se)
 	raw_spin_unlock_irqrestore(&cfs_rq->removed.lock, flags);
 }
 
-static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
-{
-	return cfs_rq->avg.runnable_load_avg;
-}
-
 static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq)
 {
 	return cfs_rq->avg.load_avg;
-- 
2.16.4
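
For readers who want to see how the new node classification behaves without
walking through fair.c, the snippet below is a small userspace sketch of the
numa_classify() decision added by this patch, fed with made-up numbers. The
struct layout mirrors the new numa_stats fields, but the sample values, the
imbalance_pct of 112 and the main() harness are illustrative assumptions
only; in the kernel the statistics are filled per CPU from cpu_load(),
cpu_util() and rq->cfs.h_nr_running, and imbalance_pct comes from the sched
domain.

#include <stdio.h>

enum numa_type { node_has_spare = 0, node_fully_busy, node_overloaded };

struct numa_stats {
	unsigned long load;
	unsigned long util;
	unsigned long compute_capacity;
	unsigned int nr_running;
	unsigned int weight;	/* number of CPUs on the node */
};

/* Mirrors the decision structure of the kernel's numa_classify(). */
static enum numa_type numa_classify(unsigned int imbalance_pct,
				    const struct numa_stats *ns)
{
	/* More runnable tasks than CPUs and utilization beyond the
	 * imbalance-adjusted capacity: the node is overloaded. */
	if ((ns->nr_running > ns->weight) &&
	    ((ns->compute_capacity * 100) < (ns->util * imbalance_pct)))
		return node_overloaded;

	/* Fewer tasks than CPUs, or utilization comfortably below the
	 * threshold: the node still has spare capacity. */
	if ((ns->nr_running < ns->weight) ||
	    ((ns->compute_capacity * 100) > (ns->util * imbalance_pct)))
		return node_has_spare;

	return node_fully_busy;
}

int main(void)
{
	/* Hypothetical 4-CPU node: capacity 1024 per CPU, PELT-style
	 * utilization of 4000, six runnable tasks, imbalance_pct 112. */
	struct numa_stats ns = {
		.util = 4000,
		.compute_capacity = 4 * 1024,
		.nr_running = 6,
		.weight = 4,
	};
	static const char * const name[] = {
		"has spare capacity", "fully busy", "overloaded"
	};

	printf("node is %s\n", name[numa_classify(112, &ns)]);
	return 0;
}

With these example numbers the node reports as overloaded: it has more
runnable tasks than CPUs and 4000 * 112 exceeds 4096 * 100, so the new
early return in load_too_imbalanced() would not trigger for such a
destination node.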