From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li,
	Barry Song, Mike Galbraith, Srikar Dronamraju, Gautham Shenoy,
	K Prateek Nayak, LKML, Mel Gorman
Subject: [PATCH 1/2] sched/fair: Improve consistency of allowed NUMA balance calculations
Date: Tue, 8 Feb 2022 09:43:33 +0000
Message-Id: <20220208094334.16379-2-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220208094334.16379-1-mgorman@techsingularity.net>
References: <20220208094334.16379-1-mgorman@techsingularity.net>

There are inconsistencies when determining if a NUMA imbalance is
allowed that should be corrected.

 o allow_numa_imbalance changes types and is not always examining
   the destination group, so both the types and the naming should
   be corrected.

 o find_idlest_group uses the sched_domain's weight instead of the
   group weight, which is different to find_busiest_group.

 o find_busiest_group uses the source group instead of the
   destination, which is different to task_numa_find_cpu.

 o Both find_idlest_group and find_busiest_group should account for
   the number of running tasks if a move was allowed, to be
   consistent with task_numa_find_cpu.

Fixes: 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA nodes")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 095b0aa378df..4592ccf82c34 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9003,9 +9003,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
  * This is an approximation as the number of running tasks may not be
  * related to the number of busy CPUs due to sched_setaffinity.
  */
-static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
+static inline bool
+allow_numa_imbalance(unsigned int running, unsigned int weight)
 {
-	return (dst_running < (dst_weight >> 2));
+	return (running < (weight >> 2));
 }
 
 /*
@@ -9139,12 +9140,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 			return idlest;
 #endif
 		/*
-		 * Otherwise, keep the task on this node to stay close
-		 * its wakeup source and improve locality. If there is
-		 * a real need of migration, periodic load balance will
-		 * take care of it.
+		 * Otherwise, keep the task close to the wakeup source
+		 * and improve locality if the number of running tasks
+		 * would remain below threshold where an imbalance is
+		 * allowed. If there is a real need of migration,
+		 * periodic load balance will take care of it.
 		 */
-		if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
+		if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
 			return NULL;
 	}
 
@@ -9350,7 +9352,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		/* Consider allowing a small imbalance between NUMA groups */
 		if (env->sd->flags & SD_NUMA) {
 			env->imbalance = adjust_numa_imbalance(env->imbalance,
-				busiest->sum_nr_running, busiest->group_weight);
+				local->sum_nr_running + 1, local->group_weight);
 		}
 
 		return;
-- 
2.31.1
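
[ Editor's note: the sketch below is illustrative only and is not part
  of the patch. It copies the patched allow_numa_imbalance() helper
  into a standalone program to show the effect of passing
  "nr_running + 1" for the destination group; the 16-CPU group weight
  is a hypothetical value chosen for the example. ]

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Same body as the patched helper: tolerate a NUMA imbalance
	 * while the destination group would stay below a quarter of
	 * its CPUs busy (weight >> 2).
	 */
	static inline bool
	allow_numa_imbalance(unsigned int running, unsigned int weight)
	{
		return (running < (weight >> 2));
	}

	int main(void)
	{
		unsigned int group_weight = 16;	/* hypothetical group size */
		unsigned int nr_running;

		for (nr_running = 0; nr_running < 6; nr_running++) {
			/*
			 * The "+ 1" accounts for the task being placed,
			 * matching the new find_idlest_group() call.
			 */
			printf("nr_running=%u: imbalance %s\n", nr_running,
			       allow_numa_imbalance(nr_running + 1, group_weight) ?
			       "allowed" : "not allowed");
		}
		return 0;
	}

With a group weight of 16 the cut-off is 4, so the task is kept local
only while fewer than three tasks are already running; counting the
incoming task is what makes find_idlest_group() and
calculate_imbalance() consistent with task_numa_find_cpu().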