From: Mel Gorman <mgorman@techsingularity.net>
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li, Barry Song,
    Mike Galbraith, Srikar Dronamraju, Gautham Shenoy, LKML, Mel Gorman
Subject: [PATCH 1/2] sched/fair: Improve consistency of allowed NUMA balance calculations
Date: Thu, 3 Feb 2022 14:46:51 +0000
Message-Id: <20220203144652.12540-2-mgorman@techsingularity.net>
In-Reply-To: <20220203144652.12540-1-mgorman@techsingularity.net>
References: <20220203144652.12540-1-mgorman@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org

There are inconsistencies when determining if a NUMA imbalance is
allowed that should be corrected:

o allow_numa_imbalance changes types and is not always examining the
  destination group, so both the type and the naming should be corrected.

o find_idlest_group uses the sched_domain's weight instead of the group
  weight, which is different to find_busiest_group.

o find_busiest_group uses the source group instead of the destination,
  which is different to task_numa_find_cpu.

o Both find_idlest_group and find_busiest_group should account for the
  number of running tasks if a move were allowed, to be consistent with
  task_numa_find_cpu.

Fixes: 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA nodes")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 kernel/sched/fair.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 095b0aa378df..4592ccf82c34 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9003,9 +9003,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
  * This is an approximation as the number of running tasks may not be
  * related to the number of busy CPUs due to sched_setaffinity.
  */
-static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
+static inline bool
+allow_numa_imbalance(unsigned int running, unsigned int weight)
 {
-	return (dst_running < (dst_weight >> 2));
+	return (running < (weight >> 2));
 }
 
 /*
@@ -9139,12 +9140,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
 			return idlest;
 #endif
 		/*
-		 * Otherwise, keep the task on this node to stay close
-		 * its wakeup source and improve locality. If there is
-		 * a real need of migration, periodic load balance will
-		 * take care of it.
+		 * Otherwise, keep the task close to the wakeup source
+		 * and improve locality if the number of running tasks
+		 * would remain below threshold where an imbalance is
+		 * allowed. If there is a real need of migration,
+		 * periodic load balance will take care of it.
 		 */
-		if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
+		if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
 			return NULL;
 	}
 
@@ -9350,7 +9352,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	/* Consider allowing a small imbalance between NUMA groups */
 	if (env->sd->flags & SD_NUMA) {
 		env->imbalance = adjust_numa_imbalance(env->imbalance,
-			busiest->sum_nr_running, busiest->group_weight);
+			local->sum_nr_running + 1, local->group_weight);
 	}
 
 	return;
-- 
2.31.1
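
For readers skimming the thread, here is a minimal standalone sketch of the
threshold the patch converges on. It is an illustration only, not the kernel
implementation: the plain integers and the example values in main() are
hypothetical stand-ins for the scheduler's sched_group statistics, while the
check itself mirrors the allow_numa_imbalance() helper as changed above.

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the check after this patch: tolerate a NUMA imbalance only
 * while the destination group, counting the task being placed, runs
 * fewer tasks than a quarter of that group's CPUs.
 */
static inline bool allow_numa_imbalance(unsigned int running, unsigned int weight)
{
	return running < (weight >> 2);	/* 25% of the group's CPUs */
}

int main(void)
{
	unsigned int dst_group_weight = 16;	/* hypothetical: 16 CPUs in the destination group */
	unsigned int dst_running = 2;		/* hypothetical: tasks already running there */

	/* The "+ 1" accounts for the task about to be placed, as in the patch. */
	if (allow_numa_imbalance(dst_running + 1, dst_group_weight))
		printf("keep the task local; imbalance allowed\n");
	else
		printf("spread the task; threshold reached\n");

	return 0;
}

With 16 CPUs the threshold is 4, so a group already running 2 tasks still
accepts the incoming task (3 < 4), while a group already running 3 would not
(4 < 4 is false). Counting the incoming task against the destination group's
weight is the consistency the patch enforces across find_idlest_group,
find_busiest_group and task_numa_find_cpu.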