From: Vincent Guittot
Date: Mon, 14 Feb 2022 10:48:35 +0100
Subject: Re: [PATCH 1/2] sched/fair: Improve consistency of allowed NUMA balance calculations
To: Mel Gorman
Cc: Peter Zijlstra, Ingo Molnar, Valentin Schneider, Aubrey Li, Barry Song, Mike Galbraith, Srikar Dronamraju, Gautham Shenoy, K Prateek Nayak, LKML
In-Reply-To: <20220208094334.16379-2-mgorman@techsingularity.net>
References: <20220208094334.16379-1-mgorman@techsingularity.net>
 <20220208094334.16379-2-mgorman@techsingularity.net>
List-ID: linux-kernel@vger.kernel.org

On Tue, 8 Feb 2022 at 10:43, Mel Gorman wrote:
> There are inconsistencies when determining if a NUMA imbalance is allowed
> that should be corrected.
>
> o allow_numa_imbalance changes types and is not always examining
>   the destination group so both the type should be corrected as
>   well as the naming.
> o find_idlest_group uses the sched_domain's weight instead of the
>   group weight which is different to find_busiest_group
> o find_busiest_group uses the source group instead of the destination
>   which is different to task_numa_find_cpu
> o Both find_idlest_group and find_busiest_group should account
>   for the number of running tasks if a move was allowed to be
>   consistent with task_numa_find_cpu
>
> Fixes: 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA nodes")
> Signed-off-by: Mel Gorman

Reviewed-by: Vincent Guittot

> ---
>  kernel/sched/fair.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 095b0aa378df..4592ccf82c34 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9003,9 +9003,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
>   * This is an approximation as the number of running tasks may not be
>   * related to the number of busy CPUs due to sched_setaffinity.
>   */
> -static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
> +static inline bool
> +allow_numa_imbalance(unsigned int running, unsigned int weight)
>  {
> -	return (dst_running < (dst_weight >> 2));
> +	return (running < (weight >> 2));
>  }
>
>  /*
> @@ -9139,12 +9140,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  		return idlest;
>  #endif
>  	/*
> -	 * Otherwise, keep the task on this node to stay close
> -	 * its wakeup source and improve locality. If there is
> -	 * a real need of migration, periodic load balance will
> -	 * take care of it.
> +	 * Otherwise, keep the task close to the wakeup source
> +	 * and improve locality if the number of running tasks
> +	 * would remain below threshold where an imbalance is
> +	 * allowed. If there is a real need of migration,
> +	 * periodic load balance will take care of it.
>  	 */
> -	if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
> +	if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
>  		return NULL;
>  	}
>
> @@ -9350,7 +9352,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  	/* Consider allowing a small imbalance between NUMA groups */
>  	if (env->sd->flags & SD_NUMA) {
>  		env->imbalance = adjust_numa_imbalance(env->imbalance,
> -				busiest->sum_nr_running, busiest->group_weight);
> +				local->sum_nr_running + 1, local->group_weight);
>  	}
>
>  	return;
> --
> 2.31.1
>