From: Ricardo Neri
To: "Peter Zijlstra (Intel)", Juri Lelli, Vincent Guittot
Cc: Ricardo Neri, "Ravi V. Shankar", Ben Segall, Daniel Bristot de Oliveira,
    Dietmar Eggemann, Len Brown, Mel Gorman, "Rafael J. Wysocki",
    Srinivas Pandruvada, Steven Rostedt, Tim Chen, Valentin Schneider,
    x86@kernel.org, linux-kernel@vger.kernel.org, Ricardo Neri,
    "Tim C. Chen"
Subject: [RFC PATCH 08/23] sched/fair: Compute task-class performance scores for load balancing
Date: Fri, 9 Sep 2022 16:11:50 -0700
Message-Id: <20220909231205.14009-9-ricardo.neri-calderon@linux.intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220909231205.14009-1-ricardo.neri-calderon@linux.intel.com>
References: <20220909231205.14009-1-ricardo.neri-calderon@linux.intel.com>

Compute both the current and the prospective task-class performance of a
scheduling group.
As task-class statistics are used during asym_packing load balancing, the
scheduling group will become idle.

For a scheduling group with only one CPU, the prospective performance is
the performance of its current task if placed on the destination CPU.

In a scheduling group composed of SMT siblings, the current tasks of all
CPUs share the resources of the core. Divide the task-class performance of
the scheduling group by the number of busy CPUs.

After load balancing, the throughput of the siblings that remain busy
increases. Plus, the destination CPU now contributes to the overall
throughput.

Cc: Ben Segall
Cc: Daniel Bristot de Oliveira
Cc: Dietmar Eggemann
Cc: Len Brown
Cc: Mel Gorman
Cc: Rafael J. Wysocki
Cc: Srinivas Pandruvada
Cc: Steven Rostedt
Cc: Tim C. Chen
Cc: Valentin Schneider
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ricardo Neri
---
 kernel/sched/fair.c | 53 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 58a435a04c1c..97731f81b570 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8405,6 +8405,8 @@ struct sg_lb_stats {
 	enum group_type group_type;
 	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
 	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
+	long task_class_score_after; /* Prospective task-class score after load balancing */
+	long task_class_score_before; /* Task-class score before load balancing */
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -8732,6 +8734,49 @@ static void update_rq_task_classes_stats(struct sg_lb_task_class_stats *class_sg
 	class_sgs->min_score = score;
 	class_sgs->p_min_score = rq->curr;
 }
+
+static void compute_ilb_sg_task_class_scores(struct sg_lb_task_class_stats *class_sgs,
+					     struct sg_lb_stats *sgs,
+					     int dst_cpu)
+{
+	int group_score, group_score_without, score_on_dst_cpu;
+	int busy_cpus = sgs->group_weight - sgs->idle_cpus;
+
+	if (!sched_task_classes_enabled())
+		return;
+
+	/* No busy CPUs in the group. No tasks to move. */
+	if (!busy_cpus)
+		return;
+
+	score_on_dst_cpu = arch_get_task_class_score(class_sgs->p_min_score->class,
+						     dst_cpu);
+
+	/*
+	 * The simplest case. The single busy CPU in the current group will
+	 * become idle after pulling its current task. The destination CPU is
+	 * idle.
+	 */
+	if (busy_cpus == 1) {
+		sgs->task_class_score_before = class_sgs->sum_score;
+		sgs->task_class_score_after = score_on_dst_cpu;
+		return;
+	}
+
+	/*
+	 * Now compute the group score with and without the task with the
+	 * lowest score. We assume that the tasks that remain in the group share
+	 * the CPU resources equally.
+	 */
+	group_score = class_sgs->sum_score / busy_cpus;
+
+	group_score_without = (class_sgs->sum_score - class_sgs->min_score) /
+			      (busy_cpus - 1);
+
+	sgs->task_class_score_after = group_score_without + score_on_dst_cpu;
+	sgs->task_class_score_before = group_score;
+}
+
 #else /* CONFIG_SCHED_TASK_CLASSES */
 static void update_rq_task_classes_stats(struct sg_lb_task_class_stats *class_sgs,
 					 struct rq *rq)
@@ -8741,6 +8786,13 @@ static void init_rq_task_classes_stats(struct sg_lb_task_class_stats *class_sgs)
 {
 }
+
+static void compute_ilb_sg_task_class_scores(struct sg_lb_task_class_stats *class_sgs,
+					     struct sg_lb_stats *sgs,
+					     int dst_cpu)
+{
+}
+
 #endif /* CONFIG_SCHED_TASK_CLASSES */
 
 /**
@@ -8920,6 +8972,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
 	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
 	    sched_asym(env, sds, sgs, group)) {
+		compute_ilb_sg_task_class_scores(&class_stats, sgs, env->dst_cpu);
 		sgs->group_asym_packing = 1;
 	}
-- 
2.25.1
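
For reference, the score arithmetic described above can be checked with a
small user-space sketch. The numbers below are made up (a core with two
busy SMT siblings and an idle destination CPU); the program stands alone
and does not use any of the kernel interfaces added by this series:

  /* Illustration only: made-up scores, not kernel code. */
  #include <stdio.h>

  int main(void)
  {
          int sum_score = 10 + 6;   /* scores of the two current tasks  */
          int min_score = 6;        /* lowest score; task to be pulled  */
          int score_on_dst_cpu = 9; /* that task's score on the dst CPU */
          int busy_cpus = 2;

          /* Siblings share the core: divide by the number of busy CPUs. */
          int score_before = sum_score / busy_cpus;

          /*
           * After the pull, the remaining sibling no longer shares the
           * core, and the destination CPU contributes the moved task's
           * score there.
           */
          int score_after = (sum_score - min_score) / (busy_cpus - 1) +
                            score_on_dst_cpu;

          printf("task-class score before: %d\n", score_before); /* 8  */
          printf("task-class score after:  %d\n", score_after);  /* 19 */

          return 0;
  }

With these values the group scores 8 before the balance and 19 after it,
which is the kind of before/after comparison the scores added here are
meant to feed into during asym_packing load balancing.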