From: Mel Gorman
To: Peter Zijlstra
Cc: Ingo Molnar, Vincent Guittot, Valentin Schneider, Aubrey Li,
    Barry Song, Mike Galbraith, Srikar Dronamraju, Gautham Shenoy,
    LKML, Mel Gorman
Subject: [PATCH v4 0/2] Adjust NUMA imbalance for multiple LLCs
Date: Fri, 10 Dec 2021 09:33:05 +0000
Message-Id: <20211210093307.31701-1-mgorman@techsingularity.net>

Changelog since V3
o Calculate imb_numa_nr for multiple SD_NUMA domains
o Restore behaviour where communicating pairs remain on the same node

Commit 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between
NUMA nodes") allowed an imbalance between NUMA nodes such that
communicating tasks would not be pulled apart by the load balancer.
This works fine when there is a 1:1 relationship between LLC and node
but can be suboptimal for multiple LLCs if independent tasks
prematurely use CPUs sharing cache.

The series addresses two problems -- inconsistent use of scheduler
domain weights and sub-optimal performance when there are many LLCs
per NUMA node. (A toy sketch of the LLC-aware imbalance idea is
appended at the end of this mail.)

 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 36 ++++++++++++++++---------------
 kernel/sched/topology.c        | 39 ++++++++++++++++++++++++++++++++++
 3 files changed, 59 insertions(+), 17 deletions(-)

-- 
2.31.1

Mel Gorman (2):
  sched/fair: Use weight of SD_NUMA domain in find_busiest_group
  sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans
    multiple LLCs

 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 36 +++++++++++++++++----------------
 kernel/sched/topology.c        | 37 ++++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 17 deletions(-)

-- 
2.31.1
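
For readers unfamiliar with the idea: below is a minimal, userspace-only
sketch, not the kernel patch itself. The helper name allowed_numa_imbalance,
the 25% factor and the floor of two tasks are illustrative assumptions. The
point is that the amount of imbalance tolerated between two NUMA nodes
should be scaled by the size of one LLC rather than the whole node, so a
communicating pair can stay on one node while independent tasks are still
spread across caches.

/*
 * Illustrative sketch only -- not the patch.  allowed_numa_imbalance(),
 * the 25% factor and the floor of two tasks are assumptions chosen for
 * demonstration.
 */
#include <stdio.h>

static int max_int(int a, int b)
{
	return a > b ? a : b;
}

/*
 * Number of tasks allowed to accumulate on a node before the load
 * balancer spreads them to another node.  With one LLC per node a
 * fraction of the node is tolerable; with many LLCs per node the limit
 * is based on a single LLC so that independent tasks do not prematurely
 * share cache.
 */
static int allowed_numa_imbalance(int node_weight, int nr_llcs)
{
	int llc_weight = node_weight / nr_llcs;

	return max_int(2, llc_weight / 4);
}

int main(void)
{
	/* Example: 128-CPU node, one LLC versus eight LLCs. */
	printf("1 LLC : allow %d tasks\n", allowed_numa_imbalance(128, 1));
	printf("8 LLCs: allow %d tasks\n", allowed_numa_imbalance(128, 8));
	return 0;
}

With a single LLC the node tolerates a larger pile-up (32 tasks in this
example); with eight LLCs the tolerated imbalance drops to 4, roughly one
quarter of one cache domain, before tasks are spread to the other node.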