From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Peter Newman, Jonathan Corbet, Shuah Khan,
	x86@kernel.org
Cc: Shaopeng Tan, James Morse, Jamie Iles, Babu Moger, Randy Dunlap,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	patches@lists.linux.dev, Tony Luck, Shaopeng Tan
Subject: [PATCH v13 7/8] x86/resctrl: Sub NUMA Cluster detection and enable
Date: Mon, 4 Dec 2023 10:53:56 -0800
Message-ID: <20231204185357.120501-8-tony.luck@intel.com>
In-Reply-To: <20231204185357.120501-1-tony.luck@intel.com>
References: <20231130003418.89964-1-tony.luck@intel.com>
            <20231204185357.120501-1-tony.luck@intel.com>
There isn't a simple hardware bit that indicates whether a CPU is running
in Sub NUMA Cluster (SNC) mode. Infer the state from the ratio of NUMA
nodes to L3 cache instances (e.g. a two-socket system with SNC2 enabled
reports four NUMA nodes but only two L3 cache instances).

When SNC mode is detected, reconfigure the RMID counters by updating the
MSR_RMID_SNC_CONFIG MSR on each socket as CPUs are seen. Clearing bit zero
of the MSR divides the RMIDs and renumbers the ones on the second SNC node
to start from zero.

Signed-off-by: Tony Luck
Tested-by: Shaopeng Tan
Reviewed-by: Peter Newman
Reviewed-by: Reinette Chatre
Reviewed-by: Shaopeng Tan
---
 arch/x86/include/asm/msr-index.h   |   1 +
 arch/x86/kernel/cpu/resctrl/core.c | 118 +++++++++++++++++++++++++++++
 2 files changed, 119 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1d51e1850ed0..94d29d81e6db 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1111,6 +1111,7 @@
 #define MSR_IA32_QM_CTR			0xc8e
 #define MSR_IA32_PQR_ASSOC		0xc8f
 #define MSR_IA32_L3_CBM_BASE		0xc90
+#define MSR_RMID_SNC_CONFIG		0xca0
 #define MSR_IA32_L2_CBM_BASE		0xd10
 #define MSR_IA32_MBA_THRTL_BASE		0xd50

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index cf5aba8a74bf..1de1c4499b7d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -16,11 +16,14 @@
 #define pr_fmt(fmt)	"resctrl: " fmt
 
+#include <linux/cpu.h>
 #include <linux/cacheinfo.h>
 #include <linux/cpuhotplug.h>
 #include <linux/cpumask.h>
 #include <linux/slab.h>
+#include <linux/mod_devicetable.h>
 
+#include <asm/cpu_device_id.h>
 #include <asm/intel-family.h>
 #include <asm/resctrl.h>
 
 #include "internal.h"
@@ -740,11 +743,42 @@ static void clear_closid_rmid(int cpu)
 	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
 }
 
+/*
+ * The power-on reset value of MSR_RMID_SNC_CONFIG is 0x1, which indicates
+ * that RMIDs are configured in legacy mode. This mode is incompatible
+ * with Linux resctrl semantics as RMIDs are partitioned between SNC
+ * nodes, which requires a user to know which RMID is allocated to a task.
+ * Clearing bit 0 reconfigures the RMID counters for use in Sub NUMA
+ * Cluster mode, which is better for Linux: the RMID space is divided
+ * between all SNC nodes, with the RMIDs renumbered to start from zero in
+ * each node when counting operations from tasks. Code that reads the
+ * counters must adjust RMID counter numbers based on SNC node. See
+ * __rmid_read() for code that does this.
+ */
+static void snc_remap_rmids(int cpu)
+{
+	u64 val;
+
+	/* Only need to enable once per package. */
+	if (cpumask_first(topology_core_cpumask(cpu)) != cpu)
+		return;
+
+	rdmsrl(MSR_RMID_SNC_CONFIG, val);
+	val &= ~BIT_ULL(0);
+	wrmsrl(MSR_RMID_SNC_CONFIG, val);
+}
+
 static int resctrl_online_cpu(unsigned int cpu)
 {
 	struct rdt_resource *r;
 
 	mutex_lock(&rdtgroup_mutex);
+
+	if (snc_nodes_per_l3_cache > 1)
+		snc_remap_rmids(cpu);
+
 	for_each_capable_rdt_resource(r)
 		domain_add_cpu(cpu, r);
 	/* The cpu is set in default rdtgroup after online. */
@@ -999,11 +1033,95 @@ static __init bool get_rdt_resources(void)
 	return (rdt_mon_capable || rdt_alloc_capable);
 }
 
+/* CPU models that support MSR_RMID_SNC_CONFIG */
+static const struct x86_cpu_id snc_cpu_ids[] __initconst = {
+	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, 0),
+	X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_X, 0),
+	{}
+};
+
+/*
+ * There isn't a simple hardware bit that indicates whether a CPU is
+ * running in Sub NUMA Cluster (SNC) mode. Infer the state from the
+ * ratio of NUMA nodes to L3 cache instances.
+ *
+ * It is not possible to accurately determine SNC state if the system is
+ * booted with a maxcpus=N parameter, since that distorts the ratio of
+ * SNC nodes to L3 caches. It is OK if the system is booted with
+ * hyperthreading disabled, since that does not affect the ratio.
+ */
+static __init int snc_get_config(void)
+{
+	unsigned long *node_caches;
+	int mem_only_nodes = 0;
+	int cpu, node, ret;
+	int num_l3_caches;
+	int cache_id;
+
+	if (!x86_match_cpu(snc_cpu_ids))
+		return 1;
+
+	node_caches = bitmap_zalloc(num_possible_cpus(), GFP_KERNEL);
+	if (!node_caches)
+		return 1;
+
+	cpus_read_lock();
+
+	if (num_online_cpus() != num_present_cpus())
+		pr_warn("Some CPUs offline, SNC detection may be incorrect\n");
+
+	for_each_node(node) {
+		cpu = cpumask_first(cpumask_of_node(node));
+		if (cpu < nr_cpu_ids) {
+			cache_id = get_cpu_cacheinfo_id(cpu, 3);
+			if (cache_id != -1)
+				set_bit(cache_id, node_caches);
+		} else {
+			mem_only_nodes++;
+		}
+	}
+	cpus_read_unlock();
+
+	num_l3_caches = bitmap_weight(node_caches, num_possible_cpus());
+	kfree(node_caches);
+
+	if (!num_l3_caches)
+		goto insane;
+
+	/* sanity check #1: Number of CPU nodes must be multiple of num_l3_caches */
+	if ((nr_node_ids - mem_only_nodes) % num_l3_caches)
+		goto insane;
+
+	ret = (nr_node_ids - mem_only_nodes) / num_l3_caches;
+
+	/* sanity check #2: Only valid results are 1, 2, 3, 4 */
+	switch (ret) {
+	case 1:
+		break;
+	case 2:
+	case 3:
+	case 4:
+		rdt_resources_all[RDT_RESOURCE_L3].r_resctrl.mon_scope = RESCTRL_NODE;
+		break;
+	default:
+		goto insane;
+	}
+
+	return ret;
+insane:
+	pr_warn("SNC insanity: CPU nodes = %d num_l3_caches = %d\n",
+		(nr_node_ids - mem_only_nodes), num_l3_caches);
+	return 1;
+}
+
 static __init void rdt_init_res_defs_intel(void)
 {
 	struct rdt_hw_resource *hw_res;
 	struct rdt_resource *r;
 
+	snc_nodes_per_l3_cache = snc_get_config();
+
 	for_each_rdt_resource(r) {
 		hw_res = resctrl_to_arch_res(r);
-- 
2.41.0
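
[Editor's note: for anyone who wants to sanity-check the node-to-L3 ratio
heuristic on a live system, below is a minimal userspace sketch. It is not
part of the patch and not the kernel implementation; it assumes contiguously
numbered NUMA nodes with no memory-only nodes, at most 64 distinct L3 cache
ids, and the standard sysfs attributes
/sys/devices/system/node/nodeN/cpulist and
/sys/devices/system/cpu/cpuN/cache/index3/id.]

#include <stdio.h>

/* Read the first integer from a sysfs attribute; -1 on any failure. */
static int read_first_int(const char *path)
{
	FILE *f = fopen(path, "r");
	int v = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%d", &v) != 1)
		v = -1;
	fclose(f);
	return v;
}

int main(void)
{
	unsigned long l3_seen = 0;	/* bitmap of L3 cache ids (< 64 assumed) */
	int nodes = 0, num_l3 = 0;
	char path[128];
	int node, cpu, id;

	for (node = 0; ; node++) {
		/* cpulist begins with the node's first CPU, e.g. "0-35" */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/cpulist", node);
		cpu = read_first_int(path);
		if (cpu < 0)
			break;	/* no more nodes (memory-only nodes not handled) */
		nodes++;

		/* count each distinct L3 cache id once */
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cache/index3/id", cpu);
		id = read_first_int(path);
		if (id >= 0 && id < 64 && !(l3_seen & (1UL << id))) {
			l3_seen |= 1UL << id;
			num_l3++;
		}
	}

	if (num_l3 && nodes % num_l3 == 0)
		printf("SNC nodes per L3 cache: %d\n", nodes / num_l3);
	else
		printf("inconsistent: %d CPU nodes, %d L3 caches\n",
		       nodes, num_l3);
	return 0;
}

[On a two-socket system with SNC2 enabled this should print 2; with SNC
disabled it should print 1, matching what snc_get_config() would return.]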