From: Tony Luck
To: Fenghua Yu, Reinette Chatre, Peter Newman, Jonathan Corbet, Shuah Khan,
	x86@kernel.org
Cc: Shaopeng Tan, James Morse, Jamie Iles, Babu Moger, Randy Dunlap,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	patches@lists.linux.dev, Tony Luck
Subject: [PATCH v7 6/8] x86/resctrl: Introduce snc_nodes_per_l3_cache
Date: Tue, 3 Oct 2023 09:07:57 -0700
Message-ID: <20231003160800.8601-7-tony.luck@intel.com>
In-Reply-To: <20231003160800.8601-1-tony.luck@intel.com>
References: <20230928191350.205703-1-tony.luck@intel.com>
	<20231003160800.8601-1-tony.luck@intel.com>
Intel Sub-NUMA Cluster (SNC) is a feature that subdivides the CPU cores
and memory controllers on a socket into two or more groups. These are
presented to the operating system as NUMA nodes.

This may enable some workloads to have slightly lower latency to memory
as the memory controller(s) in an SNC node are electrically closer to the
CPU cores on that SNC node. This cost may be offset by lower bandwidth
since the memory accesses for each core can only be interleaved between
the memory controllers on the same SNC node.

Resctrl monitoring on Intel systems depends upon attaching RMIDs to tasks
to track L3 cache occupancy and memory bandwidth. There is an MSR that
controls how the RMIDs are shared between SNC nodes.

The default mode divides them numerically. E.g. when there are two SNC
nodes on a socket the lower numbered half of the RMIDs are given to the
first node, the remainder to the second node. This would be difficult to
use with the Linux resctrl interface as specific RMID values assigned to
resctrl groups are not visible to users.

The other mode divides the RMIDs and renumbers the ones on the second
SNC node to start from zero.

Even with this renumbering, SNC mode requires several changes in resctrl
behavior for correct operation.

Add a global integer "snc_nodes_per_l3_cache" that shows how many SNC
nodes share each L3 cache. When this is "1", SNC mode is either not
implemented or not enabled.

A later patch will detect SNC mode and set snc_nodes_per_l3_cache to the
appropriate value. For now it remains at the default "1" to indicate SNC
mode is not active.

Code that needs to take action when SNC is enabled is:

1) The number of logical RMIDs per L3 cache available for use is the
   number of physical RMIDs divided by the number of SNC nodes.
2) Likewise the "mon_scale" value must be adjusted for the number of SNC
   nodes.
3) The RMID renumbering applies to the value loaded from the
   IA32_PQR_ASSOC MSR to count accesses by a task. When reading an RMID
   counter, code must adjust from the logical RMID in use to the physical
   RMID value for the SNC node that it wishes to read, and load that
   adjusted value into the IA32_QM_EVTSEL MSR.
4) The L3 cache is divided between the SNC nodes, so the value reported
   in the resctrl "size" file is adjusted.
5) The "-o mba_MBps" mount option must be disabled in SNC mode because
   the monitoring is done per SNC node, while the bandwidth allocation is
   still done at the L3 cache scope. Trying to use this feedback loop
   might result in contradictory changes to the throttling level coming
   from each of the SNC node bandwidth measurements.

Signed-off-by: Tony Luck
Reviewed-by: Peter Newman
---
Changes since last version:

In commit comment s/redumbering/renumbering/

Move check that SNC is not enabled into supports_mba_mbps().
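Illustrative aside (editorial, not part of the patch): a minimal user-space
sketch of the logical-to-physical RMID arithmetic described in items 1 and 3,
assuming a hypothetical system with two SNC nodes per L3 cache and 512
physical RMIDs. The same offset computation appears in __rmid_read() in the
diff below.

/*
 * Illustrative only -- not kernel code. With 2 SNC nodes sharing an L3
 * cache that has 512 physical RMIDs, each node sees 256 logical RMIDs,
 * and a counter read on the second node must add an offset of num_rmid
 * to reach the physical RMID.
 */
#include <stdio.h>

int main(void)
{
	int snc_nodes_per_l3_cache = 2;	/* hypothetical: SNC2 enabled */
	int physical_rmids = 512;	/* hypothetical: x86_cache_max_rmid + 1 */
	int num_rmid = physical_rmids / snc_nodes_per_l3_cache;

	int node = 1;			/* second SNC node on the socket */
	int logical_rmid = 5;		/* value a task carries in IA32_PQR_ASSOC */
	int rmid_offset = (node % snc_nodes_per_l3_cache) * num_rmid;

	/* Value that would be loaded into IA32_QM_EVTSEL: 5 + 256 = 261 */
	printf("logical RMID %d on node %d -> physical RMID %d\n",
	       logical_rmid, node, logical_rmid + rmid_offset);
	return 0;
}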
 arch/x86/kernel/cpu/resctrl/internal.h |  2 ++
 arch/x86/kernel/cpu/resctrl/core.c     |  6 ++++++
 arch/x86/kernel/cpu/resctrl/monitor.c  | 16 +++++++++++++---
 arch/x86/kernel/cpu/resctrl/rdtgroup.c |  5 +++--
 4 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 3aed8e7b8487..3fddda401b83 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -446,6 +446,8 @@ DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
 
 extern struct dentry *debugfs_resctrl;
 
+extern int snc_nodes_per_l3_cache;
+
 enum resctrl_res_level {
 	RDT_RESOURCE_L3,
 	RDT_RESOURCE_L2,
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 6b937da36e4c..cd189b7ca6ea 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -48,6 +48,12 @@ int max_name_width, max_data_width;
  */
 bool rdt_alloc_capable;
 
+/*
+ * Number of SNC nodes that share each L3 cache. Default is 1 for
+ * systems that do not support SNC, or have SNC disabled.
+ */
+int snc_nodes_per_l3_cache = 1;
+
 static void
 mba_wrmsr_intel(struct rdt_ctrl_domain *d, struct msr_param *m,
 		struct rdt_resource *r);
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 97d2ed829f5d..e6e566921a60 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -148,8 +148,18 @@ static inline struct rmid_entry *__rmid_entry(u32 rmid)
 
 static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
 {
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	int cpu = smp_processor_id();
+	int rmid_offset = 0;
 	u64 msr_val;
 
+	/*
+	 * When SNC mode is on, need to compute the offset to read the
+	 * physical RMID counter for the node to which this CPU belongs.
+	 */
+	if (snc_nodes_per_l3_cache > 1)
+		rmid_offset = (cpu_to_node(cpu) % snc_nodes_per_l3_cache) * r->num_rmid;
+
 	/*
 	 * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
 	 * with a valid event code for supported resource type and the bits
@@ -158,7 +168,7 @@ static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
 	 * IA32_QM_CTR.Error (bit 63) and IA32_QM_CTR.Unavailable (bit 62)
 	 * are error bits.
 	 */
-	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
+	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid + rmid_offset);
 	rdmsrl(MSR_IA32_QM_CTR, msr_val);
 
 	if (msr_val & RMID_VAL_ERROR)
@@ -783,8 +793,8 @@ int __init rdt_get_mon_l3_config(struct rdt_resource *r)
 	int ret;
 
 	resctrl_rmid_realloc_limit = boot_cpu_data.x86_cache_size * 1024;
-	hw_res->mon_scale = boot_cpu_data.x86_cache_occ_scale;
-	r->num_rmid = boot_cpu_data.x86_cache_max_rmid + 1;
+	hw_res->mon_scale = boot_cpu_data.x86_cache_occ_scale / snc_nodes_per_l3_cache;
+	r->num_rmid = (boot_cpu_data.x86_cache_max_rmid + 1) / snc_nodes_per_l3_cache;
 	hw_res->mbm_width = MBM_CNTR_WIDTH_BASE;
 
 	if (mbm_offset > 0 && mbm_offset <= MBM_CNTR_WIDTH_OFFSET_MAX)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index afa7a8dca48d..def203c40d70 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1357,7 +1357,7 @@ unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r,
 		}
 	}
 
-	return size;
+	return size / snc_nodes_per_l3_cache;
 }
 
 /**
@@ -2207,7 +2207,8 @@ static bool supports_mba_mbps(void)
 	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_MBA].r_resctrl;
 
 	return (is_mbm_local_enabled() &&
-		r->alloc_capable && is_mba_linear());
+		r->alloc_capable && is_mba_linear() &&
+		snc_nodes_per_l3_cache == 1);
 }
 
 /*
-- 
2.41.0
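Editorial note, not part of the patch: a minimal sketch of the "size"
adjustment from item 4, assuming a hypothetical 32768 KiB L3 cache, a 16-bit
CBM with 8 bits set, and two SNC nodes sharing the cache. It reduces the
per-domain calculation in rdtgroup_cbm_to_size() to plain arithmetic.

/*
 * Illustrative only -- not kernel code. Mirrors the effect of the
 * rdtgroup_cbm_to_size() change: the span of cache represented by a
 * CBM is divided by the number of SNC nodes sharing the L3.
 */
#include <stdio.h>

static unsigned int cbm_to_size(unsigned int l3_size_kib, unsigned int cbm_bits,
				unsigned int cbm_len, int snc_nodes_per_l3_cache)
{
	unsigned int size = l3_size_kib / cbm_len * cbm_bits;

	return size / snc_nodes_per_l3_cache;
}

int main(void)
{
	/* Half the CBM of a 32768 KiB cache under SNC2: 8192 KiB, not 16384 */
	printf("size = %u KiB\n", cbm_to_size(32768, 8, 16, 2));
	return 0;
}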