From: Tony Luck <tony.luck@intel.com>
To: Fenghua Yu, Reinette Chatre, Peter Newman, Jonathan Corbet, Shuah Khan, x86@kernel.org
Cc: Shaopeng Tan, James Morse, Jamie Iles, Babu Moger, Randy Dunlap, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, patches@lists.linux.dev, Tony Luck
Subject: [PATCH v3 1/8] x86/resctrl: Refactor in preparation for node-scoped resources
Date: Thu, 13 Jul 2023 09:32:00 -0700
Message-Id: <20230713163207.219710-2-tony.luck@intel.com>
In-Reply-To: <20230713163207.219710-1-tony.luck@intel.com>
References: <20230713163207.219710-1-tony.luck@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Sub-NUMA Cluster (SNC) systems provide monitoring resources at the
NUMA node scope instead of the L3 cache scope. Rename the cache_level
field in struct rdt_resource to the more generic "scope", add symbolic
names for the scope values, and add a helper function that maps a CPU
and scope to a domain id.

No functional change.
Signed-off-by: Tony Luck
Reviewed-by: Peter Newman
---
 include/linux/resctrl.h                   |  4 ++--
 arch/x86/kernel/cpu/resctrl/internal.h    |  5 +++++
 arch/x86/kernel/cpu/resctrl/core.c        | 17 +++++++++++------
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c |  2 +-
 arch/x86/kernel/cpu/resctrl/rdtgroup.c    |  2 +-
 5 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 8334eeacfec5..25051daa6655 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -150,7 +150,7 @@ struct resctrl_schema;
  * @alloc_capable:	Is allocation available on this machine
  * @mon_capable:	Is monitor feature available on this machine
  * @num_rmid:		Number of RMIDs available
- * @cache_level:	Which cache level defines scope of this resource
+ * @scope:		Scope of this resource (cache level or NUMA node)
  * @cache:		Cache allocation related data
  * @membw:		If the component has bandwidth controls, their properties.
  * @domains:		All domains for this resource
@@ -168,7 +168,7 @@ struct rdt_resource {
 	bool			alloc_capable;
 	bool			mon_capable;
 	int			num_rmid;
-	int			cache_level;
+	int			scope;
 	struct resctrl_cache	cache;
 	struct resctrl_membw	membw;
 	struct list_head	domains;
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 85ceaf9a31ac..8275b8a74f7e 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -440,6 +440,11 @@ enum resctrl_res_level {
 	RDT_NUM_RESOURCES,
 };
 
+enum resctrl_scope {
+	SCOPE_L2_CACHE = 2,
+	SCOPE_L3_CACHE = 3
+};
+
 static inline struct rdt_resource *resctrl_inc(struct rdt_resource *res)
 {
 	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(res);
diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 030d3b409768..6571514752f3 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -65,7 +65,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 		.r_resctrl = {
 			.rid			= RDT_RESOURCE_L3,
 			.name			= "L3",
-			.cache_level		= 3,
+			.scope			= SCOPE_L3_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_L3),
 			.parse_ctrlval		= parse_cbm,
 			.format_str		= "%d=%0*x",
@@ -79,7 +79,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 		.r_resctrl = {
 			.rid			= RDT_RESOURCE_L2,
 			.name			= "L2",
-			.cache_level		= 2,
+			.scope			= SCOPE_L2_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_L2),
 			.parse_ctrlval		= parse_cbm,
 			.format_str		= "%d=%0*x",
@@ -93,7 +93,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 		.r_resctrl = {
 			.rid			= RDT_RESOURCE_MBA,
 			.name			= "MB",
-			.cache_level		= 3,
+			.scope			= SCOPE_L3_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_MBA),
 			.parse_ctrlval		= parse_bw,
 			.format_str		= "%d=%*u",
@@ -105,7 +105,7 @@ struct rdt_hw_resource rdt_resources_all[] = {
 		.r_resctrl = {
 			.rid			= RDT_RESOURCE_SMBA,
 			.name			= "SMBA",
-			.cache_level		= 3,
+			.scope			= SCOPE_L3_CACHE,
 			.domains		= domain_init(RDT_RESOURCE_SMBA),
 			.parse_ctrlval		= parse_bw,
 			.format_str		= "%d=%*u",
@@ -487,6 +487,11 @@ static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_domain *hw_dom)
 	return 0;
 }
 
+static int get_domain_id(int cpu, enum resctrl_scope scope)
+{
+	return get_cpu_cacheinfo_id(cpu, scope);
+}
+
 /*
  * domain_add_cpu - Add a cpu to a resource's domain list.
 *
@@ -502,7 +507,7 @@ static int arch_domain_mbm_alloc(u32 num_rmid, struct rdt_hw_domain *hw_dom)
  */
 static void domain_add_cpu(int cpu, struct rdt_resource *r)
 {
-	int id = get_cpu_cacheinfo_id(cpu, r->cache_level);
+	int id = get_domain_id(cpu, r->scope);
 	struct list_head *add_pos = NULL;
 	struct rdt_hw_domain *hw_dom;
 	struct rdt_domain *d;
@@ -552,7 +557,7 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
 
 static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 {
-	int id = get_cpu_cacheinfo_id(cpu, r->cache_level);
+	int id = get_domain_id(cpu, r->scope);
 	struct rdt_hw_domain *hw_dom;
 	struct rdt_domain *d;
 
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 458cb7419502..42f124ffb968 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -297,7 +297,7 @@ static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 	plr->size = rdtgroup_cbm_to_size(plr->s->res, plr->d, plr->cbm);
 
 	for (i = 0; i < ci->num_leaves; i++) {
-		if (ci->info_list[i].level == plr->s->res->cache_level) {
+		if (ci->info_list[i].level == plr->s->res->scope) {
 			plr->line_size = ci->info_list[i].coherency_line_size;
 			return 0;
 		}
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 725344048f85..418658f0a9ad 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1348,7 +1348,7 @@ unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r,
 	num_b = bitmap_weight(&cbm, r->cache.cbm_len);
 	ci = get_cpu_cacheinfo(cpumask_any(&d->cpu_mask));
 	for (i = 0; i < ci->num_leaves; i++) {
-		if (ci->info_list[i].level == r->cache_level) {
+		if (ci->info_list[i].level == r->scope) {
 			size = ci->info_list[i].size / r->cache.cbm_len * num_b;
 			break;
 		}
-- 
2.40.1