Date: Wed, 11 Aug 2021 19:41:32 -0000
From: "tip-bot2 for James Morse"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: x86/cache] x86/resctrl: Walk the resctrl schema list instead of an arch list
Cc: James Morse, Borislav Petkov, Jamie Iles, Reinette Chatre, Babu Moger,
    x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20210728170637.25610-7-james.morse@arm.com>
References: <20210728170637.25610-7-james.morse@arm.com>
MIME-Version: 1.0
Message-ID: <162871089210.395.13588797537065926685.tip-bot2@tip-bot2>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the x86/cache branch of tip:

Commit-ID:     331ebe4c43496cdc7f8d9a32d4ef59300b748435
Gitweb:        https://git.kernel.org/tip/331ebe4c43496cdc7f8d9a32d4ef59300b748435
Author:        James Morse
AuthorDate:    Wed, 28 Jul 2021 17:06:19
Committer:     Borislav Petkov
CommitterDate: Wed, 11 Aug 2021 13:20:43 +02:00

x86/resctrl: Walk the resctrl schema list instead of an arch list

When parsing a schema configuration value from user-space, resctrl walks the
architecture's rdt_resources_all[] array to find a matching struct
rdt_resource. Once the CDP resources are merged there will be one resource
in use by two schemata. Anything walking rdt_resources_all[] on behalf of
a user-space request should walk the list of struct resctrl_schema
instead.

Change the users of for_each_alloc_enabled_rdt_resource() to walk the
schema instead. Schemata were only created for alloc_enabled resources,
so these two lists are currently equivalent.

schemata_list_create() and rdt_kill_sb() are ignored. The first creates
the schema list, and will eventually loop over the resource indexes using
an arch helper to retrieve the resource. rdt_kill_sb() will eventually
make use of an arch 'reset everything' helper.

After the filesystem code is moved, rdtgroup_pseudo_locked_in_hierarchy()
remains part of the x86-specific hooks to support pseudo lock. This code
walks each domain, and still does this after the separate resources are
merged.

Signed-off-by: James Morse
Signed-off-by: Borislav Petkov
Reviewed-by: Jamie Iles
Reviewed-by: Reinette Chatre
Tested-by: Babu Moger
Link: https://lkml.kernel.org/r/20210728170637.25610-7-james.morse@arm.com
---
 arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 23 +++++++++++++++--------
 arch/x86/kernel/cpu/resctrl/rdtgroup.c    | 18 ++++++++++++------
 2 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 08eef53..405b99d 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -287,10 +287,12 @@ static int rdtgroup_parse_resource(char *resname, char *tok,
 				   struct rdtgroup *rdtgrp)
 {
 	struct rdt_hw_resource *hw_res;
+	struct resctrl_schema *s;
 	struct rdt_resource *r;
 
-	for_each_alloc_enabled_rdt_resource(r) {
-		hw_res = resctrl_to_arch_res(r);
+	list_for_each_entry(s, &resctrl_schema_all, list) {
+		r = s->res;
+		hw_res = resctrl_to_arch_res(s->res);
 		if (!strcmp(resname, r->name) && rdtgrp->closid < hw_res->num_closid)
 			return parse_line(tok, r, rdtgrp);
 	}
@@ -301,6 +303,7 @@ static int rdtgroup_parse_resource(char *resname, char *tok,
 ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
 				char *buf, size_t nbytes, loff_t off)
 {
+	struct resctrl_schema *s;
 	struct rdtgroup *rdtgrp;
 	struct rdt_domain *dom;
 	struct rdt_resource *r;
@@ -331,8 +334,8 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
 		goto out;
 	}
 
-	for_each_alloc_enabled_rdt_resource(r) {
-		list_for_each_entry(dom, &r->domains, list)
+	list_for_each_entry(s, &resctrl_schema_all, list) {
+		list_for_each_entry(dom, &s->res->domains, list)
 			dom->have_new_ctrl = false;
 	}
 
@@ -353,7 +356,8 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
 		goto out;
 	}
 
-	for_each_alloc_enabled_rdt_resource(r) {
+	list_for_each_entry(s, &resctrl_schema_all, list) {
+		r = s->res;
 		ret = update_domains(r, rdtgrp->closid);
 		if (ret)
 			goto out;
@@ -401,6 +405,7 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
 			   struct seq_file *s, void *v)
 {
 	struct rdt_hw_resource *hw_res;
+	struct resctrl_schema *schema;
 	struct rdtgroup *rdtgrp;
 	struct rdt_resource *r;
 	int ret = 0;
@@ -409,8 +414,10 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
 	rdtgrp = rdtgroup_kn_lock_live(of->kn);
 	if (rdtgrp) {
 		if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
-			for_each_alloc_enabled_rdt_resource(r)
+			list_for_each_entry(schema, &resctrl_schema_all, list) {
+				r = schema->res;
 				seq_printf(s, "%s:uninitialized\n", r->name);
+			}
 		} else if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
 			if (!rdtgrp->plr->d) {
 				rdt_last_cmd_clear();
@@ -424,8 +431,8 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
 			}
 		} else {
 			closid = rdtgrp->closid;
-			for_each_alloc_enabled_rdt_resource(r) {
-				hw_res = resctrl_to_arch_res(r);
+			list_for_each_entry(schema, &resctrl_schema_all, list) {
+				hw_res = resctrl_to_arch_res(schema->res);
 				if (closid < hw_res->num_closid)
 					show_doms(s, r, closid);
 			}
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index d7fd071..7502b7d 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -104,12 +104,12 @@ int closids_supported(void)
 static void closid_init(void)
 {
 	struct rdt_hw_resource *hw_res;
-	struct rdt_resource *r;
+	struct resctrl_schema *s;
 	int rdt_min_closid = 32;
 
 	/* Compute rdt_min_closid across all resources */
-	for_each_alloc_enabled_rdt_resource(r) {
-		hw_res = resctrl_to_arch_res(r);
+	list_for_each_entry(s, &resctrl_schema_all, list) {
+		hw_res = resctrl_to_arch_res(s->res);
 		rdt_min_closid = min(rdt_min_closid, hw_res->num_closid);
 	}
 
@@ -1276,11 +1276,13 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
 {
 	struct rdt_hw_domain *hw_dom;
 	int closid = rdtgrp->closid;
+	struct resctrl_schema *s;
 	struct rdt_resource *r;
 	bool has_cache = false;
 	struct rdt_domain *d;
 
-	for_each_alloc_enabled_rdt_resource(r) {
+	list_for_each_entry(s, &resctrl_schema_all, list) {
+		r = s->res;
 		if (r->rid == RDT_RESOURCE_MBA)
 			continue;
 		has_cache = true;
@@ -1418,6 +1420,7 @@ unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r,
 static int rdtgroup_size_show(struct kernfs_open_file *of,
 			      struct seq_file *s, void *v)
 {
+	struct resctrl_schema *schema;
 	struct rdt_hw_domain *hw_dom;
 	struct rdtgroup *rdtgrp;
 	struct rdt_resource *r;
@@ -1449,7 +1452,8 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
 		goto out;
 	}
 
-	for_each_alloc_enabled_rdt_resource(r) {
+	list_for_each_entry(schema, &resctrl_schema_all, list) {
+		r = schema->res;
 		sep = false;
 		seq_printf(s, "%*s:", max_name_width, r->name);
 		list_for_each_entry(d, &r->domains, list) {
@@ -2815,10 +2819,12 @@ static void rdtgroup_init_mba(struct rdt_resource *r)
 /* Initialize the RDT group's allocations. */
 static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
 {
+	struct resctrl_schema *s;
 	struct rdt_resource *r;
 	int ret;
 
-	for_each_alloc_enabled_rdt_resource(r) {
+	list_for_each_entry(s, &resctrl_schema_all, list) {
+		r = s->res;
 		if (r->rid == RDT_RESOURCE_MBA) {
 			rdtgroup_init_mba(r);
 		} else {
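
As a rough illustration of the pattern the patch adopts, here is a minimal
user-space sketch (not kernel code; the *_model structs and the example
values are made up for the illustration) of why per-request code walks the
schema list rather than the resource array: once CDP is merged, two
schemata such as L3CODE and L3DATA share one backing resource, and only
the schema walk visits each user-visible schema exactly once.

/*
 * Illustrative sketch only -- not the kernel implementation. It models
 * the shape of the change: each schema carries a pointer to its backing
 * resource (like s->res in the patch), and two schemata may point at the
 * same resource.
 */
#include <stddef.h>
#include <stdio.h>

struct rdt_resource_model {		/* stand-in for struct rdt_resource */
	const char *name;
	unsigned int num_closid;
};

struct resctrl_schema_model {		/* stand-in for struct resctrl_schema */
	const char *schema_name;
	struct rdt_resource_model *res;	/* like s->res in the patch */
};

int main(void)
{
	struct rdt_resource_model l3 = { .name = "L3", .num_closid = 16 };
	struct rdt_resource_model mb = { .name = "MB", .num_closid = 8 };

	/*
	 * One entry per schema; L3CODE and L3DATA share the L3 resource,
	 * which a walk over the resource array cannot express.
	 */
	struct resctrl_schema_model schemas[] = {
		{ .schema_name = "L3CODE", .res = &l3 },
		{ .schema_name = "L3DATA", .res = &l3 },
		{ .schema_name = "MB",     .res = &mb },
	};

	for (size_t i = 0; i < sizeof(schemas) / sizeof(schemas[0]); i++)
		printf("%-7s -> resource %s (%u CLOSIDs)\n",
		       schemas[i].schema_name, schemas[i].res->name,
		       schemas[i].res->num_closid);

	return 0;
}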