2021-05-19 20:17:03

by James Morse

Subject: [PATCH v3 00/24] x86/resctrl: Merge the CDP resources

Hi folks,

Thanks to Reinette's comments on v2. The major change in v3 is a
reshuffle of all the commit messages. One patch got merged into its
parent, and the msr_param range change was pulled out into its own
patch. Otherwise changes are noted in the commit messages.
----

This series re-folds the resctrl code so that the behaviour of the CDP
resources (L3CODE et al.) is all contained in the filesystem parts, with
a minimal amount of arch-specific code.

Arm has CPU support for dividing caches into portions, and for applying
bandwidth limits at various points in the SoC. The collective term for
these features is MPAM: Memory System Resource Partitioning and
Monitoring.

MPAM is similar enough to Intel RDT that it should use the de facto
Linux interface: resctrl. This filesystem currently lives under
arch/x86, and is tightly coupled to the architecture.
Ultimately, my plan is to split the existing resctrl code up to have an
arch<->fs abstraction, then move all the bits out to fs/resctrl. From there
MPAM can be wired up.

x86 may only have two resources with cache controls (L2 and L3), but it
has extra copies for CDP: L{2,3}{CODE,DATA}, which are marked as enabled
if CDP is enabled for the corresponding cache.

MPAM has an equivalent feature to CDP, but it is a property of the CPU,
not the cache. Resctrl needs to keep x86's odd/even behaviour, as that
is the ABI, but this isn't how the MPAM hardware works. It is entirely
possible that an in-kernel user of MPAM would not be using CDP, whereas
resctrl is.
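
For reference, the odd/even mapping this refers to is roughly the
following (a sketch of the index helper added later in this series by
"Calculate the index from the configuration type"; the name and exact
shape here are illustrative, not the patch's code):

	static inline u32 cdp_ctrl_index(u32 closid, enum resctrl_conf_type t)
	{
		switch (t) {
		case CDP_CODE:
			return closid * 2 + 1;	/* odd entries hold code masks */
		case CDP_DATA:
			return closid * 2;	/* even entries hold data masks */
		case CDP_NONE:
			return closid;
		}
		return closid;
	}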

Pretending L3CODE and L3DATA are entirely separate resources is a neat
trick, but it is specific to x86. It also leaves the arch code in
control of various parts of the filesystem ABI: the resource names, and
the way the schemata are parsed. Allowing these to vary between
architectures is bad for user space.

This series collapses the CODE/DATA resources, moving all the
user-visible resctrl ABI into what becomes the filesystem code. CDP
becomes the type of configuration being applied to a cache. This is done
by adding a struct resctrl_schema to the parts of resctrl that will move
to fs. It holds the arch-code resource that is in use for this schema,
along with other properties such as the name and whether the
configuration being applied is CODE/DATA/BOTH.
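
As a rough sketch, the shape this takes is something like the following
(field names and sizes here are illustrative; the real definition is
built up over the series):

	struct resctrl_schema {
		struct list_head	list;		/* entry in resctrl_schema_all */
		char			name[8];	/* e.g. "L3", "L3CODE" */
		enum resctrl_conf_type	conf_type;	/* CODE/DATA/BOTH */
		struct rdt_resource	*res;		/* arch resource backing it */
		u32			num_closid;	/* halved when CDP is enabled */
	};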

This lets us fold the extra resources out of the arch code so that they
don't need to be duplicated if the equivalent feature to CDP is missing, or
implemented in a different way.


The first two patches split the resource and domain structs into an
arch-specific 'hw' portion and the part that is visible to resctrl.
Future series massage the resctrl code so there are no accesses to 'hw'
structures in the parts of resctrl that will move to fs, providing
helpers where necessary.
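
The pattern used for this split (as the second patch does for
struct rdt_domain) is to embed the filesystem-visible struct in an
arch-private wrapper, and have the arch code recover its private data
with container_of(), e.g.:

	struct rdt_hw_domain {
		struct rdt_domain	resctrl;	/* visible to filesystem code */
		u32			*ctrl_val;	/* arch-private hardware values */
		u32			*mbps_val;
	};

	static inline struct rdt_hw_domain *resctrl_to_arch_dom(struct rdt_domain *r)
	{
		return container_of(r, struct rdt_hw_domain, resctrl);
	}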

This series adds temporary scaffolding, which it removes a few patches
later. This is to allow things like the ctrlval arrays and resources to
be merged separately, which should make it easier to bisect. These
things are marked temporary, and should all be gone by the end of the
series.

This series is a little rough around the monitors; would a fake
struct resctrl_schema for the monitors simplify things, or be a source
of bugs?

This series is based on v5.12-rc2, and can be retrieved from:
git://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git mpam/resctrl_merge_cdp/v3

v2: https://lore.kernel.org/lkml/[email protected]/
v1: https://lore.kernel.org/lkml/[email protected]/

Parts were previously posted as an RFC here:
https://lore.kernel.org/lkml/[email protected]/

James Morse (24):
x86/resctrl: Split struct rdt_resource
x86/resctrl: Split struct rdt_domain
x86/resctrl: Add a separate schema list for resctrl
x86/resctrl: Pass the schema in info dir's private pointer
x86/resctrl: Label the resources with their configuration type
x86/resctrl: Walk the resctrl schema list instead of an arch list
x86/resctrl: Store the effective num_closid in the schema
x86/resctrl: Add resctrl_arch_get_num_closid()
x86/resctrl: Pass the schema to resctrl filesystem functions
x86/resctrl: Swizzle rdt_resource and resctrl_schema in
pseudo_lock_region
x86/resctrl: Move the schemata names into struct resctrl_schema
x86/resctrl: Group staged configuration into a separate struct
x86/resctrl: Allow different CODE/DATA configurations to be staged
x86/resctrl: Rename update_domains() resctrl_arch_update_domains()
x86/resctrl: Add a helper to read a closid's configuration
x86/resctrl: Add a helper to read/set the CDP configuration
x86/resctrl: Pass configuration type to resctrl_arch_get_config()
x86/resctrl: Make ctrlval arrays the same size
x86/resctrl: Apply offset correction when config is staged
x86/resctrl: Calculate the index from the configuration type
x86/resctrl: Merge the ctrl_val arrays
x86/resctrl: Remove rdt_cdp_peer_get()
x86/resctrl: Expand resctrl_arch_update_domains()'s msr_param range
x86/resctrl: Merge the CDP resources

arch/x86/kernel/cpu/resctrl/core.c | 269 +++++--------
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 164 +++++---
arch/x86/kernel/cpu/resctrl/internal.h | 234 ++++-------
arch/x86/kernel/cpu/resctrl/monitor.c | 44 ++-
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 12 +-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 447 ++++++++++++----------
include/linux/resctrl.h | 182 +++++++++
7 files changed, 758 insertions(+), 594 deletions(-)

--
2.30.2



2021-05-19 20:17:13

by James Morse

Subject: [PATCH v3 02/24] x86/resctrl: Split struct rdt_domain

resctrl is the de facto Linux ABI for SoC resource partitioning features.

To support it on another architecture, it needs to be abstracted from
the features provided by Intel RDT and AMD PQoS, and moved to /fs/.
struct rdt_resource contains a mix of architecture-private details and
properties of the filesystem interface that user-space uses.

Continue by splitting struct rdt_domain into an architecture-private
'hw' struct, which contains the common resctrl structure that would be
used by any architecture. The hardware values in ctrl_val and mbps_val
need to be accessed via helpers to allow another architecture to convert
these into a different format if necessary. After this split, filesystem
code paths touching a 'hw' struct indicate where an abstraction is
needed.

Splitting this structure only moves types around, and should not lead
to any change in behaviour.

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,
* Changed kerneldoc text above rdt_hw_domain

Changes since v1:
* Tabs space and other whitespace
* cpu becomes CPU in all comments touched
---
arch/x86/kernel/cpu/resctrl/core.c | 32 ++++++++++-------
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 10 ++++--
arch/x86/kernel/cpu/resctrl/internal.h | 43 +++++++----------------
arch/x86/kernel/cpu/resctrl/monitor.c | 8 +++--
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 29 +++++++++------
include/linux/resctrl.h | 32 ++++++++++++++++-
6 files changed, 94 insertions(+), 60 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 415d0f04efd7..aff5d0dde6c1 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -385,10 +385,11 @@ static void
mba_wrmsr_amd(struct rdt_domain *d, struct msr_param *m, struct rdt_resource *r)
{
unsigned int i;
+ struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);

for (i = m->low; i < m->high; i++)
- wrmsrl(hw_res->msr_base + i, d->ctrl_val[i]);
+ wrmsrl(hw_res->msr_base + i, hw_dom->ctrl_val[i]);
}

/*
@@ -410,21 +411,23 @@ mba_wrmsr_intel(struct rdt_domain *d, struct msr_param *m,
struct rdt_resource *r)
{
unsigned int i;
+ struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);

/* Write the delay values for mba. */
for (i = m->low; i < m->high; i++)
- wrmsrl(hw_res->msr_base + i, delay_bw_map(d->ctrl_val[i], r));
+ wrmsrl(hw_res->msr_base + i, delay_bw_map(hw_dom->ctrl_val[i], r));
}

static void
cat_wrmsr(struct rdt_domain *d, struct msr_param *m, struct rdt_resource *r)
{
unsigned int i;
+ struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);

for (i = m->low; i < m->high; i++)
- wrmsrl(hw_res->msr_base + cbm_idx(r, i), d->ctrl_val[i]);
+ wrmsrl(hw_res->msr_base + cbm_idx(r, i), hw_dom->ctrl_val[i]);
}

struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r)
@@ -510,21 +513,22 @@ void setup_default_ctrlval(struct rdt_resource *r, u32 *dc, u32 *dm)
static int domain_setup_ctrlval(struct rdt_resource *r, struct rdt_domain *d)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
+ struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
struct msr_param m;
u32 *dc, *dm;

- dc = kmalloc_array(hw_res->num_closid, sizeof(*d->ctrl_val), GFP_KERNEL);
+ dc = kmalloc_array(hw_res->num_closid, sizeof(*hw_dom->ctrl_val), GFP_KERNEL);
if (!dc)
return -ENOMEM;

- dm = kmalloc_array(hw_res->num_closid, sizeof(*d->mbps_val), GFP_KERNEL);
+ dm = kmalloc_array(hw_res->num_closid, sizeof(*hw_dom->mbps_val), GFP_KERNEL);
if (!dm) {
kfree(dc);
return -ENOMEM;
}

- d->ctrl_val = dc;
- d->mbps_val = dm;
+ hw_dom->ctrl_val = dc;
+ hw_dom->mbps_val = dm;
setup_default_ctrlval(r, dc, dm);

m.low = 0;
@@ -586,6 +590,7 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
{
int id = get_cpu_cacheinfo_id(cpu, r->cache_level);
struct list_head *add_pos = NULL;
+ struct rdt_hw_domain *hw_dom;
struct rdt_domain *d;

d = rdt_find_domain(r, id, &add_pos);
@@ -601,10 +606,11 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
return;
}

- d = kzalloc_node(sizeof(*d), GFP_KERNEL, cpu_to_node(cpu));
- if (!d)
+ hw_dom = kzalloc_node(sizeof(*hw_dom), GFP_KERNEL, cpu_to_node(cpu));
+ if (!hw_dom)
return;

+ d = &hw_dom->resctrl;
d->id = id;
cpumask_set_cpu(cpu, &d->cpu_mask);

@@ -633,6 +639,7 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
static void domain_remove_cpu(int cpu, struct rdt_resource *r)
{
int id = get_cpu_cacheinfo_id(cpu, r->cache_level);
+ struct rdt_hw_domain *hw_dom;
struct rdt_domain *d;

d = rdt_find_domain(r, id, NULL);
@@ -640,6 +647,7 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
pr_warn("Couldn't find cache id for CPU %d\n", cpu);
return;
}
+ hw_dom = resctrl_to_arch_dom(d);

cpumask_clear_cpu(cpu, &d->cpu_mask);
if (cpumask_empty(&d->cpu_mask)) {
@@ -672,12 +680,12 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
if (d->plr)
d->plr->d = NULL;

- kfree(d->ctrl_val);
- kfree(d->mbps_val);
+ kfree(hw_dom->ctrl_val);
+ kfree(hw_dom->mbps_val);
bitmap_free(d->rmid_busy_llc);
kfree(d->mbm_total);
kfree(d->mbm_local);
- kfree(d);
+ kfree(hw_dom);
return;
}

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index ab6e584c9d2d..2e7466659af3 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -238,6 +238,7 @@ static int parse_line(char *line, struct rdt_resource *r,

int update_domains(struct rdt_resource *r, int closid)
{
+ struct rdt_hw_domain *hw_dom;
struct msr_param msr_param;
cpumask_var_t cpu_mask;
struct rdt_domain *d;
@@ -254,7 +255,8 @@ int update_domains(struct rdt_resource *r, int closid)

mba_sc = is_mba_sc(r);
list_for_each_entry(d, &r->domains, list) {
- dc = !mba_sc ? d->ctrl_val : d->mbps_val;
+ hw_dom = resctrl_to_arch_dom(d);
+ dc = !mba_sc ? hw_dom->ctrl_val : hw_dom->mbps_val;
if (d->have_new_ctrl && d->new_ctrl != dc[closid]) {
cpumask_set_cpu(cpumask_any(&d->cpu_mask), cpu_mask);
dc[closid] = d->new_ctrl;
@@ -375,17 +377,19 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,

static void show_doms(struct seq_file *s, struct rdt_resource *r, int closid)
{
+ struct rdt_hw_domain *hw_dom;
struct rdt_domain *dom;
bool sep = false;
u32 ctrl_val;

seq_printf(s, "%*s:", max_name_width, r->name);
list_for_each_entry(dom, &r->domains, list) {
+ hw_dom = resctrl_to_arch_dom(dom);
if (sep)
seq_puts(s, ";");

- ctrl_val = (!is_mba_sc(r) ? dom->ctrl_val[closid] :
- dom->mbps_val[closid]);
+ ctrl_val = (!is_mba_sc(r) ? hw_dom->ctrl_val[closid] :
+ hw_dom->mbps_val[closid]);
seq_printf(s, r->format_str, dom->id, max_data_width,
ctrl_val);
sep = true;
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 43c8cf6b2b12..a511a6a66a67 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -299,44 +299,25 @@ struct mbm_state {
};

/**
- * struct rdt_domain - group of cpus sharing an RDT resource
- * @list: all instances of this resource
- * @id: unique id for this instance
- * @cpu_mask: which cpus share this resource
- * @rmid_busy_llc:
- * bitmap of which limbo RMIDs are above threshold
- * @mbm_total: saved state for MBM total bandwidth
- * @mbm_local: saved state for MBM local bandwidth
- * @mbm_over: worker to periodically read MBM h/w counters
- * @cqm_limbo: worker to periodically read CQM h/w counters
- * @mbm_work_cpu:
- * worker cpu for MBM h/w counters
- * @cqm_work_cpu:
- * worker cpu for CQM h/w counters
+ * struct rdt_hw_domain - Arch private attributes of a set of CPUs that share
+ * a resource
+ * @resctrl: Properties exposed to the resctrl file system
* @ctrl_val: array of cache or mem ctrl values (indexed by CLOSID)
* @mbps_val: When mba_sc is enabled, this holds the bandwidth in MBps
- * @new_ctrl: new ctrl value to be loaded
- * @have_new_ctrl: did user provide new_ctrl for this domain
- * @plr: pseudo-locked region (if any) associated with domain
+ *
+ * Members of this structure are accessed via helpers that provide abstraction.
*/
-struct rdt_domain {
- struct list_head list;
- int id;
- struct cpumask cpu_mask;
- unsigned long *rmid_busy_llc;
- struct mbm_state *mbm_total;
- struct mbm_state *mbm_local;
- struct delayed_work mbm_over;
- struct delayed_work cqm_limbo;
- int mbm_work_cpu;
- int cqm_work_cpu;
+struct rdt_hw_domain {
+ struct rdt_domain resctrl;
u32 *ctrl_val;
u32 *mbps_val;
- u32 new_ctrl;
- bool have_new_ctrl;
- struct pseudo_lock_region *plr;
};

+static inline struct rdt_hw_domain *resctrl_to_arch_dom(struct rdt_domain *r)
+{
+ return container_of(r, struct rdt_hw_domain, resctrl);
+}
+
/**
* struct msr_param - set a range of MSRs from a domain
* @res: The resource to use
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 685e7f86d908..76eea10f2e2c 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -418,6 +418,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
u32 closid, rmid, cur_msr, cur_msr_val, new_msr_val;
struct mbm_state *pmbm_data, *cmbm_data;
struct rdt_hw_resource *hw_r_mba;
+ struct rdt_hw_domain *hw_dom_mba;
u32 cur_bw, delta_bw, user_bw;
struct rdt_resource *r_mba;
struct rdt_domain *dom_mba;
@@ -438,11 +439,12 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
pr_warn_once("Failure to get domain for MBA update\n");
return;
}
+ hw_dom_mba = resctrl_to_arch_dom(dom_mba);

cur_bw = pmbm_data->prev_bw;
- user_bw = dom_mba->mbps_val[closid];
+ user_bw = hw_dom_mba->mbps_val[closid];
delta_bw = pmbm_data->delta_bw;
- cur_msr_val = dom_mba->ctrl_val[closid];
+ cur_msr_val = hw_dom_mba->ctrl_val[closid];

/*
* For Ctrl groups read data from child monitor groups.
@@ -479,7 +481,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)

cur_msr = hw_r_mba->msr_base + closid;
wrmsrl(cur_msr, delay_bw_map(new_msr_val, r_mba));
- dom_mba->ctrl_val[closid] = new_msr_val;
+ hw_dom_mba->ctrl_val[closid] = new_msr_val;

/*
* Delta values are updated dynamically package wise for each
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 5126a1e58d97..9a8665c8ab89 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -915,7 +915,7 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
list_for_each_entry(dom, &r->domains, list) {
if (sep)
seq_putc(seq, ';');
- ctrl = dom->ctrl_val;
+ ctrl = resctrl_to_arch_dom(dom)->ctrl_val;
sw_shareable = 0;
exclusive = 0;
seq_printf(seq, "%d=", dom->id);
@@ -1193,7 +1193,7 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d
}

/* Check for overlap with other resource groups */
- ctrl = d->ctrl_val;
+ ctrl = resctrl_to_arch_dom(d)->ctrl_val;
for (i = 0; i < closids_supported(); i++, ctrl++) {
ctrl_b = *ctrl;
mode = rdtgroup_mode_by_closid(i);
@@ -1262,6 +1262,7 @@ bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
*/
static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
{
+ struct rdt_hw_domain *hw_dom;
int closid = rdtgrp->closid;
struct rdt_resource *r;
bool has_cache = false;
@@ -1272,7 +1273,8 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
continue;
has_cache = true;
list_for_each_entry(d, &r->domains, list) {
- if (rdtgroup_cbm_overlaps(r, d, d->ctrl_val[closid],
+ hw_dom = resctrl_to_arch_dom(d);
+ if (rdtgroup_cbm_overlaps(r, d, hw_dom->ctrl_val[closid],
rdtgrp->closid, false)) {
rdt_last_cmd_puts("Schemata overlaps\n");
return false;
@@ -1404,6 +1406,7 @@ unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r,
static int rdtgroup_size_show(struct kernfs_open_file *of,
struct seq_file *s, void *v)
{
+ struct rdt_hw_domain *hw_dom;
struct rdtgroup *rdtgrp;
struct rdt_resource *r;
struct rdt_domain *d;
@@ -1438,14 +1441,15 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
sep = false;
seq_printf(s, "%*s:", max_name_width, r->name);
list_for_each_entry(d, &r->domains, list) {
+ hw_dom = resctrl_to_arch_dom(d);
if (sep)
seq_putc(s, ';');
if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
size = 0;
} else {
ctrl = (!is_mba_sc(r) ?
- d->ctrl_val[rdtgrp->closid] :
- d->mbps_val[rdtgrp->closid]);
+ hw_dom->ctrl_val[rdtgrp->closid] :
+ hw_dom->mbps_val[rdtgrp->closid]);
if (r->rid == RDT_RESOURCE_MBA)
size = ctrl;
else
@@ -1940,6 +1944,7 @@ void rdt_domain_reconfigure_cdp(struct rdt_resource *r)
static int set_mba_sc(bool mba_sc)
{
struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_MBA].resctrl;
+ struct rdt_hw_domain *hw_dom;
struct rdt_domain *d;

if (!is_mbm_enabled() || !is_mba_linear() ||
@@ -1947,8 +1952,10 @@ static int set_mba_sc(bool mba_sc)
return -EINVAL;

r->membw.mba_sc = mba_sc;
- list_for_each_entry(d, &r->domains, list)
- setup_default_ctrlval(r, d->ctrl_val, d->mbps_val);
+ list_for_each_entry(d, &r->domains, list) {
+ hw_dom = resctrl_to_arch_dom(d);
+ setup_default_ctrlval(r, hw_dom->ctrl_val, hw_dom->mbps_val);
+ }

return 0;
}
@@ -2265,6 +2272,7 @@ static int rdt_init_fs_context(struct fs_context *fc)
static int reset_all_ctrls(struct rdt_resource *r)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
+ struct rdt_hw_domain *hw_dom;
struct msr_param msr_param;
cpumask_var_t cpu_mask;
struct rdt_domain *d;
@@ -2283,10 +2291,11 @@ static int reset_all_ctrls(struct rdt_resource *r)
* from each domain to update the MSRs below.
*/
list_for_each_entry(d, &r->domains, list) {
+ hw_dom = resctrl_to_arch_dom(d);
cpumask_set_cpu(cpumask_any(&d->cpu_mask), cpu_mask);

for (i = 0; i < hw_res->num_closid; i++)
- d->ctrl_val[i] = r->default_ctrl;
+ hw_dom->ctrl_val[i] = r->default_ctrl;
}
cpu = get_cpu();
/* Update CBM on this cpu if it's in cpu_mask. */
@@ -2665,7 +2674,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct rdt_resource *r,
d->have_new_ctrl = false;
d->new_ctrl = r->cache.shareable_bits;
used_b = r->cache.shareable_bits;
- ctrl = d->ctrl_val;
+ ctrl = resctrl_to_arch_dom(d)->ctrl_val;
for (i = 0; i < closids_supported(); i++, ctrl++) {
if (closid_allocated(i) && i != closid) {
mode = rdtgroup_mode_by_closid(i);
@@ -2682,7 +2691,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct rdt_resource *r,
* with an exclusive group.
*/
if (d_cdp)
- peer_ctl = d_cdp->ctrl_val[i];
+ peer_ctl = resctrl_to_arch_dom(d_cdp)->ctrl_val[i];
else
peer_ctl = 0;
used_b |= *ctrl | peer_ctl;
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index ee92df9c7252..be6f5df78e31 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -15,7 +15,37 @@ int proc_resctrl_show(struct seq_file *m,

#endif

-struct rdt_domain;
+/**
+ * struct rdt_domain - group of CPUs sharing a resctrl resource
+ * @list: all instances of this resource
+ * @id: unique id for this instance
+ * @cpu_mask: which CPUs share this resource
+ * @new_ctrl: new ctrl value to be loaded
+ * @have_new_ctrl: did user provide new_ctrl for this domain
+ * @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold
+ * @mbm_total: saved state for MBM total bandwidth
+ * @mbm_local: saved state for MBM local bandwidth
+ * @mbm_over: worker to periodically read MBM h/w counters
+ * @cqm_limbo: worker to periodically read CQM h/w counters
+ * @mbm_work_cpu: worker CPU for MBM h/w counters
+ * @cqm_work_cpu: worker CPU for CQM h/w counters
+ * @plr: pseudo-locked region (if any) associated with domain
+ */
+struct rdt_domain {
+ struct list_head list;
+ int id;
+ struct cpumask cpu_mask;
+ u32 new_ctrl;
+ bool have_new_ctrl;
+ unsigned long *rmid_busy_llc;
+ struct mbm_state *mbm_total;
+ struct mbm_state *mbm_local;
+ struct delayed_work mbm_over;
+ struct delayed_work cqm_limbo;
+ int mbm_work_cpu;
+ int cqm_work_cpu;
+ struct pseudo_lock_region *plr;
+};

/**
* struct resctrl_cache - Cache allocation related data
--
2.30.2


2021-05-19 20:17:14

by James Morse

Subject: [PATCH v3 24/24] x86/resctrl: Merge the CDP resources

resctrl uses struct rdt_resource to describe the available hardware
resources. The domains of the CDP aliases share a single ctrl_val[]
array. The only differences between the struct rdt_hw_resource aliases
are the name and conf_type.

The name from struct rdt_hw_resource is visible to user-space. To
support another architecture, as many user-visible details as possible
should be handled in the filesystem parts of the code, which are common
to all architectures. The name and conf_type go together.

Remove conf_type and the CDP aliases. When CDP is supported and enabled,
schemata_list_create() can create two schemas from the single resource,
appending the CODE/DATA suffix to the schema name itself.
This allows alloc_ctrlval_array() and the complications around free()ing
the ctrl_val arrays to be removed.

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Removed stray conf_type that remained in the arch specific struct
* Shuffled commit message,

Changes since v1:
* rdt_get_cdp_config() is kept for its comment.
---
arch/x86/kernel/cpu/resctrl/core.c | 177 ++-----------------------
arch/x86/kernel/cpu/resctrl/internal.h | 6 -
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 122 +++++++++--------
3 files changed, 75 insertions(+), 230 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 94cfc72beccf..daaa326e5a25 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -62,7 +62,6 @@ mba_wrmsr_amd(struct rdt_domain *d, struct msr_param *m,
struct rdt_hw_resource rdt_resources_all[] = {
[RDT_RESOURCE_L3] =
{
- .conf_type = CDP_NONE,
.resctrl = {
.rid = RDT_RESOURCE_L3,
.name = "L3",
@@ -78,45 +77,8 @@ struct rdt_hw_resource rdt_resources_all[] = {
.msr_base = MSR_IA32_L3_CBM_BASE,
.msr_update = cat_wrmsr,
},
- [RDT_RESOURCE_L3DATA] =
- {
- .conf_type = CDP_DATA,
- .resctrl = {
- .rid = RDT_RESOURCE_L3DATA,
- .name = "L3DATA",
- .cache_level = 3,
- .cache = {
- .min_cbm_bits = 1,
- },
- .domains = domain_init(RDT_RESOURCE_L3DATA),
- .parse_ctrlval = parse_cbm,
- .format_str = "%d=%0*x",
- .fflags = RFTYPE_RES_CACHE,
- },
- .msr_base = MSR_IA32_L3_CBM_BASE,
- .msr_update = cat_wrmsr,
- },
- [RDT_RESOURCE_L3CODE] =
- {
- .conf_type = CDP_CODE,
- .resctrl = {
- .rid = RDT_RESOURCE_L3CODE,
- .name = "L3CODE",
- .cache_level = 3,
- .cache = {
- .min_cbm_bits = 1,
- },
- .domains = domain_init(RDT_RESOURCE_L3CODE),
- .parse_ctrlval = parse_cbm,
- .format_str = "%d=%0*x",
- .fflags = RFTYPE_RES_CACHE,
- },
- .msr_base = MSR_IA32_L3_CBM_BASE,
- .msr_update = cat_wrmsr,
- },
[RDT_RESOURCE_L2] =
{
- .conf_type = CDP_NONE,
.resctrl = {
.rid = RDT_RESOURCE_L2,
.name = "L2",
@@ -132,45 +94,8 @@ struct rdt_hw_resource rdt_resources_all[] = {
.msr_base = MSR_IA32_L2_CBM_BASE,
.msr_update = cat_wrmsr,
},
- [RDT_RESOURCE_L2DATA] =
- {
- .conf_type = CDP_DATA,
- .resctrl = {
- .rid = RDT_RESOURCE_L2DATA,
- .name = "L2DATA",
- .cache_level = 2,
- .cache = {
- .min_cbm_bits = 1,
- },
- .domains = domain_init(RDT_RESOURCE_L2DATA),
- .parse_ctrlval = parse_cbm,
- .format_str = "%d=%0*x",
- .fflags = RFTYPE_RES_CACHE,
- },
- .msr_base = MSR_IA32_L2_CBM_BASE,
- .msr_update = cat_wrmsr,
- },
- [RDT_RESOURCE_L2CODE] =
- {
- .conf_type = CDP_CODE,
- .resctrl = {
- .rid = RDT_RESOURCE_L2CODE,
- .name = "L2CODE",
- .cache_level = 2,
- .cache = {
- .min_cbm_bits = 1,
- },
- .domains = domain_init(RDT_RESOURCE_L2CODE),
- .parse_ctrlval = parse_cbm,
- .format_str = "%d=%0*x",
- .fflags = RFTYPE_RES_CACHE,
- },
- .msr_base = MSR_IA32_L2_CBM_BASE,
- .msr_update = cat_wrmsr,
- },
[RDT_RESOURCE_MBA] =
{
- .conf_type = CDP_NONE,
.resctrl = {
.rid = RDT_RESOURCE_MBA,
.name = "MB",
@@ -339,40 +264,24 @@ static void rdt_get_cache_alloc_cfg(int idx, struct rdt_resource *r)
r->alloc_enabled = true;
}

-static void rdt_get_cdp_config(int level, int type)
+static void rdt_get_cdp_config(int level)
{
- struct rdt_resource *r_l = &rdt_resources_all[level].resctrl;
- struct rdt_hw_resource *hw_res_l = resctrl_to_arch_res(r_l);
- struct rdt_resource *r = &rdt_resources_all[type].resctrl;
- struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
-
- hw_res->num_closid = hw_res_l->num_closid;
- r->cache.cbm_len = r_l->cache.cbm_len;
- r->default_ctrl = r_l->default_ctrl;
- r->cache.shareable_bits = r_l->cache.shareable_bits;
- r->data_width = (r->cache.cbm_len + 3) / 4;
- r->alloc_capable = true;
/*
* By default, CDP is disabled. CDP can be enabled by mount parameter
* "cdp" during resctrl file system mount time.
*/
- r->alloc_enabled = false;
rdt_resources_all[level].cdp_enabled = false;
- rdt_resources_all[type].cdp_enabled = false;
rdt_resources_all[level].cdp_capable = true;
- rdt_resources_all[type].cdp_capable = true;
}

static void rdt_get_cdp_l3_config(void)
{
- rdt_get_cdp_config(RDT_RESOURCE_L3, RDT_RESOURCE_L3DATA);
- rdt_get_cdp_config(RDT_RESOURCE_L3, RDT_RESOURCE_L3CODE);
+ rdt_get_cdp_config(RDT_RESOURCE_L3);
}

static void rdt_get_cdp_l2_config(void)
{
- rdt_get_cdp_config(RDT_RESOURCE_L2, RDT_RESOURCE_L2DATA);
- rdt_get_cdp_config(RDT_RESOURCE_L2, RDT_RESOURCE_L2CODE);
+ rdt_get_cdp_config(RDT_RESOURCE_L2);
}

static void
@@ -509,58 +418,6 @@ void setup_default_ctrlval(struct rdt_resource *r, u32 *dc, u32 *dm)
}
}

-static u32 *alloc_ctrlval_array(struct rdt_resource *r, struct rdt_domain *d,
- bool mba_sc)
-{
- /* these are for the underlying hardware, they may not match r/d */
- struct rdt_domain *underlying_domain;
- struct rdt_hw_resource *hw_res;
- struct rdt_hw_domain *hw_dom;
- bool remapped;
-
- switch (r->rid) {
- case RDT_RESOURCE_L3DATA:
- case RDT_RESOURCE_L3CODE:
- hw_res = &rdt_resources_all[RDT_RESOURCE_L3];
- remapped = true;
- break;
- case RDT_RESOURCE_L2DATA:
- case RDT_RESOURCE_L2CODE:
- hw_res = &rdt_resources_all[RDT_RESOURCE_L2];
- remapped = true;
- break;
- default:
- hw_res = resctrl_to_arch_res(r);
- remapped = false;
- }
-
-
- /*
- * If we changed the resource, we need to search for the underlying
- * domain. Doing this for all resources would make it tricky to add the
- * first resource, as domains aren't added to a resource list until
- * after the ctrlval arrays have been allocated.
- */
- if (remapped)
- underlying_domain = rdt_find_domain(&hw_res->resctrl, d->id,
- NULL);
- else
- underlying_domain = d;
- hw_dom = resctrl_to_arch_dom(underlying_domain);
-
- if (mba_sc) {
- if (hw_dom->mbps_val)
- return hw_dom->mbps_val;
- return kmalloc_array(hw_res->num_closid,
- sizeof(*hw_dom->mbps_val), GFP_KERNEL);
- } else {
- if (hw_dom->ctrl_val)
- return hw_dom->ctrl_val;
- return kmalloc_array(hw_res->num_closid,
- sizeof(*hw_dom->ctrl_val), GFP_KERNEL);
- }
-}
-
static int domain_setup_ctrlval(struct rdt_resource *r, struct rdt_domain *d)
{
struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
@@ -568,11 +425,13 @@ static int domain_setup_ctrlval(struct rdt_resource *r, struct rdt_domain *d)
struct msr_param m;
u32 *dc, *dm;

- dc = alloc_ctrlval_array(r, d, false);
+ dc = kmalloc_array(hw_res->num_closid, sizeof(*hw_dom->ctrl_val),
+ GFP_KERNEL);
if (!dc)
return -ENOMEM;

- dm = alloc_ctrlval_array(r, d, true);
+ dm = kmalloc_array(hw_res->num_closid, sizeof(*hw_dom->mbps_val),
+ GFP_KERNEL);
if (!dm) {
kfree(dc);
return -ENOMEM;
@@ -731,14 +590,8 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
if (d->plr)
d->plr->d = NULL;

- /* temporary: these four don't have a unique ctrlval array */
- if ((r->rid != RDT_RESOURCE_L3CODE) &&
- (r->rid != RDT_RESOURCE_L3DATA) &&
- (r->rid != RDT_RESOURCE_L2CODE) &&
- (r->rid != RDT_RESOURCE_L2DATA)) {
- kfree(hw_dom->ctrl_val);
- kfree(hw_dom->mbps_val);
- }
+ kfree(hw_dom->ctrl_val);
+ kfree(hw_dom->mbps_val);
bitmap_free(d->rmid_busy_llc);
kfree(d->mbm_total);
kfree(d->mbm_local);
@@ -1011,11 +864,7 @@ static __init void rdt_init_res_defs_intel(void)
hw_res = resctrl_to_arch_res(r);

if (r->rid == RDT_RESOURCE_L3 ||
- r->rid == RDT_RESOURCE_L3DATA ||
- r->rid == RDT_RESOURCE_L3CODE ||
- r->rid == RDT_RESOURCE_L2 ||
- r->rid == RDT_RESOURCE_L2DATA ||
- r->rid == RDT_RESOURCE_L2CODE) {
+ r->rid == RDT_RESOURCE_L2) {
r->cache.arch_has_sparse_bitmaps = false;
r->cache.arch_has_empty_bitmaps = false;
r->cache.arch_has_per_cpu_cfg = false;
@@ -1035,11 +884,7 @@ static __init void rdt_init_res_defs_amd(void)
hw_res = resctrl_to_arch_res(r);

if (r->rid == RDT_RESOURCE_L3 ||
- r->rid == RDT_RESOURCE_L3DATA ||
- r->rid == RDT_RESOURCE_L3CODE ||
- r->rid == RDT_RESOURCE_L2 ||
- r->rid == RDT_RESOURCE_L2DATA ||
- r->rid == RDT_RESOURCE_L2CODE) {
+ r->rid == RDT_RESOURCE_L2) {
r->cache.arch_has_sparse_bitmaps = true;
r->cache.arch_has_empty_bitmaps = true;
r->cache.arch_has_per_cpu_cfg = true;
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index d963a45e7aec..3ff37c7df5b4 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -364,7 +364,6 @@ struct rdt_parse_data {

/**
* struct rdt_hw_resource - arch private attributes of a resctrl resource
- * @conf_type: The type that should be used when configuring. temporary
* @resctrl: Attributes of the resource used directly by resctrl.
* @num_closid: Maximum number of closid this hardware can support,
* regardless of CDP. This is exposed via
@@ -383,7 +382,6 @@ struct rdt_parse_data {
* msr_update and msr_base.
*/
struct rdt_hw_resource {
- enum resctrl_conf_type conf_type;
struct rdt_resource resctrl;
u32 num_closid;
unsigned int msr_base;
@@ -415,11 +413,7 @@ extern struct dentry *debugfs_resctrl;

enum resctrl_res_level {
RDT_RESOURCE_L3,
- RDT_RESOURCE_L3DATA,
- RDT_RESOURCE_L3CODE,
RDT_RESOURCE_L2,
- RDT_RESOURCE_L2DATA,
- RDT_RESOURCE_L2CODE,
RDT_RESOURCE_MBA,

/* Must be the last */
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index cdf21deafd2a..4bb44713c4ac 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1880,10 +1880,10 @@ void rdt_domain_reconfigure_cdp(struct rdt_resource *r)
if (!hw_res->cdp_capable)
return;

- if (r == &rdt_resources_all[RDT_RESOURCE_L2DATA].resctrl)
+ if (r->rid == RDT_RESOURCE_L2)
l2_qos_cfg_update(&hw_res->cdp_enabled);

- if (r == &rdt_resources_all[RDT_RESOURCE_L3DATA].resctrl)
+ if (r->rid == RDT_RESOURCE_L3)
l3_qos_cfg_update(&hw_res->cdp_enabled);
}

@@ -1912,68 +1912,42 @@ static int set_mba_sc(bool mba_sc)
return 0;
}

-static int cdp_enable(int level, int data_type, int code_type)
+static int cdp_enable(int level)
{
- struct rdt_resource *r_ldata = &rdt_resources_all[data_type].resctrl;
- struct rdt_resource *r_lcode = &rdt_resources_all[code_type].resctrl;
struct rdt_resource *r_l = &rdt_resources_all[level].resctrl;
int ret;

- if (!r_l->alloc_capable || !r_ldata->alloc_capable ||
- !r_lcode->alloc_capable)
+ if (!r_l->alloc_capable)
return -EINVAL;

ret = set_cache_qos_cfg(level, true);
- if (!ret) {
- r_l->alloc_enabled = false;
- r_ldata->alloc_enabled = true;
- r_lcode->alloc_enabled = true;
+ if (!ret)
rdt_resources_all[level].cdp_enabled = true;
- rdt_resources_all[data_type].cdp_enabled = true;
- rdt_resources_all[code_type].cdp_enabled = true;
- }
+
return ret;
}

-static void cdp_disable(int level, int data_type, int code_type)
+static void cdp_disable(int level)
{
struct rdt_hw_resource *r_hw = &rdt_resources_all[level];
- struct rdt_resource *r = &r_hw->resctrl;
-
- r->alloc_enabled = r->alloc_capable;

if (r_hw->cdp_enabled) {
- rdt_resources_all[data_type].resctrl.alloc_enabled = false;
- rdt_resources_all[code_type].resctrl.alloc_enabled = false;
set_cache_qos_cfg(level, false);
r_hw->cdp_enabled = false;
- rdt_resources_all[data_type].cdp_enabled = false;
- rdt_resources_all[code_type].cdp_enabled = false;
}
}

int resctrl_arch_set_cdp_enabled(enum resctrl_res_level l, bool enable)
{
struct rdt_hw_resource *hw_res = &rdt_resources_all[l];
- enum resctrl_res_level code_type, data_type;

if (!hw_res->cdp_capable)
return -EINVAL;

- if (l == RDT_RESOURCE_L3) {
- code_type = RDT_RESOURCE_L3CODE;
- data_type = RDT_RESOURCE_L3DATA;
- } else if (l == RDT_RESOURCE_L2) {
- code_type = RDT_RESOURCE_L2CODE;
- data_type = RDT_RESOURCE_L2DATA;
- } else {
- return -EINVAL;
- }
-
if (enable)
- return cdp_enable(l, data_type, code_type);
+ return cdp_enable(l);

- cdp_disable(l, data_type, code_type);
+ cdp_disable(l);

return 0;
}
@@ -2072,40 +2046,72 @@ static int rdt_enable_ctx(struct rdt_fs_context *ctx)
return ret;
}

-static int schemata_list_create(void)
+static int schemata_list_add(struct rdt_resource *r, enum resctrl_conf_type type)
{
struct resctrl_schema *s;
- struct rdt_resource *r;
+ const char *suffix = "";
int ret, cl;

- for_each_alloc_enabled_rdt_resource(r) {
- s = kzalloc(sizeof(*s), GFP_KERNEL);
- if (!s)
- return -ENOMEM;
-
- s->res = r;
- s->conf_type = resctrl_to_arch_res(r)->conf_type;
- s->num_closid = resctrl_arch_get_num_closid(r);
- if (resctrl_arch_get_cdp_enabled(r->rid))
- s->num_closid /= 2;
-
- ret = snprintf(s->name, sizeof(s->name), r->name);
- if (ret >= sizeof(s->name)) {
- kfree(s);
- return -EINVAL;
- }
+ s = kzalloc(sizeof(*s), GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;

- cl = strlen(s->name);
- if (cl > max_name_width)
- max_name_width = cl;
+ s->res = r;
+ s->num_closid = resctrl_arch_get_num_closid(r);
+ if (resctrl_arch_get_cdp_enabled(r->rid))
+ s->num_closid /= 2;

- INIT_LIST_HEAD(&s->list);
- list_add(&s->list, &resctrl_schema_all);
+ s->conf_type = type;
+ switch (type) {
+ case CDP_CODE:
+ suffix = "CODE";
+ break;
+ case CDP_DATA:
+ suffix = "DATA";
+ break;
+ case CDP_NONE:
+ suffix = "";
+ break;
}

+ ret = snprintf(s->name, sizeof(s->name), "%s%s", r->name, suffix);
+ if (ret >= sizeof(s->name)) {
+ kfree(s);
+ return -EINVAL;
+ }
+
+ cl = strlen(s->name);
+ if (cl > max_name_width)
+ max_name_width = cl;
+
+ INIT_LIST_HEAD(&s->list);
+ list_add(&s->list, &resctrl_schema_all);
+
return 0;
}

+static int schemata_list_create(void)
+{
+ struct rdt_resource *r;
+ int ret = 0;
+
+ for_each_alloc_enabled_rdt_resource(r) {
+ if (resctrl_arch_get_cdp_enabled(r->rid)) {
+ ret = schemata_list_add(r, CDP_CODE);
+ if (ret)
+ break;
+
+ ret = schemata_list_add(r, CDP_DATA);
+ } else
+ ret = schemata_list_add(r, CDP_NONE);
+
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
static void schemata_list_destroy(void)
{
struct resctrl_schema *s, *tmp;
--
2.30.2


2021-05-19 20:17:18

by James Morse

Subject: [PATCH v3 04/24] x86/resctrl: Pass the schema in info dir's private pointer

Many of resctrl's per-schema files return a value from struct
rdt_resource, which they take as their 'priv' pointer.

Moving properties that resctrl exposes to user-space (e.g. the name of
the schema) into the core 'fs' code means some of the functions that
back the filesystem need the schema struct (which is where those
properties are moved), but they currently take struct rdt_resource. For
example, once the CDP resources are merged, struct rdt_resource will no
longer reflect all the properties of the schema.
For the info dirs that represent a control, the information needed will
be accessed via struct resctrl_schema, as this is how the resource is
being used. For the monitors, it is still struct rdt_resource, as the
monitors aren't described by a schema.
This difference means the type of the private pointers varies between
control and monitor info dirs.

Change the 'priv' pointer to point to struct resctrl_schema for the
per-schema files that represent a control. The type can be determined
from the fflags field: if the flags are RF_MON_INFO, it's a struct
rdt_resource; if the flags are RF_CTRL_INFO, it's a struct
resctrl_schema. No entry in res_common_files[] has both flags.
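
For example (an illustrative sketch, not code from this patch; 'rft'
stands for the matching res_common_files[] entry), a reader of the
private pointer could pick the type like this:

	void *priv = of->kn->parent->priv;

	if (rft->fflags & RF_CTRL_INFO) {
		struct resctrl_schema *s = priv;
		/* per-schema properties, e.g. s->name, s->num_closid */
	} else if (rft->fflags & RF_MON_INFO) {
		struct rdt_resource *r = priv;
		/* monitor properties live on the resource */
	}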

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,

Changes since v1:
* Added comment above removed for_each_alloc_enabled_rdt_resource() to hint
at symmetry.
---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 38 +++++++++++++++++---------
1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 14ea1212f476..6ad9df322282 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -848,7 +848,8 @@ static int rdt_last_cmd_status_show(struct kernfs_open_file *of,
static int rdt_num_closids_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;
struct rdt_hw_resource *hw_res;

hw_res = resctrl_to_arch_res(r);
@@ -859,7 +860,8 @@ static int rdt_num_closids_show(struct kernfs_open_file *of,
static int rdt_default_ctrl_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

seq_printf(seq, "%x\n", r->default_ctrl);
return 0;
@@ -868,7 +870,8 @@ static int rdt_default_ctrl_show(struct kernfs_open_file *of,
static int rdt_min_cbm_bits_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

seq_printf(seq, "%u\n", r->cache.min_cbm_bits);
return 0;
@@ -877,7 +880,8 @@ static int rdt_min_cbm_bits_show(struct kernfs_open_file *of,
static int rdt_shareable_bits_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

seq_printf(seq, "%x\n", r->cache.shareable_bits);
return 0;
@@ -900,13 +904,14 @@ static int rdt_shareable_bits_show(struct kernfs_open_file *of,
static int rdt_bit_usage_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
/*
* Use unsigned long even though only 32 bits are used to ensure
* test_bit() is used safely.
*/
unsigned long sw_shareable = 0, hw_shareable = 0;
unsigned long exclusive = 0, pseudo_locked = 0;
+ struct rdt_resource *r = s->res;
struct rdt_domain *dom;
int i, hwb, swb, excl, psl;
enum rdtgrp_mode mode;
@@ -978,7 +983,8 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
static int rdt_min_bw_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

seq_printf(seq, "%u\n", r->membw.min_bw);
return 0;
@@ -1009,7 +1015,8 @@ static int rdt_mon_features_show(struct kernfs_open_file *of,
static int rdt_bw_gran_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

seq_printf(seq, "%u\n", r->membw.bw_gran);
return 0;
@@ -1018,7 +1025,8 @@ static int rdt_bw_gran_show(struct kernfs_open_file *of,
static int rdt_delay_linear_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

seq_printf(seq, "%u\n", r->membw.delay_linear);
return 0;
@@ -1038,7 +1046,8 @@ static int max_threshold_occ_show(struct kernfs_open_file *of,
static int rdt_thread_throttle_mode_show(struct kernfs_open_file *of,
struct seq_file *seq, void *v)
{
- struct rdt_resource *r = of->kn->parent->priv;
+ struct resctrl_schema *s = of->kn->parent->priv;
+ struct rdt_resource *r = s->res;

if (r->membw.throttle_mode == THREAD_THROTTLE_PER_THREAD)
seq_puts(seq, "per-thread\n");
@@ -1771,14 +1780,14 @@ int rdtgroup_kn_mode_restore(struct rdtgroup *r, const char *name,
return ret;
}

-static int rdtgroup_mkdir_info_resdir(struct rdt_resource *r, char *name,
+static int rdtgroup_mkdir_info_resdir(void *priv, char *name,
unsigned long fflags)
{
struct kernfs_node *kn_subdir;
int ret;

kn_subdir = kernfs_create_dir(kn_info, name,
- kn_info->mode, r);
+ kn_info->mode, priv);
if (IS_ERR(kn_subdir))
return PTR_ERR(kn_subdir);

@@ -1795,6 +1804,7 @@ static int rdtgroup_mkdir_info_resdir(struct rdt_resource *r, char *name,

static int rdtgroup_create_info_dir(struct kernfs_node *parent_kn)
{
+ struct resctrl_schema *s;
struct rdt_resource *r;
unsigned long fflags;
char name[32];
@@ -1809,9 +1819,11 @@ static int rdtgroup_create_info_dir(struct kernfs_node *parent_kn)
if (ret)
goto out_destroy;

- for_each_alloc_enabled_rdt_resource(r) {
+ /* loop over enabled controls, these are all alloc_enabled */
+ list_for_each_entry(s, &resctrl_schema_all, list) {
+ r = s->res;
fflags = r->fflags | RF_CTRL_INFO;
- ret = rdtgroup_mkdir_info_resdir(r, r->name, fflags);
+ ret = rdtgroup_mkdir_info_resdir(s, r->name, fflags);
if (ret)
goto out_destroy;
}
--
2.30.2


2021-05-19 20:17:45

by James Morse

Subject: [PATCH v3 09/24] x86/resctrl: Pass the schema to resctrl filesystem functions

Once the CDP resources are merged, there will be two struct
resctrl_schema for one struct rdt_resource. CDP becomes a type of
configuration that belongs to the schema.

Helpers like rdtgroup_cbm_overlaps() need access to the schema to
query the configuration (or configurations) based on schema properties.

Change these functions to take a struct resctrl_schema instead of the
struct rdt_resource. All the modified functions are part of the
filesystem code that will move to fs/resctrl once it is possible to
support a second architecture.

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,

Changes since v1:
* split from a larger patch
---
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 23 +++++++++++++----------
arch/x86/kernel/cpu/resctrl/internal.h | 6 +++---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 19 +++++++++++--------
include/linux/resctrl.h | 3 ++-
4 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index fcd6ca73ac41..dbbdd9f275e9 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -57,9 +57,10 @@ static bool bw_validate(char *buf, unsigned long *data, struct rdt_resource *r)
return true;
}

-int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r,
+int parse_bw(struct rdt_parse_data *data, struct resctrl_schema *s,
struct rdt_domain *d)
{
+ struct rdt_resource *r = s->res;
unsigned long bw_val;

if (d->have_new_ctrl) {
@@ -125,10 +126,11 @@ static bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r)
* Read one cache bit mask (hex). Check that it is valid for the current
* resource type.
*/
-int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
+int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
struct rdt_domain *d)
{
struct rdtgroup *rdtgrp = data->rdtgrp;
+ struct rdt_resource *r = s->res;
u32 cbm_val;

if (d->have_new_ctrl) {
@@ -160,12 +162,12 @@ int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
* The CBM may not overlap with the CBM of another closid if
* either is exclusive.
*/
- if (rdtgroup_cbm_overlaps(r, d, cbm_val, rdtgrp->closid, true)) {
+ if (rdtgroup_cbm_overlaps(s, d, cbm_val, rdtgrp->closid, true)) {
rdt_last_cmd_puts("Overlaps with exclusive group\n");
return -EINVAL;
}

- if (rdtgroup_cbm_overlaps(r, d, cbm_val, rdtgrp->closid, false)) {
+ if (rdtgroup_cbm_overlaps(s, d, cbm_val, rdtgrp->closid, false)) {
if (rdtgrp->mode == RDT_MODE_EXCLUSIVE ||
rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
rdt_last_cmd_puts("Overlaps with other group\n");
@@ -185,9 +187,10 @@ int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
* separated by ";". The "id" is in decimal, and must match one of
* the "id"s for this resource.
*/
-static int parse_line(char *line, struct rdt_resource *r,
+static int parse_line(char *line, struct resctrl_schema *s,
struct rdtgroup *rdtgrp)
{
+ struct rdt_resource *r = s->res;
struct rdt_parse_data data;
char *dom = NULL, *id;
struct rdt_domain *d;
@@ -213,7 +216,7 @@ static int parse_line(char *line, struct rdt_resource *r,
if (d->id == dom_id) {
data.buf = dom;
data.rdtgrp = rdtgrp;
- if (r->parse_ctrlval(&data, r, d))
+ if (r->parse_ctrlval(&data, s, d))
return -EINVAL;
if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
/*
@@ -292,7 +295,7 @@ static int rdtgroup_parse_resource(char *resname, char *tok,
list_for_each_entry(s, &resctrl_schema_all, list) {
r = s->res;
if (!strcmp(resname, r->name) && rdtgrp->closid < s->num_closid)
- return parse_line(tok, r, rdtgrp);
+ return parse_line(tok, s, rdtgrp);
}
rdt_last_cmd_printf("Unknown or unsupported resource name '%s'\n", resname);
return -EINVAL;
@@ -377,8 +380,9 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
return ret ?: nbytes;
}

-static void show_doms(struct seq_file *s, struct rdt_resource *r, int closid)
+static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int closid)
{
+ struct rdt_resource *r = schema->res;
struct rdt_hw_domain *hw_dom;
struct rdt_domain *dom;
bool sep = false;
@@ -429,9 +433,8 @@ int rdtgroup_schemata_show(struct kernfs_open_file *of,
} else {
closid = rdtgrp->closid;
list_for_each_entry(schema, &resctrl_schema_all, list) {
- r = schema->res;
if (closid < schema->num_closid)
- show_doms(s, r, closid);
+ show_doms(s, schema, closid);
}
}
} else {
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index f2f9d7fe6168..34e90bfca2d9 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -396,9 +396,9 @@ static inline struct rdt_hw_resource *resctrl_to_arch_res(struct rdt_resource *r
return container_of(r, struct rdt_hw_resource, resctrl);
}

-int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
+int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
struct rdt_domain *d);
-int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r,
+int parse_bw(struct rdt_parse_data *data, struct resctrl_schema *s,
struct rdt_domain *d);

extern struct mutex rdtgroup_mutex;
@@ -501,7 +501,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off);
int rdtgroup_schemata_show(struct kernfs_open_file *of,
struct seq_file *s, void *v);
-bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
+bool rdtgroup_cbm_overlaps(struct resctrl_schema *s, struct rdt_domain *d,
unsigned long cbm, int closid, bool exclusive);
unsigned int rdtgroup_cbm_to_size(struct rdt_resource *r, struct rdt_domain *d,
unsigned long cbm);
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 39ec0344963e..301a8acfaaf3 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1221,7 +1221,7 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d

/**
* rdtgroup_cbm_overlaps - Does CBM overlap with other use of hardware
- * @r: Resource to which domain instance @d belongs.
+ * @s: Schema for the resource to which domain instance @d belongs.
* @d: The domain instance for which @closid is being tested.
* @cbm: Capacity bitmask being tested.
* @closid: Intended closid for @cbm.
@@ -1239,9 +1239,10 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d
*
* Return: true if CBM overlap detected, false if there is no overlap
*/
-bool rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
+bool rdtgroup_cbm_overlaps(struct resctrl_schema *s, struct rdt_domain *d,
unsigned long cbm, int closid, bool exclusive)
{
+ struct rdt_resource *r = s->res;
struct rdt_resource *r_cdp;
struct rdt_domain *d_cdp;

@@ -1282,7 +1283,8 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
has_cache = true;
list_for_each_entry(d, &r->domains, list) {
hw_dom = resctrl_to_arch_dom(d);
- if (rdtgroup_cbm_overlaps(r, d, hw_dom->ctrl_val[closid],
+ if (rdtgroup_cbm_overlaps(s, d,
+ hw_dom->ctrl_val[closid],
rdtgrp->closid, false)) {
rdt_last_cmd_puts("Schemata overlaps\n");
return false;
@@ -2712,11 +2714,12 @@ static u32 cbm_ensure_valid(u32 _val, struct rdt_resource *r)
* Set the RDT domain up to start off with all usable allocations. That is,
* all shareable and unused bits. All-zero CBM is invalid.
*/
-static int __init_one_rdt_domain(struct rdt_domain *d, struct rdt_resource *r,
+static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
u32 closid)
{
struct rdt_resource *r_cdp = NULL;
struct rdt_domain *d_cdp = NULL;
+ struct rdt_resource *r = s->res;
u32 used_b = 0, unused_b = 0;
unsigned long tmp_cbm;
enum rdtgrp_mode mode;
@@ -2786,13 +2789,13 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct rdt_resource *r,
* If there are no more shareable bits available on any domain then
* the entire allocation will fail.
*/
-static int rdtgroup_init_cat(struct rdt_resource *r, u32 closid)
+static int rdtgroup_init_cat(struct resctrl_schema *s, u32 closid)
{
struct rdt_domain *d;
int ret;

- list_for_each_entry(d, &r->domains, list) {
- ret = __init_one_rdt_domain(d, r, closid);
+ list_for_each_entry(d, &s->res->domains, list) {
+ ret = __init_one_rdt_domain(d, s, closid);
if (ret < 0)
return ret;
}
@@ -2823,7 +2826,7 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
if (r->rid == RDT_RESOURCE_MBA) {
rdtgroup_init_mba(r);
} else {
- ret = rdtgroup_init_cat(r, rdtgrp->closid);
+ ret = rdtgroup_init_cat(s, rdtgrp->closid);
if (ret < 0)
return ret;
}
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 25cd6d788fe0..aa06f32c5820 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -117,6 +117,7 @@ struct resctrl_membw {
};

struct rdt_parse_data;
+struct resctrl_schema;

/**
* struct rdt_resource - attributes of a resctrl resource
@@ -155,7 +156,7 @@ struct rdt_resource {
u32 default_ctrl;
const char *format_str;
int (*parse_ctrlval)(struct rdt_parse_data *data,
- struct rdt_resource *r,
+ struct resctrl_schema *s,
struct rdt_domain *d);
struct list_head evt_list;
unsigned long fflags;
--
2.30.2


2021-05-19 20:17:47

by James Morse

Subject: [PATCH v3 15/24] x86/resctrl: Add a helper to read a closid's configuration

Functions like show_doms() reach into the architecture's private
structure to retrieve the configuration from struct rdt_hw_domain.

The hardware configuration may look completely different to the
values resctrl gets from user-space. The staged configuration
and resctrl_arch_update_domains() allow the architecture to
convert or translate these values.

Resctrl shouldn't read or write the ctrl_val[] values directly.
Add a helper to read the current configuration. This will allow another
architecture to scale the bitmaps if necessary, and possibly use controls
that don't take the user-space control format at all.
Of the remaining functions that access ctrl_val[] directly,
apply_config() is part of the architecture-specific code and is called
via resctrl_arch_update_domains(). reset_all_ctrls() will become an
architecture-specific helper.
update_mba_bw() manipulates both ctrl_val[] and mbps_val[], as well as
the hardware. The mbps_val[] that matches the mba_sc state of the
resource is changed, but the other is left unchanged. Abstracting this
is the subject of later patches that affect set_mba_sc() too.

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,

---
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 16 ++++++---
arch/x86/kernel/cpu/resctrl/monitor.c | 6 +++-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 43 ++++++++++-------------
include/linux/resctrl.h | 2 ++
4 files changed, 37 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 271f5d28412a..1cd54402b02a 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -401,22 +401,30 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
return ret ?: nbytes;
}

+void resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
+ u32 closid, u32 *value)
+{
+ struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
+
+ if (!is_mba_sc(r))
+ *value = hw_dom->ctrl_val[closid];
+ else
+ *value = hw_dom->mbps_val[closid];
+}
+
static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int closid)
{
struct rdt_resource *r = schema->res;
- struct rdt_hw_domain *hw_dom;
struct rdt_domain *dom;
bool sep = false;
u32 ctrl_val;

seq_printf(s, "%*s:", max_name_width, schema->name);
list_for_each_entry(dom, &r->domains, list) {
- hw_dom = resctrl_to_arch_dom(dom);
if (sep)
seq_puts(s, ";");

- ctrl_val = (!is_mba_sc(r) ? hw_dom->ctrl_val[closid] :
- hw_dom->mbps_val[closid]);
+ resctrl_arch_get_config(r, dom, closid, &ctrl_val);
seq_printf(s, r->format_str, dom->id, max_data_width,
ctrl_val);
sep = true;
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 76eea10f2e2c..647c0be76ea6 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -442,8 +442,12 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
hw_dom_mba = resctrl_to_arch_dom(dom_mba);

cur_bw = pmbm_data->prev_bw;
- user_bw = hw_dom_mba->mbps_val[closid];
+ resctrl_arch_get_config(r_mba, dom_mba, closid, &user_bw);
delta_bw = pmbm_data->delta_bw;
+ /*
+ * resctrl_arch_get_config() chooses the mbps/ctrl value to return
+ * based on is_mba_sc(). For now, reach into the hw_dom.
+ */
cur_msr_val = hw_dom_mba->ctrl_val[closid];

/*
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 1dec9afd9ff4..57e4d793f576 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -910,27 +910,27 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
int i, hwb, swb, excl, psl;
enum rdtgrp_mode mode;
bool sep = false;
- u32 *ctrl;
+ u32 ctrl_val;

mutex_lock(&rdtgroup_mutex);
hw_shareable = r->cache.shareable_bits;
list_for_each_entry(dom, &r->domains, list) {
if (sep)
seq_putc(seq, ';');
- ctrl = resctrl_to_arch_dom(dom)->ctrl_val;
sw_shareable = 0;
exclusive = 0;
seq_printf(seq, "%d=", dom->id);
- for (i = 0; i < closids_supported(); i++, ctrl++) {
+ for (i = 0; i < closids_supported(); i++) {
if (!closid_allocated(i))
continue;
+ resctrl_arch_get_config(r, dom, i, &ctrl_val);
mode = rdtgroup_mode_by_closid(i);
switch (mode) {
case RDT_MODE_SHAREABLE:
- sw_shareable |= *ctrl;
+ sw_shareable |= ctrl_val;
break;
case RDT_MODE_EXCLUSIVE:
- exclusive |= *ctrl;
+ exclusive |= ctrl_val;
break;
case RDT_MODE_PSEUDO_LOCKSETUP:
/*
@@ -1188,7 +1188,6 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d
{
enum rdtgrp_mode mode;
unsigned long ctrl_b;
- u32 *ctrl;
int i;

/* Check for any overlap with regions used by hardware directly */
@@ -1199,9 +1198,8 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d
}

/* Check for overlap with other resource groups */
- ctrl = resctrl_to_arch_dom(d)->ctrl_val;
- for (i = 0; i < closids_supported(); i++, ctrl++) {
- ctrl_b = *ctrl;
+ for (i = 0; i < closids_supported(); i++) {
+ resctrl_arch_get_config(r, d, i, (u32 *)&ctrl_b);
mode = rdtgroup_mode_by_closid(i);
if (closid_allocated(i) && i != closid &&
mode != RDT_MODE_PSEUDO_LOCKSETUP) {
@@ -1269,12 +1267,12 @@ bool rdtgroup_cbm_overlaps(struct resctrl_schema *s, struct rdt_domain *d,
*/
static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
{
- struct rdt_hw_domain *hw_dom;
int closid = rdtgrp->closid;
struct resctrl_schema *s;
struct rdt_resource *r;
bool has_cache = false;
struct rdt_domain *d;
+ u32 ctrl;

list_for_each_entry(s, &resctrl_schema_all, list) {
r = s->res;
@@ -1282,10 +1280,8 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
continue;
has_cache = true;
list_for_each_entry(d, &r->domains, list) {
- hw_dom = resctrl_to_arch_dom(d);
- if (rdtgroup_cbm_overlaps(s, d,
- hw_dom->ctrl_val[closid],
- rdtgrp->closid, false)) {
+ resctrl_arch_get_config(r, d, closid, &ctrl);
+ if (rdtgroup_cbm_overlaps(s, d, ctrl, closid, false)) {
rdt_last_cmd_puts("Schemata overlaps\n");
return false;
}
@@ -1417,7 +1413,6 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
struct seq_file *s, void *v)
{
struct resctrl_schema *schema;
- struct rdt_hw_domain *hw_dom;
struct rdtgroup *rdtgrp;
struct rdt_resource *r;
struct rdt_domain *d;
@@ -1453,15 +1448,13 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
sep = false;
seq_printf(s, "%*s:", max_name_width, schema->name);
list_for_each_entry(d, &r->domains, list) {
- hw_dom = resctrl_to_arch_dom(d);
if (sep)
seq_putc(s, ';');
if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
size = 0;
} else {
- ctrl = (!is_mba_sc(r) ?
- hw_dom->ctrl_val[rdtgrp->closid] :
- hw_dom->mbps_val[rdtgrp->closid]);
+ resctrl_arch_get_config(r, d, rdtgrp->closid,
+ &ctrl);
if (r->rid == RDT_RESOURCE_MBA)
size = ctrl;
else
@@ -2736,7 +2729,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
u32 used_b = 0, unused_b = 0;
unsigned long tmp_cbm;
enum rdtgrp_mode mode;
- u32 peer_ctl, *ctrl;
+ u32 peer_ctl, ctrl_val;
int i;

rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp);
@@ -2744,8 +2737,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
cfg->have_new_ctrl = false;
cfg->new_ctrl = r->cache.shareable_bits;
used_b = r->cache.shareable_bits;
- ctrl = resctrl_to_arch_dom(d)->ctrl_val;
- for (i = 0; i < closids_supported(); i++, ctrl++) {
+ for (i = 0; i < closids_supported(); i++) {
if (closid_allocated(i) && i != closid) {
mode = rdtgroup_mode_by_closid(i);
if (mode == RDT_MODE_PSEUDO_LOCKSETUP)
@@ -2761,12 +2753,13 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
* with an exclusive group.
*/
if (d_cdp)
- peer_ctl = resctrl_to_arch_dom(d_cdp)->ctrl_val[i];
+ resctrl_arch_get_config(r_cdp, d_cdp, i, &peer_ctl);
else
peer_ctl = 0;
- used_b |= *ctrl | peer_ctl;
+ resctrl_arch_get_config(r, d, i, &ctrl_val);
+ used_b |= ctrl_val | peer_ctl;
if (mode == RDT_MODE_SHAREABLE)
- cfg->new_ctrl |= *ctrl | peer_ctl;
+ cfg->new_ctrl |= ctrl_val | peer_ctl;
}
}
if (d->plr && d->plr->cbm > 0)
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index bd867bd8c2f4..aaa2c3244783 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -197,5 +197,7 @@ struct resctrl_schema {
/* The number of closid supported by this resource regardless of CDP */
u32 resctrl_arch_get_num_closid(struct rdt_resource *r);
int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid);
+void resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
+ u32 closid, u32 *value);

#endif /* _RESCTRL_H */
--
2.30.2


2021-05-19 20:17:55

by James Morse

[permalink] [raw]
Subject: [PATCH v3 20/24] x86/resctrl: Calculate the index from the configuration type

resctrl uses cbm_idx() to map a closid to an index in the
configuration array. This is based on a multiplier and offset
that are held in the resource.

To merge the resources, the resctrl arch code needs to calculate
the index from something else, as there will only be one resource.

Decide based on the staged configuration type. This makes the
cbm_idx_mult and cbm_idx_offset resource parameters redundant.
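
As an aside, not part of the patch: a stand-alone sketch that checks
the type-based index agrees with the multiplier/offset values this
patch removes. The CDP_* enum names and the mult/offset values for the
L3 CODE/DATA resources come from the series; the helper names and the
program around them are made up for illustration.

#include <stdio.h>

/* Assumed to mirror the resctrl enum added earlier in the series */
enum resctrl_conf_type {
	CDP_NONE,
	CDP_CODE,
	CDP_DATA,
};

/* Old scheme: index derived from per-resource mult/offset */
static unsigned int old_cbm_idx(unsigned int closid, unsigned int mult,
				unsigned int offset)
{
	return closid * mult + offset;
}

/* New scheme: index derived from the configuration type alone */
static unsigned int new_config_index(unsigned int closid,
				     enum resctrl_conf_type type)
{
	switch (type) {
	case CDP_CODE:
		return (closid * 2) + 1;
	case CDP_DATA:
		return closid * 2;
	case CDP_NONE:
	default:
		return closid;
	}
}

int main(void)
{
	unsigned int closid;

	/* L3CODE used mult=2/offset=1, L3DATA used mult=2/offset=0 */
	for (closid = 0; closid < 4; closid++)
		printf("closid %u: code %u==%u data %u==%u\n", closid,
		       old_cbm_idx(closid, 2, 1),
		       new_config_index(closid, CDP_CODE),
		       old_cbm_idx(closid, 2, 0),
		       new_config_index(closid, CDP_DATA));

	return 0;
}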

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,

arch/x86/kernel/cpu/resctrl/core.c | 12 -----------
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 25 ++++++++++++++---------
include/linux/resctrl.h | 6 ------
3 files changed, 15 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 15b57f70564b..08603487cb7d 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -69,8 +69,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.cache_level = 3,
.cache = {
.min_cbm_bits = 1,
- .cbm_idx_mult = 1,
- .cbm_idx_offset = 0,
},
.domains = domain_init(RDT_RESOURCE_L3),
.parse_ctrlval = parse_cbm,
@@ -89,8 +87,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.cache_level = 3,
.cache = {
.min_cbm_bits = 1,
- .cbm_idx_mult = 2,
- .cbm_idx_offset = 0,
},
.domains = domain_init(RDT_RESOURCE_L3DATA),
.parse_ctrlval = parse_cbm,
@@ -109,8 +105,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.cache_level = 3,
.cache = {
.min_cbm_bits = 1,
- .cbm_idx_mult = 2,
- .cbm_idx_offset = 1,
},
.domains = domain_init(RDT_RESOURCE_L3CODE),
.parse_ctrlval = parse_cbm,
@@ -129,8 +123,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.cache_level = 2,
.cache = {
.min_cbm_bits = 1,
- .cbm_idx_mult = 1,
- .cbm_idx_offset = 0,
},
.domains = domain_init(RDT_RESOURCE_L2),
.parse_ctrlval = parse_cbm,
@@ -149,8 +141,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.cache_level = 2,
.cache = {
.min_cbm_bits = 1,
- .cbm_idx_mult = 2,
- .cbm_idx_offset = 0,
},
.domains = domain_init(RDT_RESOURCE_L2DATA),
.parse_ctrlval = parse_cbm,
@@ -169,8 +159,6 @@ struct rdt_hw_resource rdt_resources_all[] = {
.cache_level = 2,
.cache = {
.min_cbm_bits = 1,
- .cbm_idx_mult = 2,
- .cbm_idx_offset = 1,
},
.domains = domain_init(RDT_RESOURCE_L2CODE),
.parse_ctrlval = parse_cbm,
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 360f352ccc4d..e384398374da 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -246,12 +246,17 @@ static int parse_line(char *line, struct resctrl_schema *s,
return -EINVAL;
}

-static unsigned u32 cbm_idx(struct rdt_resource *r, unsigned int closid)
+static u32 get_config_index(u32 closid, enum resctrl_conf_type type)
{
- if (r->rid == RDT_RESOURCE_MBA)
+ switch (type) {
+ default:
+ case CDP_NONE:
return closid;
-
- return closid * r->cache.cbm_idx_mult + r->cache.cbm_idx_offset;
+ case CDP_CODE:
+ return (closid * 2) + 1;
+ case CDP_DATA:
+ return (closid * 2);
+ }
}

static bool apply_config(struct rdt_hw_domain *hw_dom,
@@ -286,10 +291,6 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
return -ENOMEM;

- msr_param.low = cbm_idx(r, closid);
- msr_param.high = msr_param.low + 1;
- msr_param.res = r;
-
mba_sc = is_mba_sc(r);
list_for_each_entry(d, &r->domains, list) {
hw_dom = resctrl_to_arch_dom(d);
@@ -298,9 +299,13 @@ int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid)
if (!cfg->have_new_ctrl)
continue;

- idx = cbm_idx(r, closid);
+ idx = get_config_index(closid, t);
if (!apply_config(hw_dom, cfg, idx, cpu_mask, mba_sc))
continue;
+
+ msr_param.low = idx;
+ msr_param.high = msr_param.low + 1;
+ msr_param.res = r;
}
}

@@ -420,7 +425,7 @@ void resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
u32 closid, enum resctrl_conf_type type, u32 *value)
{
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);
- u32 idx = cbm_idx(r, closid);
+ u32 idx = get_config_index(closid, type);

if (!is_mba_sc(r))
*value = hw_dom->ctrl_val[idx];
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index 7cde6c2b9d16..1b202682f411 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -70,10 +70,6 @@ struct rdt_domain {
* struct resctrl_cache - Cache allocation related data
* @cbm_len: Length of the cache bit mask
* @min_cbm_bits: Minimum number of consecutive bits to be set
- * @cbm_idx_mult: Multiplier of CBM index
- * @cbm_idx_offset: Offset of CBM index. CBM index is computed by:
- * closid * cbm_idx_multi + cbm_idx_offset
- * in a cache bit mask
* @shareable_bits: Bitmask of shareable resource with other
* executing entities
* @arch_has_sparse_bitmaps: True if a bitmap like f00f is valid.
@@ -84,8 +80,6 @@ struct rdt_domain {
struct resctrl_cache {
unsigned int cbm_len;
unsigned int min_cbm_bits;
- unsigned int cbm_idx_mult; // TODO remove this
- unsigned int cbm_idx_offset; // TODO remove this
unsigned int shareable_bits;
bool arch_has_sparse_bitmaps;
bool arch_has_empty_bitmaps;
--
2.30.2


2021-05-19 20:19:04

by James Morse

[permalink] [raw]
Subject: [PATCH v3 12/24] x86/resctrl: Group staged configuration into a separate struct

When configuration changes are made, the new value is written
to struct rdt_domain's new_ctrl field and the have_new_ctrl flag
is set. Later new_ctrl is copied to hardware by a call to
update_domains().

Once the CDP resources are merged, there will be one new_ctrl
field in use by two struct resctrl_schema, requiring a per-schema
IPI to copy the value to hardware.

Move new_ctrl and have_new_ctrl into a new struct resctrl_staged_config.
Before the CDP resources can be merged, struct rdt_domain will
need an array of these, one per type of configuration. Using the
type as an index to the array will ensure that a schema configuration
string can't specify the same domain twice.
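
To illustrate that last point with a rough user-space sketch (not
kernel code): the struct and field names follow this patch, but the
per-type array and the CDP_NUM_TYPES count only arrive in later
patches, and the fake_domain/stage_config wrappers are invented here.
Staging by configuration type makes a repeated domain in one schemata
write easy to reject:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int u32;	/* stand-in for the kernel type */

enum resctrl_conf_type {
	CDP_NONE,
	CDP_CODE,
	CDP_DATA,
	CDP_NUM_TYPES,		/* assumed name for the array size */
};

struct resctrl_staged_config {
	u32 new_ctrl;
	bool have_new_ctrl;
};

struct fake_domain {
	int id;
	/* one staged slot per configuration type */
	struct resctrl_staged_config staged_config[CDP_NUM_TYPES];
};

/* Stage a value for one domain; a second attempt for the same type fails */
static int stage_config(struct fake_domain *d, enum resctrl_conf_type t,
			u32 val)
{
	struct resctrl_staged_config *cfg = &d->staged_config[t];

	if (cfg->have_new_ctrl) {
		fprintf(stderr, "Duplicate domain %d\n", d->id);
		return -1;
	}
	cfg->new_ctrl = val;
	cfg->have_new_ctrl = true;
	return 0;
}

int main(void)
{
	struct fake_domain d = { .id = 0 };

	stage_config(&d, CDP_CODE, 0xff);	/* accepted */
	stage_config(&d, CDP_CODE, 0x0f);	/* rejected: same domain, same type */

	return 0;
}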

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,

Changes since v1:
* Expanded commit message
* Removed explicit clearing of have_new_ctrl,
* Moved ARRAY_SIZE() trickery to a later patch
* Removed extra whitespace
---
arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 43 +++++++++++++++--------
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 22 +++++++-----
include/linux/resctrl.h | 16 ++++++---
3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 57c2b0e121d2..a47a792fdcb3 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -62,16 +62,17 @@ int parse_bw(struct rdt_parse_data *data, struct resctrl_schema *s,
{
struct rdt_resource *r = s->res;
unsigned long bw_val;
+ struct resctrl_staged_config *cfg = &d->staged_config;

- if (d->have_new_ctrl) {
+ if (cfg->have_new_ctrl) {
rdt_last_cmd_printf("Duplicate domain %d\n", d->id);
return -EINVAL;
}

if (!bw_validate(data->buf, &bw_val, r))
return -EINVAL;
- d->new_ctrl = bw_val;
- d->have_new_ctrl = true;
+ cfg->new_ctrl = bw_val;
+ cfg->have_new_ctrl = true;

return 0;
}
@@ -129,11 +130,12 @@ static bool cbm_validate(char *buf, u32 *data, struct rdt_resource *r)
int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
struct rdt_domain *d)
{
+ struct resctrl_staged_config *cfg = &d->staged_config;
struct rdtgroup *rdtgrp = data->rdtgrp;
struct rdt_resource *r = s->res;
u32 cbm_val;

- if (d->have_new_ctrl) {
+ if (cfg->have_new_ctrl) {
rdt_last_cmd_printf("Duplicate domain %d\n", d->id);
return -EINVAL;
}
@@ -175,8 +177,8 @@ int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
}
}

- d->new_ctrl = cbm_val;
- d->have_new_ctrl = true;
+ cfg->new_ctrl = cbm_val;
+ cfg->have_new_ctrl = true;

return 0;
}
@@ -190,6 +192,7 @@ int parse_cbm(struct rdt_parse_data *data, struct resctrl_schema *s,
static int parse_line(char *line, struct resctrl_schema *s,
struct rdtgroup *rdtgrp)
{
+ struct resctrl_staged_config *cfg;
struct rdt_resource *r = s->res;
struct rdt_parse_data data;
char *dom = NULL, *id;
@@ -219,6 +222,7 @@ static int parse_line(char *line, struct resctrl_schema *s,
if (r->parse_ctrlval(&data, s, d))
return -EINVAL;
if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
+ cfg = &d->staged_config;
/*
* In pseudo-locking setup mode and just
* parsed a valid CBM that should be
@@ -229,7 +233,7 @@ static int parse_line(char *line, struct resctrl_schema *s,
*/
rdtgrp->plr->s = s;
rdtgrp->plr->d = d;
- rdtgrp->plr->cbm = d->new_ctrl;
+ rdtgrp->plr->cbm = cfg->new_ctrl;
d->plr = rdtgrp->plr;
return 0;
}
@@ -239,14 +243,27 @@ static int parse_line(char *line, struct resctrl_schema *s,
return -EINVAL;
}

+static void apply_config(struct rdt_hw_domain *hw_dom,
+ struct resctrl_staged_config *cfg, int closid,
+ cpumask_var_t cpu_mask, bool mba_sc)
+{
+ struct rdt_domain *dom = &hw_dom->resctrl;
+ u32 *dc = !mba_sc ? hw_dom->ctrl_val : hw_dom->mbps_val;
+
+ if (cfg->new_ctrl != dc[closid]) {
+ cpumask_set_cpu(cpumask_any(&dom->cpu_mask), cpu_mask);
+ dc[closid] = cfg->new_ctrl;
+ }
+}
+
int update_domains(struct rdt_resource *r, int closid)
{
+ struct resctrl_staged_config *cfg;
struct rdt_hw_domain *hw_dom;
struct msr_param msr_param;
cpumask_var_t cpu_mask;
struct rdt_domain *d;
bool mba_sc;
- u32 *dc;
int cpu;

if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
@@ -259,11 +276,9 @@ int update_domains(struct rdt_resource *r, int closid)
mba_sc = is_mba_sc(r);
list_for_each_entry(d, &r->domains, list) {
hw_dom = resctrl_to_arch_dom(d);
- dc = !mba_sc ? hw_dom->ctrl_val : hw_dom->mbps_val;
- if (d->have_new_ctrl && d->new_ctrl != dc[closid]) {
- cpumask_set_cpu(cpumask_any(&d->cpu_mask), cpu_mask);
- dc[closid] = d->new_ctrl;
- }
+ cfg = &hw_dom->resctrl.staged_config;
+ if (cfg->have_new_ctrl)
+ apply_config(hw_dom, cfg, closid, cpu_mask, mba_sc);
}

/*
@@ -335,7 +350,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,

list_for_each_entry(s, &resctrl_schema_all, list) {
list_for_each_entry(dom, &s->res->domains, list)
- dom->have_new_ctrl = false;
+ memset(&dom->staged_config, 0, sizeof(dom->staged_config));
}

while ((tok = strsep(&buf, "\n")) != NULL) {
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 5551331ac94e..4a850d6f77c0 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -2729,6 +2729,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
u32 closid)
{
struct rdt_resource *r_cdp = NULL;
+ struct resctrl_staged_config *cfg;
struct rdt_domain *d_cdp = NULL;
struct rdt_resource *r = s->res;
u32 used_b = 0, unused_b = 0;
@@ -2738,8 +2739,9 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
int i;

rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp);
- d->have_new_ctrl = false;
- d->new_ctrl = r->cache.shareable_bits;
+ cfg = &d->staged_config;
+ cfg->have_new_ctrl = false;
+ cfg->new_ctrl = r->cache.shareable_bits;
used_b = r->cache.shareable_bits;
ctrl = resctrl_to_arch_dom(d)->ctrl_val;
for (i = 0; i < closids_supported(); i++, ctrl++) {
@@ -2763,29 +2765,29 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
peer_ctl = 0;
used_b |= *ctrl | peer_ctl;
if (mode == RDT_MODE_SHAREABLE)
- d->new_ctrl |= *ctrl | peer_ctl;
+ cfg->new_ctrl |= *ctrl | peer_ctl;
}
}
if (d->plr && d->plr->cbm > 0)
used_b |= d->plr->cbm;
unused_b = used_b ^ (BIT_MASK(r->cache.cbm_len) - 1);
unused_b &= BIT_MASK(r->cache.cbm_len) - 1;
- d->new_ctrl |= unused_b;
+ cfg->new_ctrl |= unused_b;
/*
* Force the initial CBM to be valid, user can
* modify the CBM based on system availability.
*/
- d->new_ctrl = cbm_ensure_valid(d->new_ctrl, r);
+ cfg->new_ctrl = cbm_ensure_valid(cfg->new_ctrl, r);
/*
* Assign the u32 CBM to an unsigned long to ensure that
* bitmap_weight() does not access out-of-bound memory.
*/
- tmp_cbm = d->new_ctrl;
+ tmp_cbm = cfg->new_ctrl;
if (bitmap_weight(&tmp_cbm, r->cache.cbm_len) < r->cache.min_cbm_bits) {
rdt_last_cmd_printf("No space on %s:%d\n", s->name, d->id);
return -ENOSPC;
}
- d->have_new_ctrl = true;
+ cfg->have_new_ctrl = true;

return 0;
}
@@ -2817,11 +2819,13 @@ static int rdtgroup_init_cat(struct resctrl_schema *s, u32 closid)
/* Initialize MBA resource with default values. */
static void rdtgroup_init_mba(struct rdt_resource *r)
{
+ struct resctrl_staged_config *cfg;
struct rdt_domain *d;

list_for_each_entry(d, &r->domains, list) {
- d->new_ctrl = is_mba_sc(r) ? MBA_MAX_MBPS : r->default_ctrl;
- d->have_new_ctrl = true;
+ cfg = &d->staged_config;
+ cfg->new_ctrl = is_mba_sc(r) ? MBA_MAX_MBPS : r->default_ctrl;
+ cfg->have_new_ctrl = true;
}
}

diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index b70208963093..452783fa26b9 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -23,13 +23,21 @@ enum resctrl_conf_type {
CDP_DATA,
};

+/**
+ * struct resctrl_staged_config - parsed configuration to be applied
+ * @new_ctrl: new ctrl value to be loaded
+ * @have_new_ctrl: whether the user provided new_ctrl is valid
+ */
+struct resctrl_staged_config {
+ u32 new_ctrl;
+ bool have_new_ctrl;
+};
+
/**
* struct rdt_domain - group of CPUs sharing a resctrl resource
* @list: all instances of this resource
* @id: unique id for this instance
* @cpu_mask: which CPUs share this resource
- * @new_ctrl: new ctrl value to be loaded
- * @have_new_ctrl: did user provide new_ctrl for this domain
* @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold
* @mbm_total: saved state for MBM total bandwidth
* @mbm_local: saved state for MBM local bandwidth
@@ -38,13 +46,12 @@ enum resctrl_conf_type {
* @mbm_work_cpu: worker CPU for MBM h/w counters
* @cqm_work_cpu: worker CPU for CQM h/w counters
* @plr: pseudo-locked region (if any) associated with domain
+ * @staged_config: parsed configuration to be applied
*/
struct rdt_domain {
struct list_head list;
int id;
struct cpumask cpu_mask;
- u32 new_ctrl;
- bool have_new_ctrl;
unsigned long *rmid_busy_llc;
struct mbm_state *mbm_total;
struct mbm_state *mbm_local;
@@ -53,6 +60,7 @@ struct rdt_domain {
int mbm_work_cpu;
int cqm_work_cpu;
struct pseudo_lock_region *plr;
+ struct resctrl_staged_config staged_config;
};

/**
--
2.30.2


2021-05-19 20:19:13

by James Morse

[permalink] [raw]
Subject: [PATCH v3 17/24] x86/resctrl: Pass configuration type to resctrl_arch_get_config()

The ctrl_val[] array for a struct rdt_hw_resource only holds
configurations of one type. The type is implicit.

Once the CDP resources are merged, the ctrl_val[] array will hold
all the configurations for the hardware resource. When a particular
type of configuration is needed, it must be specified explicitly.

Pass the expected type from the schema into resctrl_arch_get_config().
Nothing uses this yet, but once a single ctrl_val[] array is used
for the three struct rdt_hw_resources that share hardware, the type
will be used to return the correct configuration value from the
shared array.
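
To sketch where this is heading (illustrative only, not the final
kernel code; the interleaved layout and names are assumed from the
series' description): once the three resources share one ctrl_val[]
array, the type passed by the caller selects the slot to read back.

#include <stdio.h>

typedef unsigned int u32;	/* stand-in for the kernel type */

enum resctrl_conf_type {
	CDP_NONE,
	CDP_CODE,
	CDP_DATA,
};

/* Illustrative only: one shared array for 4 closids with CDP enabled.
 * Even slots hold the DATA configuration, odd slots the CODE one. */
static u32 shared_ctrl_val[8];

static u32 config_index(u32 closid, enum resctrl_conf_type type)
{
	switch (type) {
	case CDP_CODE:
		return closid * 2 + 1;
	case CDP_DATA:
		return closid * 2;
	case CDP_NONE:
	default:
		return closid;
	}
}

int main(void)
{
	u32 closid = 1;

	/* Stage different code/data configurations for the same closid */
	shared_ctrl_val[config_index(closid, CDP_CODE)] = 0x0f;
	shared_ctrl_val[config_index(closid, CDP_DATA)] = 0xf0;

	/* The type passed by the caller selects the right value back */
	printf("closid %u: code=%#x data=%#x\n", closid,
	       shared_ctrl_val[config_index(closid, CDP_CODE)],
	       shared_ctrl_val[config_index(closid, CDP_DATA)]);

	return 0;
}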

Reviewed-by: Jamie Iles <[email protected]>
Signed-off-by: James Morse <[email protected]>
---
Changes since v2:
* Shuffled commit message,

arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 5 ++--
arch/x86/kernel/cpu/resctrl/monitor.c | 2 +-
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 35 +++++++++++++++--------
include/linux/resctrl.h | 3 +-
4 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 1cd54402b02a..72a8cf52de47 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -402,7 +402,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
}

void resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
- u32 closid, u32 *value)
+ u32 closid, enum resctrl_conf_type type, u32 *value)
{
struct rdt_hw_domain *hw_dom = resctrl_to_arch_dom(d);

@@ -424,7 +424,8 @@ static void show_doms(struct seq_file *s, struct resctrl_schema *schema, int clo
if (sep)
seq_puts(s, ";");

- resctrl_arch_get_config(r, dom, closid, &ctrl_val);
+ resctrl_arch_get_config(r, dom, closid, schema->conf_type,
+ &ctrl_val);
seq_printf(s, r->format_str, dom->id, max_data_width,
ctrl_val);
sep = true;
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 647c0be76ea6..3f00dd54fb03 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -442,7 +442,7 @@ static void update_mba_bw(struct rdtgroup *rgrp, struct rdt_domain *dom_mbm)
hw_dom_mba = resctrl_to_arch_dom(dom_mba);

cur_bw = pmbm_data->prev_bw;
- resctrl_arch_get_config(r_mba, dom_mba, closid, &user_bw);
+ resctrl_arch_get_config(r_mba, dom_mba, closid, CDP_NONE, &user_bw);
delta_bw = pmbm_data->delta_bw;
/*
* resctrl_arch_get_config() chooses the mbps/ctrl value to return
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 6b01902ac037..740d2d0ff4df 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -923,7 +923,8 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
for (i = 0; i < closids_supported(); i++) {
if (!closid_allocated(i))
continue;
- resctrl_arch_get_config(r, dom, i, &ctrl_val);
+ resctrl_arch_get_config(r, dom, i, s->conf_type,
+ &ctrl_val);
mode = rdtgroup_mode_by_closid(i);
switch (mode) {
case RDT_MODE_SHAREABLE:
@@ -1099,6 +1100,7 @@ static int rdtgroup_mode_show(struct kernfs_open_file *of,
* Used to return the result.
* @d_cdp: RDT domain that shares hardware with @d (RDT domain peer)
* Used to return the result.
+ * @peer_type: The CDP configuration type of the peer resource.
*
* RDT resources are managed independently and by extension the RDT domains
* (RDT resource instances) are managed independently also. The Code and
@@ -1116,7 +1118,8 @@ static int rdtgroup_mode_show(struct kernfs_open_file *of,
*/
static int rdt_cdp_peer_get(struct rdt_resource *r, struct rdt_domain *d,
struct rdt_resource **r_cdp,
- struct rdt_domain **d_cdp)
+ struct rdt_domain **d_cdp,
+ enum resctrl_conf_type *peer_type)
{
struct rdt_resource *_r_cdp = NULL;
struct rdt_domain *_d_cdp = NULL;
@@ -1125,15 +1128,19 @@ static int rdt_cdp_peer_get(struct rdt_resource *r, struct rdt_domain *d,
switch (r->rid) {
case RDT_RESOURCE_L3DATA:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L3CODE].resctrl;
+ *peer_type = CDP_CODE;
break;
case RDT_RESOURCE_L3CODE:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L3DATA].resctrl;
+ *peer_type = CDP_DATA;
break;
case RDT_RESOURCE_L2DATA:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L2CODE].resctrl;
+ *peer_type = CDP_CODE;
break;
case RDT_RESOURCE_L2CODE:
_r_cdp = &rdt_resources_all[RDT_RESOURCE_L2DATA].resctrl;
+ *peer_type = CDP_DATA;
break;
default:
ret = -ENOENT;
@@ -1184,7 +1191,8 @@ static int rdt_cdp_peer_get(struct rdt_resource *r, struct rdt_domain *d,
* Return: false if CBM does not overlap, true if it does.
*/
static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d,
- unsigned long cbm, int closid, bool exclusive)
+ unsigned long cbm, int closid,
+ enum resctrl_conf_type type, bool exclusive)
{
enum rdtgrp_mode mode;
unsigned long ctrl_b;
@@ -1199,7 +1207,7 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d

/* Check for overlap with other resource groups */
for (i = 0; i < closids_supported(); i++) {
- resctrl_arch_get_config(r, d, i, (u32 *)&ctrl_b);
+ resctrl_arch_get_config(r, d, i, type, (u32 *)&ctrl_b);
mode = rdtgroup_mode_by_closid(i);
if (closid_allocated(i) && i != closid &&
mode != RDT_MODE_PSEUDO_LOCKSETUP) {
@@ -1240,17 +1248,19 @@ static bool __rdtgroup_cbm_overlaps(struct rdt_resource *r, struct rdt_domain *d
bool rdtgroup_cbm_overlaps(struct resctrl_schema *s, struct rdt_domain *d,
unsigned long cbm, int closid, bool exclusive)
{
+ enum resctrl_conf_type peer_type;
struct rdt_resource *r = s->res;
struct rdt_resource *r_cdp;
struct rdt_domain *d_cdp;

- if (__rdtgroup_cbm_overlaps(r, d, cbm, closid, exclusive))
+ if (__rdtgroup_cbm_overlaps(r, d, cbm, closid, s->conf_type,
+ exclusive))
return true;

- if (rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp) < 0)
+ if (rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp, &peer_type) < 0)
return false;

- return __rdtgroup_cbm_overlaps(r_cdp, d_cdp, cbm, closid, exclusive);
+ return __rdtgroup_cbm_overlaps(r_cdp, d_cdp, cbm, closid, peer_type, exclusive);
}

/**
@@ -1280,7 +1290,7 @@ static bool rdtgroup_mode_test_exclusive(struct rdtgroup *rdtgrp)
continue;
has_cache = true;
list_for_each_entry(d, &r->domains, list) {
- resctrl_arch_get_config(r, d, closid, &ctrl);
+ resctrl_arch_get_config(r, d, closid, s->conf_type, &ctrl);
if (rdtgroup_cbm_overlaps(s, d, ctrl, closid, false)) {
rdt_last_cmd_puts("Schemata overlaps\n");
return false;
@@ -1454,7 +1464,7 @@ static int rdtgroup_size_show(struct kernfs_open_file *of,
size = 0;
} else {
resctrl_arch_get_config(r, d, rdtgrp->closid,
- &ctrl);
+ schema->conf_type, &ctrl);
if (r->rid == RDT_RESOURCE_MBA)
size = ctrl;
else
@@ -2737,6 +2747,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
enum resctrl_conf_type t = s->conf_type;
struct rdt_resource *r_cdp = NULL;
struct resctrl_staged_config *cfg;
+ enum resctrl_conf_type peer_type;
struct rdt_domain *d_cdp = NULL;
struct rdt_resource *r = s->res;
u32 used_b = 0, unused_b = 0;
@@ -2745,7 +2756,7 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
u32 peer_ctl, ctrl_val;
int i;

- rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp);
+ rdt_cdp_peer_get(r, d, &r_cdp, &d_cdp, &peer_type);
cfg = &d->staged_config[t];
cfg->have_new_ctrl = false;
cfg->new_ctrl = r->cache.shareable_bits;
@@ -2766,10 +2777,10 @@ static int __init_one_rdt_domain(struct rdt_domain *d, struct resctrl_schema *s,
* with an exclusive group.
*/
if (d_cdp)
- resctrl_arch_get_config(r_cdp, d_cdp, i, &peer_ctl);
+ resctrl_arch_get_config(r_cdp, d_cdp, i, peer_type, &peer_ctl);
else
peer_ctl = 0;
- resctrl_arch_get_config(r, d, i, &ctrl_val);
+ resctrl_arch_get_config(r, d, i, s->conf_type, &ctrl_val);
used_b |= ctrl_val | peer_ctl;
if (mode == RDT_MODE_SHAREABLE)
cfg->new_ctrl |= ctrl_val | peer_ctl;
diff --git a/include/linux/resctrl.h b/include/linux/resctrl.h
index aaa2c3244783..7cde6c2b9d16 100644
--- a/include/linux/resctrl.h
+++ b/include/linux/resctrl.h
@@ -198,6 +198,7 @@ struct resctrl_schema {
u32 resctrl_arch_get_num_closid(struct rdt_resource *r);
int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid);
void resctrl_arch_get_config(struct rdt_resource *r, struct rdt_domain *d,
- u32 closid, u32 *value);
+ u32 closid, enum resctrl_conf_type type,
+ u32 *value);

#endif /* _RESCTRL_H */
--
2.30.2


2021-06-05 00:00:33

by Reinette Chatre

[permalink] [raw]
Subject: Re: [PATCH v3 00/24] x86/resctrl: Merge the CDP resources

Hi James,

On 5/19/2021 9:24 AM, James Morse wrote:
> Hi folks,
>
> Thanks to Reinette comments on v2. The major changes in v3 is a juggling
> of all the commit messages. One patch got merged into its parent, and
> the msr_param range thing got pulled out into its own patch. Otherwise
> changes are noted in the commit messages.

Thank you very much for reworking the commit messages. The additional
context does make these changes easier to digest.

On a high level the goal of these patches looks good to me, but the
patches themselves do have a few formatting issues and spelling mistakes,
and while bisectability is a stated goal of this work I was surprised to
find that one patch ("x86/resctrl: Apply offset correction when config
is staged") cannot compile.

I started to document the formatting issues but found myself duplicating
a lot of what checkpatch.pl would already tell you. Could you please
ensure that this series gets a clean bill of health when using
"checkpatch.pl --strict"? I also recommend the codespell option ...
there are a few typos in this series that checkpatch.pl was able to pick
up. It would be more efficient to review this series from such a baseline.

Thank you

Reinette

2021-06-11 18:07:31

by James Morse

[permalink] [raw]
Subject: Re: [PATCH v3 00/24] x86/resctrl: Merge the CDP resources

Hi Reinette,

On 05/06/2021 00:56, Reinette Chatre wrote:
> On 5/19/2021 9:24 AM, James Morse wrote:
>> Thanks to Reinette comments on v2. The major changes in v3 is a juggling
>> of all the commit messages. One patch got merged into its parent, and
>> the msr_param range thing got pulled out into its own patch. Otherwise
>> changes are noted in the commit messages.
>
> Thank you very much for reworking the commit messages. The additional context does make
> these changes easier to digest.
>
> On a high level the goal of these patches looks good to me, but the patches themselves do
> have a few formatting issues and spelling mistakes, and while bisectability is a stated
> goal of this work I was surprised to find that one patch ("x86/resctrl: Apply offset
> correction when config is staged") cannot compile.

Huh. I remember thinking I didn't need to retest the whole tree after some minor change;
evidently I was very wrong!


> I started to document the formatting issues but found myself duplicating a lot of what
> checkpatch.pl would already tell you. Could you please ensure that this series gets a
> clean bill of health when using "checkpatch.pl --strict"?

Sure. (I've become so accustomed to ignoring it due to its allergy to assembly that I
don't bother running it.)


> I also recommend the codespell
> option ... there are a few typos in this series that checkpatch.pl was able to pick up.

Aha, that's a new one; my existing way of doing that only works for commit messages.


> It
> would be more efficient to review this series from such a baseline.

Look out for v4 on Monday!


Thanks,

James