2017-03-01 14:04:37

by Thomas Gleixner

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata

On Fri, 17 Feb 2017, Vikas Shivappa wrote:

> The schemata file requires all RDT (Resource director technology)
> resources be entered in the same order they are shown in the root
> schemata file.
> Hence remove the looping through all resources while parsing each
> schemata and get the next enabled resource after processing a resource.

Again, you describe WHAT you are doing and not WHY.

x86/intel_rdt: Improve schemata parsing

The schemata file requires all resources be written in the same order
as they are shown in the root schemata file.

The current parser searches all resources to find a matching resource for
each resource line in the schemata file. This is suboptimal as the order
of the resources is fixed.

Avoid the repeating lookups by walking the resource descriptors linearly
while processing the input lines.

So that would describe again the context, the problem and the solution in a
precise and understandable way. It's not that hard.

Though, I have to ask the question WHY is that required. It's neither
required by the current implementation nor by anything else. The current
implementation can nicely deal with any ordering of the resource lines due
to the lookup. So now the question is, what makes this change necessary and
what's the advantage of doing it this way?

> +/*
> + * Parameter r must be NULL or pointing to
> + * a valid rdt_resource_all entry.
> + * returns next enabled RDT resource.

New sentences start with an upper-case letter. Aside from that, please do not
insert arbitrary line breaks at randomly chosen locations. Use the full 80
character real estate unless there is a structural reason not to do so.
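Reflowed along those lines, the quoted comment would read (just a sketch):

/*
 * Parameter r must be NULL or point to a valid rdt_resources_all entry.
 * Returns the next enabled RDT resource.
 */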

> +static inline struct rdt_resource*
> +get_next_enabled_rdt_resource(struct rdt_resource *r)
> +{
> +	struct rdt_resource *it = r;
> +
> +	if (!it)
> +		it = rdt_resources_all;
> +	else
> +		it++;
> +	for (; it < rdt_resources_all + RDT_NUM_RESOURCES; it++)
> +		if (it->enabled)
> +			return it;

Once more. This lacks curly braces around the for() construct. See

http://lkml.kernel.org/r/alpine.DEB.2.20.1701171956290.3645@nanos

But that's the least of the problems with this code.

> +
> + return NULL;
> +}
> +
> ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
> char *buf, size_t nbytes, loff_t off)
> {
> @@ -171,22 +192,21 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
> 		r->num_tmp_cbms = 0;
> 	}
>
> +	r = NULL;
> 	while ((tok = strsep(&buf, "\n")) != NULL) {
> 		resname = strsep(&tok, ":");
> 		if (!tok) {
> 			ret = -EINVAL;
> 			goto out;
> 		}
> -		for_each_enabled_rdt_resource(r) {
> -			if (!strcmp(resname, r->name) &&
> -			    closid < r->num_closid) {
> -				ret = parse_line(tok, r);
> -				if (ret)
> -					goto out;
> -				break;
> -			}
> -		}
> -		if (!r->name) {
> +
> +		r = get_next_enabled_rdt_resource(r);
> +
> +		if (r && !strcmp(resname, r->name)) {
> +			ret = parse_line(tok, r);
> +			if (ret)
> +				goto out;
> +		} else {
> +			ret = -EINVAL;
> +			goto out;
> 		}

I really have a hard time figuring out why this convoluted
get_next_enabled_rdt_resource() is better than what we have now.

The write function is hardly a hot path and enforcing the resource write
ordering is questionable at best.

If there is a real reason to do so, then it can be written in a far less
convoluted way.

static struct rdt_resource *get_enabled_resource(struct rdt_resource *r)
{
	for (; r < rdt_resources_all + RDT_NUM_RESOURCES; r++) {
		if (r->enabled)
			return r;
	}
	return NULL;
}

ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
				char *buf, size_t nbytes, loff_t off)
{
	.....

	r = get_enabled_resource(rdt_resources_all);

	while (r && (tok = strsep(&buf, "\n")) != NULL) {
		ret = -EINVAL;

		resname = strsep(&tok, ":");
		if (!tok || strcmp(resname, r->name))
			goto out;

		ret = parse_line(tok, r);
		if (ret)
			goto out;

		r = get_enabled_resource(++r);
	}

Can you spot the difference?

Anyway, first of all we want to know WHY this ordering is required. If it's
not required then why would we enforce it?

Thanks,

tglx


2017-03-10 00:03:02

by Shivappa Vikas

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata


> Anyway, first of all we want to know WHY this ordering is required. If it's
> not required then why would we enforce it?

Do users want to see the data in the same order they entered it? If we want to
do that, then it's easy to implement when we enforce the order (because right
now resources are always displayed in a predefined order).
Otherwise I can drop this patch.

Thanks,
Vikas

>
> Thanks,
>
> tglx
>

2017-03-10 10:55:24

by Thomas Gleixner

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata

On Thu, 9 Mar 2017, Shivappa Vikas wrote:

>
> > Anyway, first of all we want to know WHY this ordering is required. If it's
> > not required then why would we enforce it?
>
> Do users want to see the data in the same order they entered it? If we want
> to do that, then it's easy to implement when we enforce the order (because
> right now resources are always displayed in a predefined order).
> Otherwise I can drop this patch.

It's fine to display them in a defined order, but there is no point to
enforce the ordering on write.

The real question here is whether we really have to write every line on
every update. IMO it's sufficient to write a single resource line and not
require to update all resources every time.
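For illustration only (made-up mask values), something like

# echo "L3:0=ff;1=f0" > /sys/fs/resctrl/schemata

would then update just the L3 masks and leave the other resource lines alone.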

Thanks,

tglx


2017-03-10 18:24:23

by Shivappa Vikas

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata



On Fri, 10 Mar 2017, Thomas Gleixner wrote:

> On Thu, 9 Mar 2017, Shivappa Vikas wrote:
>
>>
>>> Anyway, first of all we want to know WHY this ordering is required. If it's
>>> not required then why would we enforce it?
>>
>> Do users want to see the data in the same order they entered it? If we want
>> to do that, then it's easy to implement when we enforce the order (because
>> right now resources are always displayed in a predefined order).
>> Otherwise I can drop this patch.
>
> It's fine to display them in a defined order, but there is no point to
> enforce the ordering on write.
>
> The real question here is whether we really have to write every line on
> every update. IMO it's sufficient to write a single resource line and not
> require to update all resources every time.

Ok, in that case we can drop this, because my thought was that the user wants
to see the contents he wrote when he overwrites the whole file, like below.
IOW it's weird for the user to do

# echo "L3:0=0xff;1=0xf0" > /sys/fs/resctrl/schemata
then

# cat /sys/fs/resctrl/schemata
L3:0=0xff;1=0xf0
L2:0=0xff;1=0xf0
MB:....

But he did not write the L2 and MB lines when he did an overwrite of the whole file.


Thanks,
Vikas

>
> Thanks,
>
> tglx
>
>
>

2017-03-10 18:59:10

by Thomas Gleixner

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata

On Fri, 10 Mar 2017, Shivappa Vikas wrote:
> On Fri, 10 Mar 2017, Thomas Gleixner wrote:
> > It's fine to display them in a defined order, but there is no point to
> > enforce the ordering on write.
> >
> > The real question here is whether we really have to write every line on
> > every update. IMO it's sufficient to write a single resource line and not
> > require to update all resources every time.
>
> Ok, in that case we can drop this, because my thought was that the user wants
> to see the contents he wrote when he overwrites the whole file, like below.
> IOW it's weird for the user to do
>
> # echo "L3:0=0xff;1=0xf0" > /sys/fs/resctrl/schemata
> then
>
> # cat /sys/fs/resctrl/schemata
> L3:0=0xff;1=0xf0
> L2:0=0xff;1=0xf0
> MB:....
>
> But he did not write the L2 and MB lines when he did an overwrite of the whole file.

Well, we have several options to tackle this:

1) Have schemata files for each resource

schemata_l2, _l3 _mb

2) Request a full overwrite every time (all entries required)

That still does not require ordering

3) Allow full overwrite and 'append' mode

echo "...." > schemata

Overwrites the whole file. It does not require all entries to be
supplied. Non supplied entries are reset to default

echo "...." >> schemata

"Appends" the supplied entries by overwriting the existing ones.

My favourite would be #1, but I have no strong opinions other than not
caring about resource write ordering for #2 and #3.

Thanks,

tglx

2017-03-10 22:06:05

by Luck, Tony

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata

On Fri, Mar 10, 2017 at 07:58:51PM +0100, Thomas Gleixner wrote:
> Well, we have several options to tackle this:
>
> 1) Have schemata files for each resource
>
> schemata_l2, _l3 _mb
>
> 2) Request a full overwrite every time (all entries required)
>
> That still does not require ordering
>
> 3) Allow full overwrite and 'append' mode
>
> echo "...." > schemata
>
> Overwrites the whole file. It does not require all entries to be
> supplied. Non supplied entries are reset to default
>
> echo "...." >> schemata
>
> "Appends" the supplied entries by overwriting the existing ones.
>
> My favourite would be #1, but I have no strong opinions other than not
> caring about resource write ordering for #2 and #3.

If you are going to head in the direction of partial update, then
why not go for:

4) Drop the code that checks that the user wrote all the fields, as well as
the check for all the lines. Just update the bits they list, and leave the
rest unchanged.

I.e. the user could say:

# echo "L3:1=0x3f" > schemata

if they just wanted to update resource L3, instance 1.

I don't think there is much benefit to the overwrite vs. append
semantics for the user.

-Tony

2017-03-11 07:48:05

by Thomas Gleixner

Subject: Re: [PATCH 2/5] x86/intel_rdt: Improvements to parsing schemata

On Fri, 10 Mar 2017, Luck, Tony wrote:
> On Fri, Mar 10, 2017 at 07:58:51PM +0100, Thomas Gleixner wrote:
> > Well, we have several options to tackle this:
> >
> > 1) Have schemata files for each resource
> >
> > schemata_l2, _l3 _mb
> >
> > 2) Request a full overwrite every time (all entries required)
> >
> > That still does not require ordering
> >
> > 3) Allow full overwrite and 'append' mode
> >
> > echo "...." > schemata
> >
> > Overwrites the whole file. It does not require all entries to be
> > supplied. Non supplied entries are reset to default
> >
> > echo "...." >> schemata
> >
> > "Appends" the supplied entries by overwriting the existing ones.
> >
> > My favourite would be #1, but I have no strong opinions other than not
> > caring about resource write ordering for #2 and #3.
>
> If you are going to head in the direction of partial update, then
> why not go for:
>
> 4) Drop the code that checks that the user wrote all the fields, as well as
> the check for all the lines. Just update the bits they list, and leave the
> rest unchanged.
>
> I.e. the user could say:
>
> # echo "L3:1=0x3f" > schemata
>
> if they just wanted to update resource L3, instance 1.

Even better

> I don't think there is much benefit to the overwrite vs. append
> semantics for the user.

Agreed.

Thanks,

tglx

2017-03-24 17:52:54

by Luck, Tony

Subject: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file

From: Tony Luck <[email protected]>

The schemata file can have multiple lines and it is cumbersome to
update from shell scripts.

Remove code that requires that the user provide values for every
resource (in the right order). If the user provides values for
just a few resources, update them and leave the rest unchanged.

Side benefit: we now check which values were updated and only send
IPIs to cpus that actually have updates.

Cc: Vikas Shivappa <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: [email protected]
Cc: [email protected]
Tested-by: Sai Praneeth Prakhya <[email protected]>
Signed-off-by: Tony Luck <[email protected]>
---
Documentation/x86/intel_rdt_ui.txt | 14 ++++++
arch/x86/include/asm/intel_rdt.h | 10 ++---
arch/x86/kernel/cpu/intel_rdt.c | 2 -
arch/x86/kernel/cpu/intel_rdt_schemata.c | 73 +++++++++++++-------------------
4 files changed, 48 insertions(+), 51 deletions(-)

diff --git a/Documentation/x86/intel_rdt_ui.txt b/Documentation/x86/intel_rdt_ui.txt
index 51cf6fa5591f..3ea198460469 100644
--- a/Documentation/x86/intel_rdt_ui.txt
+++ b/Documentation/x86/intel_rdt_ui.txt
@@ -129,6 +129,20 @@ schemata format is always:

L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

+Reading/writing the schemata file
+---------------------------------
+Reading the schemata file will show the state of all resources
+on all domains. When writing you only need to specify those values
+which you wish to change. E.g.
+
+# cat schemata
+L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
+L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
+# echo "L3DATA:2=3c0;" > schemata
+# cat schemata
+L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
+L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
+
Example 1
---------
On a two socket machine (one L3 cache per socket) with just four bits
diff --git a/arch/x86/include/asm/intel_rdt.h b/arch/x86/include/asm/intel_rdt.h
index 0d64397cee58..d7705270c401 100644
--- a/arch/x86/include/asm/intel_rdt.h
+++ b/arch/x86/include/asm/intel_rdt.h
@@ -76,10 +76,7 @@ struct rftype {
* @min_cbm_bits: Minimum number of consecutive bits to be set
* in a cache bit mask
* @domains: All domains for this resource
- * @num_domains: Number of domains active
* @msr_base: Base MSR address for CBMs
- * @tmp_cbms: Scratch space when updating schemata
- * @num_tmp_cbms: Number of CBMs in tmp_cbms
* @cache_level: Which cache level defines scope of this domain
* @cbm_idx_multi: Multiplier of CBM index
* @cbm_idx_offset: Offset of CBM index. CBM index is computed by:
@@ -94,10 +91,7 @@ struct rdt_resource {
int min_cbm_bits;
u32 max_cbm;
struct list_head domains;
- int num_domains;
int msr_base;
- u32 *tmp_cbms;
- int num_tmp_cbms;
int cache_level;
int cbm_idx_multi;
int cbm_idx_offset;
@@ -109,12 +103,16 @@ struct rdt_resource {
* @id: unique id for this instance
* @cpu_mask: which cpus share this resource
* @cbm: array of cache bit masks (indexed by CLOSID)
+ * @new_cbm: new cbm value to be loaded
+ * @have_new_cbm: did user provide new_cbm for this domain
*/
struct rdt_domain {
struct list_head list;
int id;
struct cpumask cpu_mask;
u32 *cbm;
+ u32 new_cbm;
+ bool have_new_cbm;
};

/**
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 5a533fefefa0..329b8876b984 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -309,7 +309,6 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)

cpumask_set_cpu(cpu, &d->cpu_mask);
list_add_tail(&d->list, add_pos);
- r->num_domains++;
}

static void domain_remove_cpu(int cpu, struct rdt_resource *r)
@@ -325,7 +324,6 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)

cpumask_clear_cpu(cpu, &d->cpu_mask);
if (cpumask_empty(&d->cpu_mask)) {
- r->num_domains--;
kfree(d->cbm);
list_del(&d->list);
kfree(d);
diff --git a/arch/x86/kernel/cpu/intel_rdt_schemata.c b/arch/x86/kernel/cpu/intel_rdt_schemata.c
index f369cb8db0d5..c8e054f4f269 100644
--- a/arch/x86/kernel/cpu/intel_rdt_schemata.c
+++ b/arch/x86/kernel/cpu/intel_rdt_schemata.c
@@ -56,7 +56,7 @@ static bool cbm_validate(unsigned long var, struct rdt_resource *r)
* Read one cache bit mask (hex). Check that it is valid for the current
* resource type.
*/
-static int parse_cbm(char *buf, struct rdt_resource *r)
+static int parse_cbm(char *buf, struct rdt_resource *r, struct rdt_domain *d)
{
unsigned long data;
int ret;
@@ -66,7 +66,8 @@ static int parse_cbm(char *buf, struct rdt_resource *r)
return ret;
if (!cbm_validate(data, r))
return -EINVAL;
- r->tmp_cbms[r->num_tmp_cbms++] = data;
+ d->new_cbm = data;
+ d->have_new_cbm = true;

return 0;
}
@@ -74,8 +75,8 @@ static int parse_cbm(char *buf, struct rdt_resource *r)
/*
* For each domain in this resource we expect to find a series of:
* id=mask
- * separated by ";". The "id" is in decimal, and must appear in the
- * right order.
+ * separated by ";". The "id" is in decimal, and must match one of
+ * the "id"s for this resource.
*/
static int parse_line(char *line, struct rdt_resource *r)
{
@@ -83,21 +84,21 @@ static int parse_line(char *line, struct rdt_resource *r)
struct rdt_domain *d;
unsigned long dom_id;

+next:
+ if (!line || line[0] == '\0')
+ return 0;
+ dom = strsep(&line, ";");
+ id = strsep(&dom, "=");
+ if (!dom || kstrtoul(id, 10, &dom_id))
+ return -EINVAL;
list_for_each_entry(d, &r->domains, list) {
- dom = strsep(&line, ";");
- if (!dom)
- return -EINVAL;
- id = strsep(&dom, "=");
- if (kstrtoul(id, 10, &dom_id) || dom_id != d->id)
- return -EINVAL;
- if (parse_cbm(dom, r))
- return -EINVAL;
+ if (d->id == dom_id) {
+ if (parse_cbm(dom, r, d))
+ return -EINVAL;
+ goto next;
+ }
}
-
- /* Any garbage at the end of the line? */
- if (line && line[0])
- return -EINVAL;
- return 0;
+ return -EINVAL;
}

static int update_domains(struct rdt_resource *r, int closid)
@@ -105,7 +106,7 @@ static int update_domains(struct rdt_resource *r, int closid)
struct msr_param msr_param;
cpumask_var_t cpu_mask;
struct rdt_domain *d;
- int cpu, idx = 0;
+ int cpu;

if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
return -ENOMEM;
@@ -115,9 +116,13 @@ static int update_domains(struct rdt_resource *r, int closid)
msr_param.res = r;

list_for_each_entry(d, &r->domains, list) {
- cpumask_set_cpu(cpumask_any(&d->cpu_mask), cpu_mask);
- d->cbm[msr_param.low] = r->tmp_cbms[idx++];
+ if (d->have_new_cbm && d->new_cbm != d->cbm[closid]) {
+ cpumask_set_cpu(cpumask_any(&d->cpu_mask), cpu_mask);
+ d->cbm[closid] = d->new_cbm;
+ }
}
+ if (cpumask_empty(cpu_mask))
+ goto done;
cpu = get_cpu();
/* Update CBM on this cpu if it's in cpu_mask. */
if (cpumask_test_cpu(cpu, cpu_mask))
@@ -126,6 +131,7 @@ static int update_domains(struct rdt_resource *r, int closid)
smp_call_function_many(cpu_mask, rdt_cbm_update, &msr_param, 1);
put_cpu();

+done:
free_cpumask_var(cpu_mask);

return 0;
@@ -135,10 +141,10 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
struct rdtgroup *rdtgrp;
+ struct rdt_domain *dom;
struct rdt_resource *r;
char *tok, *resname;
int closid, ret = 0;
- u32 *l3_cbms = NULL;

/* Valid input requires a trailing newline */
if (nbytes == 0 || buf[nbytes - 1] != '\n')
@@ -153,16 +159,9 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,

closid = rdtgrp->closid;

- /* get scratch space to save all the masks while we validate input */
- for_each_enabled_rdt_resource(r) {
- r->tmp_cbms = kcalloc(r->num_domains, sizeof(*l3_cbms),
- GFP_KERNEL);
- if (!r->tmp_cbms) {
- ret = -ENOMEM;
- goto out;
- }
- r->num_tmp_cbms = 0;
- }
+ for_each_enabled_rdt_resource(r)
+ list_for_each_entry(dom, &r->domains, list)
+ dom->have_new_cbm = false;

while ((tok = strsep(&buf, "\n")) != NULL) {
resname = strsep(&tok, ":");
@@ -185,14 +184,6 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
}
}

- /* Did the parser find all the masks we need? */
- for_each_enabled_rdt_resource(r) {
- if (r->num_tmp_cbms != r->num_domains) {
- ret = -EINVAL;
- goto out;
- }
- }
-
for_each_enabled_rdt_resource(r) {
ret = update_domains(r, closid);
if (ret)
@@ -201,10 +192,6 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,

out:
rdtgroup_kn_unlock(of->kn);
- for_each_enabled_rdt_resource(r) {
- kfree(r->tmp_cbms);
- r->tmp_cbms = NULL;
- }
return ret ?: nbytes;
}

--
2.7.4

2017-03-24 23:16:57

by Fenghua Yu

Subject: Re: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file

On Fri, Mar 24, 2017 at 10:51:58AM -0700, Luck, Tony wrote:
> From: Tony Luck <[email protected]>
>
> The schemata file can have multiple lines and it is cumbersome to
> update from shell scripts.

"from shell scripts" makes people a bit confused. Not just shell scripts,
C or Java code also can be cumbersome to update the file, right?

> +Reading/writing the schemata file
> +---------------------------------
> +Reading the schemata file will show the state of all resources
> +on all domains. When writing you only need to specify those values
> +which you wish to change. E.g.

The user can still write all values, including unchanged ones, just like
4.10 does. Maybe we should say "When writing you can either write all
values or specify only those values which you wish to change."?

Thanks.

-Fenghua

2017-03-30 18:32:57

by Shivappa Vikas

Subject: Re: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file



On Fri, 24 Mar 2017, Fenghua Yu wrote:

> On Fri, Mar 24, 2017 at 10:51:58AM -0700, Luck, Tony wrote:
>> From: Tony Luck <[email protected]>
>>
>> The schemata file can have multiple lines and it is cumbersome to
>> update from shell scripts.
>
> "from shell scripts" makes people a bit confused. Not just shell scripts,
> C or Java code also can be cumbersome to update the file, right?
>
>> +Reading/writing the schemata file
>> +---------------------------------
>> +Reading the schemata file will show the state of all resources
>> +on all domains. When writing you only need to specify those values
>> +which you wish to change. E.g.
>
> The user can still write all values, including unchanged ones, just like
> 4.10 does. Maybe we should say "When writing you can either write all
> values or specify only those values which you wish to change."?

Will integrate these changes into Tony's patch and send it as part of the next
series.

Thanks,
Vikas

2017-03-31 08:24:23

by Thomas Gleixner

Subject: Re: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file

On Fri, 24 Mar 2017, Luck, Tony wrote:
> +Reading/writing the schemata file
> +---------------------------------
> +Reading the schemata file will show the state of all resources
> +on all domains. When writing you only need to specify those values
> +which you wish to change. E.g.
> +
> +# cat schemata
> +L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
> +L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
> +# echo "L3DATA:2=3c0;" > schemata
> +# cat schemata
> +L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
> +L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

It would be nice if the output would cover the mask width, i.e:

L3DATA:0=fffff;1=fffff;2=003c0;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff

because that makes it tabular and simpler to read.
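A minimal sketch of how the show side could produce that (the seq_file context
and the cbm_len field are assumptions here, not taken from this thread):

	/* Hypothetical: zero-pad every CBM to the width of the full mask. */
	int width = DIV_ROUND_UP(r->cbm_len, 4);
	struct rdt_domain *dom;
	char *sep = "";

	seq_printf(seq, "%s:", r->name);
	list_for_each_entry(dom, &r->domains, list) {
		seq_printf(seq, "%s%d=%0*x", sep, dom->id, width, dom->cbm[closid]);
		sep = ";";
	}
	seq_putc(seq, '\n');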

Thanks,

tglx

2017-03-31 17:40:17

by Shivappa Vikas

Subject: Re: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file



On Fri, 31 Mar 2017, Thomas Gleixner wrote:

> On Fri, 24 Mar 2017, Luck, Tony wrote:
>> +Reading/writing the schemata file
>> +---------------------------------
>> +Reading the schemata file will show the state of all resources
>> +on all domains. When writing you only need to specify those values
>> +which you wish to change. E.g.
>> +
>> +# cat schemata
>> +L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
>> +L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
>> +# echo "L3DATA:2=3c0;" > schemata
>> +# cat schemata
>> +L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
>> +L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
>
> It would be nice if the output would cover the mask width, i.e:
>
> L3DATA:0=fffff;1=fffff;2=003c0;3=fffff
> L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
>
> because that makes it tabular and simpler to read.

Will add this.

A sample with MBA:

L3DATA:0=fffff;1=fffff;2=003c0;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
MB:0= 10;1= 20;2= 100;3= 100

Thanks,
Vikas

>
> Thanks,
>
> tglx
>

2017-03-31 17:49:24

by Thomas Gleixner

Subject: Re: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file

On Fri, 31 Mar 2017, Shivappa Vikas wrote:
> On Fri, 31 Mar 2017, Thomas Gleixner wrote:
>
> > On Fri, 24 Mar 2017, Luck, Tony wrote:
> > > +Reading/writing the schemata file
> > > +---------------------------------
> > > +Reading the schemata file will show the state of all resources
> > > +on all domains. When writing you only need to specify those values
> > > +which you wish to change. E.g.
> > > +
> > > +# cat schemata
> > > +L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
> > > +L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
> > > +# echo "L3DATA:2=3c0;" > schemata
> > > +# cat schemata
> > > +L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
> > > +L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
> >
> > It would be nice if the output would cover the mask width, i.e:
> >
> > L3DATA:0=fffff;1=fffff;2=003c0;3=fffff
> > L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
> >
> > because that makes it tabular and simpler to read.
>
> Will add this.
>
> A sample with MBA:
>
> L3DATA:0=fffff;1=fffff;2=003c0;3=fffff
> L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
> MB:0= 10;1= 20;2= 100;3= 100

Well done, the widest group sets the width for all others!

Thanks,

tglx

2017-03-31 18:45:50

by Shivappa Vikas

Subject: Re: [PATCH] x86/intel_rdt: Implement "update" mode when writing schemata file



On Fri, 24 Mar 2017, Luck, Tony wrote:

> +++ b/arch/x86/kernel/cpu/intel_rdt_schemata.c
> @@ -56,7 +56,7 @@ static bool cbm_validate(unsigned long var, struct rdt_resource *r)
> * Read one cache bit mask (hex). Check that it is valid for the current
> * resource type.
> */
> -static int parse_cbm(char *buf, struct rdt_resource *r)
> +static int parse_cbm(char *buf, struct rdt_resource *r, struct rdt_domain *d)
> {
> unsigned long data;
> int ret;
> @@ -66,7 +66,8 @@ static int parse_cbm(char *buf, struct rdt_resource *r)
> return ret;

We would need a check here for repeated domain entries, to error out on something like:

echo -e "L3:1=7f;1=7;1=7ff" | cat > /sys/fs/resctrl/p1/schemata

	if (d->have_new_cbm)
		return -EINVAL;
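In context, parse_cbm() with that check would then look something like this
(a sketch only; the kstrtoul() call is reconstructed from the surrounding
code, which parses the mask as hex):

static int parse_cbm(char *buf, struct rdt_resource *r, struct rdt_domain *d)
{
	unsigned long data;
	int ret;

	/* Reject a second mask for a domain already seen on this line */
	if (d->have_new_cbm)
		return -EINVAL;

	ret = kstrtoul(buf, 16, &data);
	if (ret)
		return ret;
	if (!cbm_validate(data, r))
		return -EINVAL;

	d->new_cbm = data;
	d->have_new_cbm = true;

	return 0;
}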


> if (!cbm_validate(data, r))
> return -EINVAL;
> - r->tmp_cbms[r->num_tmp_cbms++] = data;
> + d->new_cbm = data;
> + d->have_new_cbm = true;
>