2017-08-09 18:44:41

by Shivappa Vikas

Subject: [PATCH 0/3 V3] Cqm3 based on resctrl fs with MBM support

Sending some changes on top of the V2 version, based on Thomas's feedback that
the worst-case limbo processing in IPIs was still not acceptable, and to
improve cache line access during scheduling.
This also includes a fix for a reported issue during hot cpu processing.

Patches are based on tip/x86/cache (where V2 was merged).

Vikas Shivappa (3):
x86/intel_rdt/cqm: Add fix to clear the default RMID during hotcpu
x86/intel_rdt: Modify the intel_pqr_state for better performance
x86/intel_rdt/cqm: Improve limbo list processing

arch/x86/include/asm/intel_rdt_sched.h | 30 +++--
arch/x86/kernel/cpu/intel_rdt.c | 34 +++--
arch/x86/kernel/cpu/intel_rdt.h | 10 ++
arch/x86/kernel/cpu/intel_rdt_monitor.c | 209 +++++++++++++------------------
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 10 +-
5 files changed, 146 insertions(+), 147 deletions(-)

--
1.9.1


2017-08-09 18:44:42

by Shivappa Vikas

Subject: [PATCH 1/3] x86/intel_rdt/cqm: Add fix to clear the default RMID during hotcpu

The user-configured per-cpu default RMID is not cleared during cpu
hotplug. This may lead to incorrect RMID values after a cpu goes offline
and comes back online again. Clear the per-cpu default RMID during cpu
offline and online handling.

Reported-by: Prakyha Sai Praneeth <[email protected]>
Signed-off-by: Vikas Shivappa <[email protected]>
---
arch/x86/kernel/cpu/intel_rdt.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index da4f389..4b9edb2 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -551,6 +551,7 @@ static void clear_closid_rmid(int cpu)
struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

per_cpu(rdt_cpu_default.closid, cpu) = 0;
+ per_cpu(rdt_cpu_default.rmid, cpu) = 0;
state->closid = 0;
state->rmid = 0;
wrmsr(IA32_PQR_ASSOC, 0, 0);
--
1.9.1

2017-08-09 18:45:04

by Shivappa Vikas

Subject: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing

During a mkdir, we synchronously check the entire limbo list on each
package for free RMIDs by sending IPIs, and in the worst case this ends
up taking an intolerable amount of time in IPIs. For example, on SKL we
could end up reading all 192 RMIDs for llc_occupancy.

Modify this to avoid the use of IPIs. We use an asynchronous worker on
each package which periodically scans the limbo list and moves RMIDs that
have:

llc_occupancy < threshold_occupancy

on all packages to the free list. When the free list is empty, mkdir
returns -ENOSPC if the limbo list is also empty, or -EBUSY if there are
RMIDs on the limbo list but none yet on the free list. As a side effect
of no longer accessing the limbo list in parallel from IPIs, we can
simplify some of the related data structures.
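
Put differently, when an RMID is freed it gets a per-RMID busy count: the
local package is checked right away and counted only if its occupancy is
still above the threshold, while every remote package is assumed dirty and
counted unconditionally (reading it would need an IPI). Each package's
limbo worker then decrements the count as its local occupancy drains, and
only when the count reaches zero does the RMID return to the free list. A
simplified sketch of that bookkeeping (llc_occupancy() and local_package()
are shorthand here for __rmid_read(rmid, QOS_L3_OCCUP_EVENT_ID) and
cpumask_test_cpu(cpu, &d->cpu_mask); the real code is in the diff below):

	/* free_rmid() side: count the packages on which the RMID must drain */
	entry->busy = 0;
	list_for_each_entry(d, &r->domains, list) {
		if (local_package(d) &&
		    llc_occupancy(d, entry->rmid) <= intel_cqm_threshold)
			continue;
		set_bit(entry->rmid, d->rmid_busy_llc);
		entry->busy++;
	}

	/* per-package limbo worker: drop the count as occupancy drains */
	if (llc_occupancy(d, entry->rmid) <= intel_cqm_threshold) {
		clear_bit(entry->rmid, d->rmid_busy_llc);
		if (!--entry->busy)
			list_add_tail(&entry->list, &rmid_free_lru);
	}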

Signed-off-by: Vikas Shivappa <[email protected]>
---
arch/x86/kernel/cpu/intel_rdt.c | 25 +++-
arch/x86/kernel/cpu/intel_rdt.h | 10 ++
arch/x86/kernel/cpu/intel_rdt_monitor.c | 209 ++++++++++++++------------------
3 files changed, 120 insertions(+), 124 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 97c8d83..792f142 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
GFP_KERNEL);
if (!d->rmid_busy_llc)
return -ENOMEM;
+ INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
+ if (has_busy_rmid(r, d))
+ cqm_setup_limbo_handler(d);
}
if (is_mbm_total_enabled()) {
tsize = sizeof(*d->mbm_total);
@@ -536,11 +539,25 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
list_del(&d->list);
if (is_mbm_enabled())
cancel_delayed_work(&d->mbm_over);
+
+ if (is_llc_occupancy_enabled() &&
+ has_busy_rmid(r, d))
+ cancel_delayed_work(&d->cqm_limbo);
+
kfree(d);
- } else if (r == &rdt_resources_all[RDT_RESOURCE_L3] &&
- cpu == d->mbm_work_cpu && is_mbm_enabled()) {
- cancel_delayed_work(&d->mbm_over);
- mbm_setup_overflow_handler(d);
+ return;
+ }
+
+ if (r == &rdt_resources_all[RDT_RESOURCE_L3]) {
+ if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
+ cancel_delayed_work(&d->mbm_over);
+ mbm_setup_overflow_handler(d);
+ }
+ if (is_llc_occupancy_enabled() && cpu == d->mbm_work_cpu &&
+ has_busy_rmid(r, d)) {
+ cancel_delayed_work(&d->cqm_limbo);
+ cqm_setup_limbo_handler(d);
+ }
}
}

diff --git a/arch/x86/kernel/cpu/intel_rdt.h b/arch/x86/kernel/cpu/intel_rdt.h
index 4040bf1..6d57d7e 100644
--- a/arch/x86/kernel/cpu/intel_rdt.h
+++ b/arch/x86/kernel/cpu/intel_rdt.h
@@ -20,6 +20,8 @@
#define QOS_L3_MBM_TOTAL_EVENT_ID 0x02
#define QOS_L3_MBM_LOCAL_EVENT_ID 0x03

+#define CQM_LIMBOCHECK_INTERVAL 1000
+
#define MBM_CNTR_WIDTH 24
#define MBM_OVERFLOW_INTERVAL 1000

@@ -187,8 +189,11 @@ struct mbm_state {
* @mbm_total: saved state for MBM total bandwidth
* @mbm_local: saved state for MBM local bandwidth
* @mbm_over: worker to periodically read MBM h/w counters
+ * @cqm_limbo: worker to periodically read CQM h/w counters
* @mbm_work_cpu:
* worker cpu for MBM h/w counters
+ * @cqm_work_cpu:
+ * worker cpu for CQM h/w counters
* @ctrl_val: array of cache or mem ctrl values (indexed by CLOSID)
* @new_ctrl: new ctrl value to be loaded
* @have_new_ctrl: did user provide new_ctrl for this domain
@@ -201,7 +206,9 @@ struct rdt_domain {
struct mbm_state *mbm_total;
struct mbm_state *mbm_local;
struct delayed_work mbm_over;
+ struct delayed_work cqm_limbo;
int mbm_work_cpu;
+ int cqm_work_cpu;
u32 *ctrl_val;
u32 new_ctrl;
bool have_new_ctrl;
@@ -424,5 +431,8 @@ void mon_event_read(struct rmid_read *rr, struct rdt_domain *d,
struct rdtgroup *rdtgrp, int evtid, int first);
void mbm_setup_overflow_handler(struct rdt_domain *dom);
void mbm_handle_overflow(struct work_struct *work);
+void cqm_setup_limbo_handler(struct rdt_domain *dom);
+void cqm_handle_limbo(struct work_struct *work);
+bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d);

#endif /* _ASM_X86_INTEL_RDT_H */
diff --git a/arch/x86/kernel/cpu/intel_rdt_monitor.c b/arch/x86/kernel/cpu/intel_rdt_monitor.c
index d6bfdfd..d9c03a0 100644
--- a/arch/x86/kernel/cpu/intel_rdt_monitor.c
+++ b/arch/x86/kernel/cpu/intel_rdt_monitor.c
@@ -33,7 +33,7 @@

struct rmid_entry {
u32 rmid;
- atomic_t busy;
+ int busy;
struct list_head list;
};

@@ -45,13 +45,13 @@ struct rmid_entry {
static LIST_HEAD(rmid_free_lru);

/**
- * @rmid_limbo_lru list of currently unused but (potentially)
+ * @rmid_limbo_count count of currently unused but (potentially)
* dirty RMIDs.
- * This list contains RMIDs that no one is currently using but that
+ * This counts RMIDs that no one is currently using but that
* may have a occupancy value > intel_cqm_threshold. User can change
* the threshold occupancy value.
*/
-static LIST_HEAD(rmid_limbo_lru);
+unsigned int rmid_limbo_count;

/**
* @rmid_entry - The entry in the limbo and free lists.
@@ -104,123 +104,45 @@ static u64 __rmid_read(u32 rmid, u32 eventid)
}

/*
- * Walk the limbo list looking at any RMIDs that are flagged in the
- * domain rmid_busy_llc bitmap as busy. If the reported LLC occupancy
- * is below the threshold clear the busy bit and decrement the count.
- * If the busy count gets to zero on an RMID we stop looking.
- * This can be called from an IPI.
- * We need an atomic for the busy count because multiple CPUs may check
- * the same RMID at the same time.
+ * Check the RMIDs that are marked as busy for this domain. If the
+ * reported LLC occupancy is below the threshold clear the busy bit and
+ * decrement the count. If the busy count gets to zero on an RMID, we
+ * free the RMID
*/
-static bool __check_limbo(struct rdt_domain *d)
+static void __check_limbo(struct rdt_domain *d)
{
struct rmid_entry *entry;
- u64 val;
-
- list_for_each_entry(entry, &rmid_limbo_lru, list) {
- if (!test_bit(entry->rmid, d->rmid_busy_llc))
- continue;
- val = __rmid_read(entry->rmid, QOS_L3_OCCUP_EVENT_ID);
- if (val <= intel_cqm_threshold) {
- clear_bit(entry->rmid, d->rmid_busy_llc);
- if (atomic_dec_and_test(&entry->busy))
- return true;
- }
- }
- return false;
-}
-
-static void check_limbo(void *arg)
-{
- struct rdt_domain *d;
-
- d = get_domain_from_cpu(smp_processor_id(),
- &rdt_resources_all[RDT_RESOURCE_L3]);
-
- if (d)
- __check_limbo(d);
-}
-
-static bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d)
-{
- return find_first_bit(d->rmid_busy_llc, r->num_rmid) != r->num_rmid;
-}
-
-/*
- * Scan the limbo list and move all entries that are below the
- * intel_cqm_threshold to the free list.
- * Return "true" if the limbo list is empty, "false" if there are
- * still some RMIDs there.
- */
-static bool try_freeing_limbo_rmid(void)
-{
- struct rmid_entry *entry, *tmp;
struct rdt_resource *r;
- cpumask_var_t cpu_mask;
- struct rdt_domain *d;
- bool ret = true;
- int cpu;
-
- if (list_empty(&rmid_limbo_lru))
- return ret;
+ u32 crmid = 1, nrmid;
+ u64 val;

r = &rdt_resources_all[RDT_RESOURCE_L3];

- cpu = get_cpu();
-
/*
- * First see if we can free up an RMID by checking busy values
- * on the local package.
+ * We skip the RMID 0 and start from RMID 1 and check all
+ * the RMIDs that are marked as busy if the occupancy < threshold.
*/
- d = get_domain_from_cpu(cpu, r);
- if (d && has_busy_rmid(r, d) && __check_limbo(d)) {
- list_for_each_entry_safe(entry, tmp, &rmid_limbo_lru, list) {
- if (atomic_read(&entry->busy) == 0) {
- list_del(&entry->list);
+ for (;;) {
+ nrmid = find_next_bit(d->rmid_busy_llc, r->num_rmid, crmid);
+ if (nrmid == r->num_rmid)
+ break;
+
+ entry = __rmid_entry(nrmid);
+ val = __rmid_read(entry->rmid, QOS_L3_OCCUP_EVENT_ID);
+ if (val <= intel_cqm_threshold) {
+ clear_bit(entry->rmid, d->rmid_busy_llc);
+ if (!--entry->busy) {
+ rmid_limbo_count--;
list_add_tail(&entry->list, &rmid_free_lru);
- goto done;
}
}
+ crmid = nrmid + 1;
}
+}

- if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL)) {
- ret = false;
- goto done;
- }
-
- /*
- * Build a mask of other domains that have busy RMIDs
- */
- list_for_each_entry(d, &r->domains, list) {
- if (!cpumask_test_cpu(cpu, &d->cpu_mask) &&
- has_busy_rmid(r, d))
- cpumask_set_cpu(cpumask_any(&d->cpu_mask), cpu_mask);
- }
- if (cpumask_empty(cpu_mask)) {
- ret = false;
- goto free_mask;
- }
-
- /*
- * Scan domains with busy RMIDs to check if they still are busy
- */
- on_each_cpu_mask(cpu_mask, check_limbo, NULL, true);
-
- /* Walk limbo list moving all free RMIDs to the &rmid_free_lru list */
- list_for_each_entry_safe(entry, tmp, &rmid_limbo_lru, list) {
- if (atomic_read(&entry->busy) != 0) {
- ret = false;
- continue;
- }
- list_del(&entry->list);
- list_add_tail(&entry->list, &rmid_free_lru);
- }
-
-free_mask:
- free_cpumask_var(cpu_mask);
-done:
- put_cpu();
- return ret;
+bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d)
+{
+ return find_first_bit(d->rmid_busy_llc, r->num_rmid) != r->num_rmid;
}

/*
@@ -231,15 +153,11 @@ static bool try_freeing_limbo_rmid(void)
int alloc_rmid(void)
{
struct rmid_entry *entry;
- bool ret;

lockdep_assert_held(&rdtgroup_mutex);

- if (list_empty(&rmid_free_lru)) {
- ret = try_freeing_limbo_rmid();
- if (list_empty(&rmid_free_lru))
- return ret ? -ENOSPC : -EBUSY;
- }
+ if (list_empty(&rmid_free_lru))
+ return rmid_limbo_count ? -EBUSY : -ENOSPC;

entry = list_first_entry(&rmid_free_lru,
struct rmid_entry, list);
@@ -252,11 +170,12 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
{
struct rdt_resource *r;
struct rdt_domain *d;
- int cpu, nbusy = 0;
+ int cpu;
u64 val;

r = &rdt_resources_all[RDT_RESOURCE_L3];

+ entry->busy = 0;
cpu = get_cpu();
list_for_each_entry(d, &r->domains, list) {
if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
@@ -264,17 +183,22 @@ static void add_rmid_to_limbo(struct rmid_entry *entry)
if (val <= intel_cqm_threshold)
continue;
}
+
+ /*
+ * For the first limbo RMID in the domain,
+ * setup up the limbo worker.
+ */
+ if (!has_busy_rmid(r, d))
+ cqm_setup_limbo_handler(d);
set_bit(entry->rmid, d->rmid_busy_llc);
- nbusy++;
+ entry->busy++;
}
put_cpu();

- if (nbusy) {
- atomic_set(&entry->busy, nbusy);
- list_add_tail(&entry->list, &rmid_limbo_lru);
- } else {
+ if (entry->busy)
+ rmid_limbo_count++;
+ else
list_add_tail(&entry->list, &rmid_free_lru);
- }
}

void free_rmid(u32 rmid)
@@ -387,6 +311,51 @@ static void mbm_update(struct rdt_domain *d, int rmid)
}
}

+/*
+ * Handler to scan the limbo list and move the RMIDs
+ * to free list whose occupancy < threshold_occupancy.
+ */
+void cqm_handle_limbo(struct work_struct *work)
+{
+ unsigned long delay = msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL);
+ int cpu = smp_processor_id();
+ struct rdt_resource *r;
+ struct rdt_domain *d;
+
+ mutex_lock(&rdtgroup_mutex);
+
+ r = &rdt_resources_all[RDT_RESOURCE_L3];
+ d = get_domain_from_cpu(cpu, r);
+
+ if (!d) {
+ pr_warn_once("Failure to get domain for limbo worker\n");
+ goto out_unlock;
+ }
+
+ __check_limbo(d);
+
+ if (!has_busy_rmid(r, d))
+ goto out_unlock;
+ schedule_delayed_work_on(cpu, &d->cqm_limbo, delay);
+
+out_unlock:
+ mutex_unlock(&rdtgroup_mutex);
+}
+
+void cqm_setup_limbo_handler(struct rdt_domain *dom)
+{
+ unsigned long delay = msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL);
+ struct rdt_resource *r;
+ int cpu;
+
+ r = &rdt_resources_all[RDT_RESOURCE_L3];
+
+ cpu = cpumask_any(&dom->cpu_mask);
+ dom->cqm_work_cpu = cpu;
+
+ schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
+}
+
void mbm_handle_overflow(struct work_struct *work)
{
unsigned long delay = msecs_to_jiffies(MBM_OVERFLOW_INTERVAL);
--
1.9.1

2017-08-09 18:45:17

by Shivappa Vikas

Subject: [PATCH 2/3] x86/intel_rdt: Modify the intel_pqr_state for better performance

Currently we have pqr_state and rdt_cpu_default, which store the cached
CLOSID/RMIDs and the user-configured cpu default values respectively. We
touch both of these during context switch. Put them in one structure so
that we can spare a cache line.

Reported-by: Thomas Gleixner <[email protected]>
Signed-off-by: Vikas Shivappa <[email protected]>
---
arch/x86/include/asm/intel_rdt_sched.h | 30 +++++++++++++++++-------------
arch/x86/kernel/cpu/intel_rdt.c | 10 ++++------
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 10 +++++-----
3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/intel_rdt_sched.h b/arch/x86/include/asm/intel_rdt_sched.h
index 3badc0a..b4bbf8b 100644
--- a/arch/x86/include/asm/intel_rdt_sched.h
+++ b/arch/x86/include/asm/intel_rdt_sched.h
@@ -10,8 +10,10 @@

/**
* struct intel_pqr_state - State cache for the PQR MSR
- * @rmid: The cached Resource Monitoring ID
- * @closid: The cached Class Of Service ID
+ * @cur_rmid: The cached Resource Monitoring ID
+ * @cur_closid: The cached Class Of Service ID
+ * @default_rmid: The user assigned Resource Monitoring ID
+ * @default_closid: The user assigned cached Class Of Service ID
*
* The upper 32 bits of IA32_PQR_ASSOC contain closid and the
* lower 10 bits rmid. The update to IA32_PQR_ASSOC always
@@ -22,12 +24,13 @@
* not change.
*/
struct intel_pqr_state {
- u32 rmid;
- u32 closid;
+ u32 cur_rmid;
+ u32 cur_closid;
+ u32 default_rmid;
+ u32 default_closid;
};

DECLARE_PER_CPU(struct intel_pqr_state, pqr_state);
-DECLARE_PER_CPU_READ_MOSTLY(struct intel_pqr_state, rdt_cpu_default);

DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
@@ -49,8 +52,9 @@ struct intel_pqr_state {
*/
static void __intel_rdt_sched_in(void)
{
- struct intel_pqr_state newstate = this_cpu_read(rdt_cpu_default);
- struct intel_pqr_state *curstate = this_cpu_ptr(&pqr_state);
+ struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
+ u32 closid = state->default_closid;
+ u32 rmid = state->default_rmid;

/*
* If this task has a closid/rmid assigned, use it.
@@ -58,18 +62,18 @@ static void __intel_rdt_sched_in(void)
*/
if (static_branch_likely(&rdt_alloc_enable_key)) {
if (current->closid)
- newstate.closid = current->closid;
+ closid = current->closid;
}

if (static_branch_likely(&rdt_mon_enable_key)) {
if (current->rmid)
- newstate.rmid = current->rmid;
+ rmid = current->rmid;
}

- if (newstate.closid != curstate->closid ||
- newstate.rmid != curstate->rmid) {
- *curstate = newstate;
- wrmsr(IA32_PQR_ASSOC, newstate.rmid, newstate.closid);
+ if (closid != state->cur_closid || rmid != state->cur_rmid) {
+ state->cur_closid = closid;
+ state->cur_rmid = rmid;
+ wrmsr(IA32_PQR_ASSOC, rmid, closid);
}
}

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 4b9edb2..97c8d83 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -47,8 +47,6 @@
*/
DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);

-DEFINE_PER_CPU_READ_MOSTLY(struct intel_pqr_state, rdt_cpu_default);
-
/*
* Used to store the max resource name width and max resource data width
* to display the schemata in a tabular format
@@ -550,10 +548,10 @@ static void clear_closid_rmid(int cpu)
{
struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

- per_cpu(rdt_cpu_default.closid, cpu) = 0;
- per_cpu(rdt_cpu_default.rmid, cpu) = 0;
- state->closid = 0;
- state->rmid = 0;
+ state->default_closid = 0;
+ state->default_rmid = 0;
+ state->cur_closid = 0;
+ state->cur_rmid = 0;
wrmsr(IA32_PQR_ASSOC, 0, 0);
}

diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 2621ae3..86a6979 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -202,8 +202,8 @@ static void update_cpu_closid_rmid(void *info)
struct rdtgroup *r = info;

if (r) {
- this_cpu_write(rdt_cpu_default.closid, r->closid);
- this_cpu_write(rdt_cpu_default.rmid, r->mon.rmid);
+ this_cpu_write(pqr_state.default_closid, r->closid);
+ this_cpu_write(pqr_state.default_rmid, r->mon.rmid);
}

/*
@@ -1733,7 +1733,7 @@ static int rdtgroup_rmdir_mon(struct kernfs_node *kn, struct rdtgroup *rdtgrp,

/* Update per cpu rmid of the moved CPUs first */
for_each_cpu(cpu, &rdtgrp->cpu_mask)
- per_cpu(rdt_cpu_default.rmid, cpu) = prdtgrp->mon.rmid;
+ per_cpu(pqr_state.default_rmid, cpu) = prdtgrp->mon.rmid;
/*
* Update the MSR on moved CPUs and CPUs which have moved
* task running on them.
@@ -1774,8 +1774,8 @@ static int rdtgroup_rmdir_ctrl(struct kernfs_node *kn, struct rdtgroup *rdtgrp,

/* Update per cpu closid and rmid of the moved CPUs first */
for_each_cpu(cpu, &rdtgrp->cpu_mask) {
- per_cpu(rdt_cpu_default.closid, cpu) = rdtgroup_default.closid;
- per_cpu(rdt_cpu_default.rmid, cpu) = rdtgroup_default.mon.rmid;
+ per_cpu(pqr_state.default_closid, cpu) = rdtgroup_default.closid;
+ per_cpu(pqr_state.default_rmid, cpu) = rdtgroup_default.mon.rmid;
}

/*
--
1.9.1

2017-08-14 09:45:51

by Thomas Gleixner

Subject: Re: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing

On Wed, 9 Aug 2017, Vikas Shivappa wrote:

> @@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
> GFP_KERNEL);
> if (!d->rmid_busy_llc)
> return -ENOMEM;
> + INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
> + if (has_busy_rmid(r, d))
> + cqm_setup_limbo_handler(d);

This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
would a bit be set here?

> }
> if (is_mbm_total_enabled()) {
> tsize = sizeof(*d->mbm_total);
> @@ -536,11 +539,25 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
> list_del(&d->list);
> if (is_mbm_enabled())
> cancel_delayed_work(&d->mbm_over);
> +
> + if (is_llc_occupancy_enabled() &&
> + has_busy_rmid(r, d))

What is that line break helping here and why can't you just unconditionally
cancel the work?

> + cancel_delayed_work(&d->cqm_limbo);
> +
> kfree(d);
> - } else if (r == &rdt_resources_all[RDT_RESOURCE_L3] &&
> - cpu == d->mbm_work_cpu && is_mbm_enabled()) {
> - cancel_delayed_work(&d->mbm_over);
> - mbm_setup_overflow_handler(d);
> + return;
> + }
> +
> + if (r == &rdt_resources_all[RDT_RESOURCE_L3]) {
> + if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
> + cancel_delayed_work(&d->mbm_over);
> + mbm_setup_overflow_handler(d);

I think this is the wrong approach. If the timer is about to fire you
essentially double the interval. So you better flush the work, which will
reschedule it if needed.

> + }
> + if (is_llc_occupancy_enabled() && cpu == d->mbm_work_cpu &&

That wants to be d->cqm_work_cpu, right?

> + has_busy_rmid(r, d)) {
> + cancel_delayed_work(&d->cqm_limbo);
> + cqm_setup_limbo_handler(d);

See above.

Thanks,

tglx

Subject: [tip:x86/cache] x86/intel_rdt/cqm: Clear the default RMID during hotcpu

Commit-ID: eda61c265f3656be8345fdf0334b3a77829437fc
Gitweb: http://git.kernel.org/tip/eda61c265f3656be8345fdf0334b3a77829437fc
Author: Vikas Shivappa <[email protected]>
AuthorDate: Wed, 9 Aug 2017 11:46:33 -0700
Committer: Thomas Gleixner <[email protected]>
CommitDate: Mon, 14 Aug 2017 11:47:46 +0200

x86/intel_rdt/cqm: Clear the default RMID during hotcpu

The user-configured per-cpu default RMID is not cleared during cpu
hotplug. This may lead to incorrect RMID values after a cpu goes offline
and comes back online again. Clear the per-cpu default RMID during cpu
offline and online handling.

Reported-by: Prakyha Sai Praneeth <[email protected]>
Signed-off-by: Vikas Shivappa <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]

---
arch/x86/kernel/cpu/intel_rdt.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index da4f389..4b9edb2 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -551,6 +551,7 @@ static void clear_closid_rmid(int cpu)
struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

per_cpu(rdt_cpu_default.closid, cpu) = 0;
+ per_cpu(rdt_cpu_default.rmid, cpu) = 0;
state->closid = 0;
state->rmid = 0;
wrmsr(IA32_PQR_ASSOC, 0, 0);

Subject: [tip:x86/cache] x86/intel_rdt: Modify the intel_pqr_state for better performance

Commit-ID: a9110b552d44fedbd1221eb0e5bde81da32d9350
Gitweb: http://git.kernel.org/tip/a9110b552d44fedbd1221eb0e5bde81da32d9350
Author: Vikas Shivappa <[email protected]>
AuthorDate: Wed, 9 Aug 2017 11:46:34 -0700
Committer: Thomas Gleixner <[email protected]>
CommitDate: Mon, 14 Aug 2017 11:47:47 +0200

x86/intel_rdt: Modify the intel_pqr_state for better performance

Currently we have pqr_state and rdt_cpu_default, which store the cached
CLOSID/RMIDs and the user-configured cpu default values respectively. We
touch both of these during context switch. Put them in one structure so
that we can spare a cache line.

Reported-by: Thomas Gleixner <[email protected]>
Signed-off-by: Vikas Shivappa <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]

---
arch/x86/include/asm/intel_rdt_sched.h | 30 +++++++++++++++++-------------
arch/x86/kernel/cpu/intel_rdt.c | 10 ++++------
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c | 10 +++++-----
3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/intel_rdt_sched.h b/arch/x86/include/asm/intel_rdt_sched.h
index 3badc0a..b4bbf8b 100644
--- a/arch/x86/include/asm/intel_rdt_sched.h
+++ b/arch/x86/include/asm/intel_rdt_sched.h
@@ -10,8 +10,10 @@

/**
* struct intel_pqr_state - State cache for the PQR MSR
- * @rmid: The cached Resource Monitoring ID
- * @closid: The cached Class Of Service ID
+ * @cur_rmid: The cached Resource Monitoring ID
+ * @cur_closid: The cached Class Of Service ID
+ * @default_rmid: The user assigned Resource Monitoring ID
+ * @default_closid: The user assigned cached Class Of Service ID
*
* The upper 32 bits of IA32_PQR_ASSOC contain closid and the
* lower 10 bits rmid. The update to IA32_PQR_ASSOC always
@@ -22,12 +24,13 @@
* not change.
*/
struct intel_pqr_state {
- u32 rmid;
- u32 closid;
+ u32 cur_rmid;
+ u32 cur_closid;
+ u32 default_rmid;
+ u32 default_closid;
};

DECLARE_PER_CPU(struct intel_pqr_state, pqr_state);
-DECLARE_PER_CPU_READ_MOSTLY(struct intel_pqr_state, rdt_cpu_default);

DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
@@ -49,8 +52,9 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
*/
static void __intel_rdt_sched_in(void)
{
- struct intel_pqr_state newstate = this_cpu_read(rdt_cpu_default);
- struct intel_pqr_state *curstate = this_cpu_ptr(&pqr_state);
+ struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
+ u32 closid = state->default_closid;
+ u32 rmid = state->default_rmid;

/*
* If this task has a closid/rmid assigned, use it.
@@ -58,18 +62,18 @@ static void __intel_rdt_sched_in(void)
*/
if (static_branch_likely(&rdt_alloc_enable_key)) {
if (current->closid)
- newstate.closid = current->closid;
+ closid = current->closid;
}

if (static_branch_likely(&rdt_mon_enable_key)) {
if (current->rmid)
- newstate.rmid = current->rmid;
+ rmid = current->rmid;
}

- if (newstate.closid != curstate->closid ||
- newstate.rmid != curstate->rmid) {
- *curstate = newstate;
- wrmsr(IA32_PQR_ASSOC, newstate.rmid, newstate.closid);
+ if (closid != state->cur_closid || rmid != state->cur_rmid) {
+ state->cur_closid = closid;
+ state->cur_rmid = rmid;
+ wrmsr(IA32_PQR_ASSOC, rmid, closid);
}
}

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 4b9edb2..97c8d83 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -47,8 +47,6 @@ DEFINE_MUTEX(rdtgroup_mutex);
*/
DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);

-DEFINE_PER_CPU_READ_MOSTLY(struct intel_pqr_state, rdt_cpu_default);
-
/*
* Used to store the max resource name width and max resource data width
* to display the schemata in a tabular format
@@ -550,10 +548,10 @@ static void clear_closid_rmid(int cpu)
{
struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

- per_cpu(rdt_cpu_default.closid, cpu) = 0;
- per_cpu(rdt_cpu_default.rmid, cpu) = 0;
- state->closid = 0;
- state->rmid = 0;
+ state->default_closid = 0;
+ state->default_rmid = 0;
+ state->cur_closid = 0;
+ state->cur_rmid = 0;
wrmsr(IA32_PQR_ASSOC, 0, 0);
}

diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
index 2621ae3..86a6979 100644
--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
@@ -202,8 +202,8 @@ static void update_cpu_closid_rmid(void *info)
struct rdtgroup *r = info;

if (r) {
- this_cpu_write(rdt_cpu_default.closid, r->closid);
- this_cpu_write(rdt_cpu_default.rmid, r->mon.rmid);
+ this_cpu_write(pqr_state.default_closid, r->closid);
+ this_cpu_write(pqr_state.default_rmid, r->mon.rmid);
}

/*
@@ -1733,7 +1733,7 @@ static int rdtgroup_rmdir_mon(struct kernfs_node *kn, struct rdtgroup *rdtgrp,

/* Update per cpu rmid of the moved CPUs first */
for_each_cpu(cpu, &rdtgrp->cpu_mask)
- per_cpu(rdt_cpu_default.rmid, cpu) = prdtgrp->mon.rmid;
+ per_cpu(pqr_state.default_rmid, cpu) = prdtgrp->mon.rmid;
/*
* Update the MSR on moved CPUs and CPUs which have moved
* task running on them.
@@ -1774,8 +1774,8 @@ static int rdtgroup_rmdir_ctrl(struct kernfs_node *kn, struct rdtgroup *rdtgrp,

/* Update per cpu closid and rmid of the moved CPUs first */
for_each_cpu(cpu, &rdtgrp->cpu_mask) {
- per_cpu(rdt_cpu_default.closid, cpu) = rdtgroup_default.closid;
- per_cpu(rdt_cpu_default.rmid, cpu) = rdtgroup_default.mon.rmid;
+ per_cpu(pqr_state.default_closid, cpu) = rdtgroup_default.closid;
+ per_cpu(pqr_state.default_rmid, cpu) = rdtgroup_default.mon.rmid;
}

/*

2017-08-14 19:14:21

by Shivappa Vikas

Subject: Re: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing



On Mon, 14 Aug 2017, Thomas Gleixner wrote:

> On Wed, 9 Aug 2017, Vikas Shivappa wrote:
>
>> @@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
>> GFP_KERNEL);
>> if (!d->rmid_busy_llc)
>> return -ENOMEM;
>> + INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
>> + if (has_busy_rmid(r, d))
>> + cqm_setup_limbo_handler(d);
>
> This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
> would a bit be set here?

If we logically offline all cpus in a package and bring it back, the worker
needs to be scheduled on the package if there were busy RMIDs on this package.
Otherwise those RMIDs never get freed as their busy count stays at 1.

I need to scan the limbo RMIDs and set their bits in the new bitmap after the
allocation and before doing the 'has_busy_rmid' check. Will fix.
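
A rough sketch of that (hypothetical helper name; it reuses __rmid_entry(),
has_busy_rmid() and cqm_setup_limbo_handler() from the patch, and would be
called right after d->rmid_busy_llc is allocated):

	static void domain_mark_limbo_rmids(struct rdt_resource *r,
					    struct rdt_domain *d)
	{
		struct rmid_entry *entry;
		u32 rmid;

		for (rmid = 1; rmid < r->num_rmid; rmid++) {
			entry = __rmid_entry(rmid);
			if (!entry->busy)
				continue;
			/*
			 * Mark the RMID busy in the re-onlined domain and
			 * bump its busy count so the limbo worker's later
			 * decrement is balanced.
			 */
			set_bit(rmid, d->rmid_busy_llc);
			entry->busy++;
		}

		if (has_busy_rmid(r, d))
			cqm_setup_limbo_handler(d);
	}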

>
>> }
>> if (is_mbm_total_enabled()) {
>> tsize = sizeof(*d->mbm_total);
>> @@ -536,11 +539,25 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
>> list_del(&d->list);
>> if (is_mbm_enabled())
>> cancel_delayed_work(&d->mbm_over);
>> +
>> + if (is_llc_occupancy_enabled() &&
>> + has_busy_rmid(r, d))
>
> What is that line break helping here and why can't you just unconditionally
> cancel the work?

Will fix the line break. The has_busy_rmid() check makes sure the worker was
indeed scheduled - that way we only cancel a worker which was actually
scheduled.

>
>> + cancel_delayed_work(&d->cqm_limbo);
>> +
>> kfree(d);
>> - } else if (r == &rdt_resources_all[RDT_RESOURCE_L3] &&
>> - cpu == d->mbm_work_cpu && is_mbm_enabled()) {
>> - cancel_delayed_work(&d->mbm_over);
>> - mbm_setup_overflow_handler(d);
>> + return;
>> + }
>> +
>> + if (r == &rdt_resources_all[RDT_RESOURCE_L3]) {
>> + if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
>> + cancel_delayed_work(&d->mbm_over);
>> + mbm_setup_overflow_handler(d);
>
> I think this is the wrong approach. If the timer is about to fire you
> essentially double the interval. So you better flush the work, which will
> reschedule it if needed.

Ok, will fix. We can flush (set up and run it immediately) the work here
on the new cpu.
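
A rough sketch of that idea (not the actual patch; it assumes the offlined
cpu has already been cleared from d->cpu_mask at this point in
domain_remove_cpu()):

	if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
		cancel_delayed_work(&d->mbm_over);
		/*
		 * Move the worker to a surviving cpu of the domain and run
		 * the pending overflow check immediately instead of waiting
		 * another full MBM_OVERFLOW_INTERVAL.
		 */
		d->mbm_work_cpu = cpumask_any(&d->cpu_mask);
		schedule_delayed_work_on(d->mbm_work_cpu, &d->mbm_over, 0);
	}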

>
>> + }
>> + if (is_llc_occupancy_enabled() && cpu == d->mbm_work_cpu &&
>
> That wants to be d->cqm_work_cpu, right?

Correct - thanks for pointing it out, will fix.

>
>> + has_busy_rmid(r, d)) {
>> + cancel_delayed_work(&d->cqm_limbo);
>> + cqm_setup_limbo_handler(d);
>
> See above.

For CQM the 1s interval is not a hard requirement, but we can flush the work
like MBM to keep it uniform.

Thanks,
Vikas

>
> Thanks,
>
> tglx
>

2017-08-14 21:15:45

by Shivappa Vikas

[permalink] [raw]
Subject: Re: [PATCH 3/3] x86/intel_rdt/cqm: Improve limbo list processing



On Mon, 14 Aug 2017, Shivappa Vikas wrote:

>
>
> On Mon, 14 Aug 2017, Thomas Gleixner wrote:
>
>> On Wed, 9 Aug 2017, Vikas Shivappa wrote:
>>
>>> @@ -426,6 +426,9 @@ static int domain_setup_mon_state(struct rdt_resource
>>> *r, struct rdt_domain *d)
>>> GFP_KERNEL);
>>> if (!d->rmid_busy_llc)
>>> return -ENOMEM;
>>> + INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
>>> + if (has_busy_rmid(r, d))
>>> + cqm_setup_limbo_handler(d);
>>
>> This is beyond silly. d->rmid_busy_llc is allocated a few lines above. How
>> would a bit be set here?
>
> If we logically offline all cpus in a package and bring it back, the worker
> needs to be scheduled on the package if there were busy RMIDs on this
> package. Otherwise that RMID never gets freed as its rmid->busy stays 1..
>
> I needed to scan the limbo list and set the bits for all limbo RMIDs after
> the alloc and before doing the 'has_busy_rmid' check. Will fix

Tony pointed out that there is no guarantee that a domain will come back up once
it's down, so the above issue of entry->busy staying > 0 can still happen.
So I will delete this -

	if (has_busy_rmid(r, d))
		cqm_setup_limbo_handler(d);

and add this when a domain is powered down -

	for each rmid set in d->rmid_busy_llc
		if (!--entry->busy)
			free_rmid(rmid);

We have no way to know if the L3 was indeed flushed (or the package was powered
off). This may lead to incorrect counts in rare scenarios, but we can document
that.
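
In C, that teardown might look roughly like this (hypothetical helper name;
it reuses __rmid_entry(), rmid_limbo_count and rmid_free_lru from this
patch and would run from domain_remove_cpu() with rdtgroup_mutex held):

	static void domain_free_limbo_rmids(struct rdt_resource *r,
					    struct rdt_domain *d)
	{
		struct rmid_entry *entry;
		u32 rmid;

		for_each_set_bit(rmid, d->rmid_busy_llc, r->num_rmid) {
			entry = __rmid_entry(rmid);
			/* drop this domain's contribution to the busy count */
			if (!--entry->busy) {
				rmid_limbo_count--;
				list_add_tail(&entry->list, &rmid_free_lru);
			}
		}
	}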

Thanks,
vikas