From: Vikas Shivappa <vikas.shivappa@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: vikas.shivappa@intel.com, x86@kernel.org, hpa@zytor.com, tglx@linutronix.de,
	mingo@kernel.org, tj@kernel.org, peterz@infradead.org, matt.fleming@intel.com,
	will.auld@intel.com, kanaka.d.juvva@intel.com, vikas.shivappa@linux.intel.com
Subject: [PATCH 09/10] x86/intel_rdt: Hot cpu support for Cache Allocation
Date: Thu, 4 Jun 2015 17:01:36 -0700
Message-Id: <1433462497-27087-10-git-send-email-vikas.shivappa@linux.intel.com>
In-Reply-To: <1433462497-27087-1-git-send-email-vikas.shivappa@linux.intel.com>
References: <1433462497-27087-1-git-send-email-vikas.shivappa@linux.intel.com>

This patch adds hot cpu support for Intel Cache Allocation. Support
includes updating the cache bitmask MSRs IA32_L3_QOS_n when a new CPU
package comes online. The IA32_L3_QOS_n MSRs are one per Class of
Service on each CPU package, and the new package's MSRs are
synchronized with the values already programmed on the existing
packages. The software cache for the IA32_PQR_ASSOC MSRs is also
updated during hot cpu notifications.

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
---
This is a new patch separating out the hot cpu updates to the PQR
software cache and the CBM MSRs.

Changes as per Thomas's feedback:
- Update rdt_cpumask, which has one cpu for each package, using
  topology_core_cpumask() instead of looping through the existing
  rdt_cpumask. (Note that the name topology_core_cpumask() is
  misleading; it actually returns the cores in a cpu package.)
- Arranged the code so that code relating to the same task is grouped
  together.
- Improved the search for the next online cpu sibling when maintaining
  rdt_cpumask, which has one cpu per package.
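
For reference only (not part of the patch), the stand-alone user-space
sketch below mimics what rdt_cpumask_update() and intel_rdt_cpu_exit()
in the diff do: keep one representative cpu per package, sync the
package's cache bitmask MSRs only when a package first comes online,
and hand the representative role to an online sibling when that cpu
goes down. The fixed 4-cpus-per-package topology, the array
bookkeeping and the fake_sync_package_msrs() stub are all made up for
illustration; the real code uses kernel cpumasks,
topology_core_cpumask() and wrmsr on IA32_L3_MASK_n.

/* Illustrative user-space analogy only -- not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS      8
#define CPUS_PER_PKG 4	/* assumed fixed topology for the sketch */

static bool cpu_online[NR_CPUS];	 /* which cpus are up */
static bool pkg_representative[NR_CPUS]; /* one 'true' per online package */

static int package_of(int cpu)
{
	return cpu / CPUS_PER_PKG;
}

/* Stand-in for rewriting the cache bitmask MSRs on a new package. */
static void fake_sync_package_msrs(int cpu)
{
	printf("sync cache bitmask MSRs on package %d (via cpu %d)\n",
	       package_of(cpu), cpu);
}

/* Analogous to intel_rdt_cpu_start(): cpu is coming online. */
static void cpu_starting(int cpu)
{
	int pkg = package_of(cpu), i;

	cpu_online[cpu] = true;
	for (i = pkg * CPUS_PER_PKG; i < (pkg + 1) * CPUS_PER_PKG; i++)
		if (pkg_representative[i])
			return;	/* package already has a representative */

	/* First cpu of a new package: adopt it and sync the MSRs once. */
	pkg_representative[cpu] = true;
	fake_sync_package_msrs(cpu);
}

/* Analogous to intel_rdt_cpu_exit(): cpu is about to go offline. */
static void cpu_down_prepare(int cpu)
{
	int pkg = package_of(cpu), i;

	cpu_online[cpu] = false;
	if (!pkg_representative[cpu])
		return;		/* nothing to hand over */

	pkg_representative[cpu] = false;
	for (i = pkg * CPUS_PER_PKG; i < (pkg + 1) * CPUS_PER_PKG; i++) {
		if (cpu_online[i]) {
			/* Hand the role to an online sibling in the package. */
			pkg_representative[i] = true;
			return;
		}
	}
}

int main(void)
{
	cpu_starting(0);	/* package 0 comes up: MSRs synced once */
	cpu_starting(1);	/* same package: no MSR work */
	cpu_starting(4);	/* package 1 comes up: MSRs synced once */
	cpu_down_prepare(0);	/* representative leaves: cpu 1 takes over */
	return 0;
}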

 arch/x86/kernel/cpu/intel_rdt.c | 84 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index f90e7ab..d0be46d 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#include <linux/cpu.h>
 #include 
 
 /*
@@ -313,13 +314,84 @@ out:
 	return err;
 }
 
-static inline void rdt_cpumask_update(int cpu)
+static inline bool rdt_cpumask_update(int cpu)
 {
 	cpumask_t tmp;
 
 	cpumask_and(&tmp, &rdt_cpumask, topology_core_cpumask(cpu));
-	if (cpumask_empty(&tmp))
+	if (cpumask_empty(&tmp)) {
 		cpumask_set_cpu(cpu, &rdt_cpumask);
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
+ * which are one per CLOSid except IA32_L3_MASK_0 on the current package.
+ */
+static inline void cbm_update_msrs(void)
+{
+	int maxid = boot_cpu_data.x86_rdt_max_closid;
+	unsigned int i;
+
+	/*
+	 * At cpureset, all bits of IA32_L3_MASK_n are set.
+	 * The index starts from one as there is no need
+	 * to update IA32_L3_MASK_0 as it belongs to root cgroup
+	 * whose cache mask is all 1s always.
+	 */
+	for (i = 1; i < maxid; i++) {
+		if (ccmap[i].clos_refcnt)
+			cbm_cpu_update((void *)i);
+	}
+}
+
+static inline void intel_rdt_cpu_start(int cpu)
+{
+	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
+
+	state->closid = 0;
+	mutex_lock(&rdt_group_mutex);
+	if (rdt_cpumask_update(cpu))
+		cbm_update_msrs();
+	mutex_unlock(&rdt_group_mutex);
+}
+
+static void intel_rdt_cpu_exit(unsigned int cpu)
+{
+	int i;
+
+	mutex_lock(&rdt_group_mutex);
+	if (!cpumask_test_and_clear_cpu(cpu, &rdt_cpumask)) {
+		mutex_unlock(&rdt_group_mutex);
+		return;
+	}
+
+	i = cpumask_any_online_but(topology_core_cpumask(cpu), cpu);
+	if (i < nr_cpu_ids)
+		cpumask_set_cpu(i, &rdt_cpumask);
+	mutex_unlock(&rdt_group_mutex);
+}
+
+static int intel_rdt_cpu_notifier(struct notifier_block *nb,
+				  unsigned long action, void *hcpu)
+{
+	unsigned int cpu = (unsigned long)hcpu;
+
+	switch (action) {
+	case CPU_STARTING:
+		intel_rdt_cpu_start(cpu);
+		break;
+	case CPU_DOWN_PREPARE:
+		intel_rdt_cpu_exit(cpu);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
 }
 
 static int __init intel_rdt_late_init(void)
@@ -358,8 +430,16 @@ static int __init intel_rdt_late_init(void)
 	ccm->cache_mask = (1ULL << max_cbm_len) - 1;
 	ccm->clos_refcnt = 1;
 
+	cpu_notifier_register_begin();
+
+	mutex_lock(&rdt_group_mutex);
 	for_each_online_cpu(i)
 		rdt_cpumask_update(i);
+	mutex_unlock(&rdt_group_mutex);
+
+	__hotcpu_notifier(intel_rdt_cpu_notifier, 0);
+
+	cpu_notifier_register_done();
 
 	static_key_slow_inc(&rdt_enable_key);
 	pr_info("Intel cache allocation enabled\n");
-- 
1.9.1