Date: Fri, 18 Dec 2015 13:36:30 -0800
From: tip-bot for Fenghua Yu
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, vikas.shivappa@linux.intel.com,
    fenghua.yu@intel.com, tglx@linutronix.de, mingo@kernel.org,
    hpa@zytor.com
In-Reply-To: <1450392376-6397-10-git-send-email-fenghua.yu@intel.com>
References: <1450392376-6397-10-git-send-email-fenghua.yu@intel.com>
Subject: [tip:x86/cache] x86/intel_rdt: Intel Haswell Cache Allocation enumeration

Commit-ID:  8741b655628d89380bfbe0ded7a83c0bc2293a72
Gitweb:     http://git.kernel.org/tip/8741b655628d89380bfbe0ded7a83c0bc2293a72
Author:     Fenghua Yu <fenghua.yu@intel.com>
AuthorDate: Thu, 17 Dec 2015 14:46:14 -0800
Committer:  H. Peter Anvin <hpa@zytor.com>
CommitDate: Fri, 18 Dec 2015 13:17:56 -0800

x86/intel_rdt: Intel Haswell Cache Allocation enumeration

From: Vikas Shivappa <vikas.shivappa@linux.intel.com>

This patch is specific to Intel Haswell (HSW) server SKUs. Cache
Allocation on HSW servers needs to be enumerated separately because HSW
lacks CPUID enumeration support for Cache Allocation. The patch probes
for the feature by writing a CLOSid (Class of Service ID) into the high
32 bits of IA32_PQR_MSR and checking whether the bits stick. The probe
is only attempted after confirming that the CPU is an HSW server.

The other hardcoded values are:

- The L3 cache bit mask must be at least two bits wide.
- The maximum number of CLOSids supported is always 4.
- The maximum number of bits supported in the cache bit mask is always 20.

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Link: http://lkml.kernel.org/r/1450392376-6397-10-git-send-email-fenghua.yu@intel.com
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
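The probe sequence described above (read IA32_PQR_ASSOC, flip a CLOSid
bit, write it back, and test whether it sticks) can be reproduced from
user space through the msr driver for experimentation. The sketch below
is illustrative only and not part of the patch: it assumes root
privileges, a loaded msr module (modprobe msr), and uses 0x0c8f as the
IA32_PQR_ASSOC MSR number per the Intel SDM. Hand-poking MSRs can
confuse a running kernel, so treat it as a throwaway test on
expendable hardware.

/* pqr_probe.c - user-space sketch of the HSW CLOSid probe idea. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_PQR_ASSOC 0x0c8f	/* per the Intel SDM */

int main(void)
{
	uint64_t old, tmp, cur;
	/* The msr driver exposes one device per CPU; the file offset
	 * selects the MSR index for pread()/pwrite(). */
	int fd = open("/dev/cpu/0/msr", O_RDWR);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}
	if (pread(fd, &old, sizeof(old), MSR_IA32_PQR_ASSOC) != sizeof(old)) {
		perror("rdmsr");
		return 1;
	}

	/* Flip bit 32, the lowest CLOSid bit, mirroring the
	 * h_old ^ 0x1U step in cache_alloc_hsw_probe(). */
	tmp = old ^ (1ULL << 32);
	if (pwrite(fd, &tmp, sizeof(tmp), MSR_IA32_PQR_ASSOC) != sizeof(tmp)) {
		perror("wrmsr");
		return 1;
	}
	if (pread(fd, &cur, sizeof(cur), MSR_IA32_PQR_ASSOC) != sizeof(cur)) {
		perror("rdmsr");
		return 1;
	}

	/* Restore the original value before reporting the result. */
	pwrite(fd, &old, sizeof(old), MSR_IA32_PQR_ASSOC);

	printf("CLOSid bits %s\n", cur == tmp ? "stick" : "do not stick");
	close(fd);
	return 0;
}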
 arch/x86/kernel/cpu/intel_rdt.c | 59 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 31f8588..ecaf8e6 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -38,6 +38,10 @@ static struct clos_cbm_table *cctable;
  */
 unsigned long *closmap;
 /*
+ * Minimum bits required in Cache bitmask.
+ */
+static unsigned int min_bitmask_len = 1;
+/*
  * Mask of CPUs for writing CBM values. We only need one CPU per-socket.
  */
 static cpumask_t rdt_cpumask;
@@ -54,6 +58,57 @@ struct rdt_remote_data {
 	u64 val;
 };
 
+/*
+ * cache_alloc_hsw_probe() - Have to probe for Intel Haswell server CPUs,
+ * as they do not have CPUID enumeration support for Cache Allocation.
+ *
+ * Probes by writing to the high 32 bits (CLOSid) of the IA32_PQR_MSR and
+ * testing if the bits stick. Max CLOSids is always 4 and max cbm length
+ * is always 20 on HSW server parts. The minimum cache bitmask length
+ * allowed for HSW server is always 2 bits. Hardcode all of them.
+ */
+static inline bool cache_alloc_hsw_probe(void)
+{
+	u32 l, h_old, h_new, h_tmp;
+
+	if (rdmsr_safe(MSR_IA32_PQR_ASSOC, &l, &h_old))
+		return false;
+
+	/*
+	 * Default value is always 0 if feature is present.
+	 */
+	h_tmp = h_old ^ 0x1U;
+	if (wrmsr_safe(MSR_IA32_PQR_ASSOC, l, h_tmp) ||
+	    rdmsr_safe(MSR_IA32_PQR_ASSOC, &l, &h_new))
+		return false;
+
+	if (h_tmp != h_new)
+		return false;
+
+	wrmsr_safe(MSR_IA32_PQR_ASSOC, l, h_old);
+
+	boot_cpu_data.x86_cache_max_closid = 4;
+	boot_cpu_data.x86_cache_max_cbm_len = 20;
+	min_bitmask_len = 2;
+
+	return true;
+}
+
+static inline bool cache_alloc_supported(struct cpuinfo_x86 *c)
+{
+	if (cpu_has(c, X86_FEATURE_CAT_L3))
+		return true;
+
+	/*
+	 * Probe for Haswell server CPUs.
+	 */
+	if (c->x86 == 0x6 && c->x86_model == 0x3f)
+		return cache_alloc_hsw_probe();
+
+	return false;
+}
+
+
 void __intel_rdt_sched_in(void *dummy)
 {
 	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);
@@ -126,7 +181,7 @@ static bool cbm_validate(unsigned long var)
 	unsigned long first_bit, zero_bit;
 	u64 max_cbm;
 
-	if (bitmap_weight(&var, max_cbm_len) < 1)
+	if (bitmap_weight(&var, max_cbm_len) < min_bitmask_len)
 		return false;
 
 	max_cbm = (1ULL << max_cbm_len) - 1;
@@ -310,7 +365,7 @@ static int __init intel_rdt_late_init(void)
 	u32 maxid, max_cbm_len;
 	int err = 0, size, i;
 
-	if (!cpu_has(c, X86_FEATURE_CAT_L3))
+	if (!cache_alloc_supported(c))
 		return -ENODEV;
 
 	maxid = c->x86_cache_max_closid;
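Stepping back to the cbm_validate() hunk above: after a successful HSW
probe, min_bitmask_len is 2, so a single-bit mask such as 0x1 is now
rejected on HSW servers while remaining valid on CPUID-enumerated
parts. The standalone sketch below models that check; the contiguity
test is inferred from the first_bit/zero_bit locals visible in the
hunk, so everything past the bitmap_weight() line is an assumption
about the rest of the function, not the kernel's exact code.

/* cbm_check.c - approximate, user-space model of cbm_validate(). */
#include <stdbool.h>
#include <stdio.h>

static unsigned int max_cbm_len = 20;    /* HSW server: 20-bit CBM    */
static unsigned int min_bitmask_len = 2; /* HSW server: >= 2 bits set */

static bool cbm_validate(unsigned long var)
{
	unsigned long max_cbm = (1UL << max_cbm_len) - 1;
	unsigned long low, contig;

	if (__builtin_popcountl(var) < (int)min_bitmask_len)
		return false;		/* too few bits set */
	if (var & ~max_cbm)
		return false;		/* exceeds the supported mask width */

	/* The set bits must be contiguous: shift the mask down to bit 0;
	 * the result plus one must then be a power of two. */
	low = var & -var;		/* isolate the lowest set bit */
	contig = var / low;
	return (contig & (contig + 1)) == 0;
}

int main(void)
{
	printf("0x3 -> %d\n", cbm_validate(0x3)); /* 2 contiguous bits: ok */
	printf("0x1 -> %d\n", cbm_validate(0x1)); /* 1 bit: rejected       */
	printf("0x5 -> %d\n", cbm_validate(0x5)); /* gap: rejected         */
	return 0;
}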