Date: Fri, 7 Oct 2016 16:54:14 -0700
From: "Luck, Tony"
To: Fenghua Yu
Cc: Thomas Gleixner, "H. Peter Anvin", Ingo Molnar, Peter Zijlstra,
	Stephane Eranian, Borislav Petkov, Dave Hansen, Nilay Vaish,
	Shaohua Li, David Carrillo-Cisneros, Ravi V Shankar, Sai Prakhya,
	Vikas Shivappa, linux-kernel, x86
Subject: [RFC PATCH 19/18] x86/intel_rdt: Add support for L2 cache allocation
Message-ID: <20161007235413.GA23113@intel.com>
References: <1475894763-64683-1-git-send-email-fenghua.yu@intel.com>
In-Reply-To: <1475894763-64683-1-git-send-email-fenghua.yu@intel.com>
User-Agent: Mutt/1.5.23 (2014-03-12)

Just in case you think that part 0010 is a bit complex, setting up all
that infrastructure for just the L3 cache and using for_each_rdt_resource()
all over the place to loop over just one thing: here's the payoff.

Untested, because I don't have a machine handy that supports L2 ... but
this should be all that needs to change for L2 cache allocation (either
as an alternative to L3, or in combination with it).

The next feature after L2 is not a cache, so rdt_resource will have to
grow some new flags, or perhaps some "->resource_ops()" functions, to
handle the pieces that work a bit differently. The fundamentals should
still apply, though.
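To make the table-driven idea concrete, here is a minimal stand-alone
model of the pattern. This is only an illustration, not the kernel code:
the struct is trimmed to two fields and the for_each_rdt_resource() body
is an assumption based on the NULL-terminated table in the diff below.

#include <stdio.h>

/* Trimmed stand-in for the kernel's struct rdt_resource (illustration only). */
struct rdt_resource {
	const char	*name;
	int		cache_level;
};

/* One table entry per resource, ended by a sentinel, as in the diff below. */
static struct rdt_resource rdt_resources_all[] = {
	{ .name = "L3", .cache_level = 3 },
	{ .name = "L2", .cache_level = 2 },
	{ .name = NULL },	/* NULL terminated */
};

/* Assumed iterator: the real macro's body may differ, but the idea is the same. */
#define for_each_rdt_resource(r) \
	for ((r) = rdt_resources_all; (r)->name; (r)++)

int main(void)
{
	struct rdt_resource *r;

	/* Generic code never hard-codes a particular cache level. */
	for_each_rdt_resource(r)
		printf("%s: level %d cache\n", r->name, r->cache_level);
	return 0;
}

Adding another resource is then just one more initializer plus its
enumeration code, which is exactly what the patch below does for L2.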
Not-signed-off-yet: Tony Luck
---
 arch/x86/include/asm/intel_rdt.h |  2 ++
 arch/x86/kernel/cpu/intel_rdt.c  | 23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/arch/x86/include/asm/intel_rdt.h b/arch/x86/include/asm/intel_rdt.h
index aefa3a655408..641d32db9d74 100644
--- a/arch/x86/include/asm/intel_rdt.h
+++ b/arch/x86/include/asm/intel_rdt.h
@@ -97,6 +97,7 @@ struct rdt_resource {

 #define IA32_L3_QOS_CFG		0xc81
 #define IA32_L3_CBM_BASE	0xc90
+#define IA32_L2_CBM_BASE	0xd10

 /**
  * struct rdt_domain - group of cpus sharing an RDT resource
@@ -134,6 +135,7 @@ extern struct rdt_resource rdt_resources_all[];

 enum {
 	RDT_RESOURCE_L3,
+	RDT_RESOURCE_L2,
 };

 /* Maximum CLOSID allowed across all enabled resoources */
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 1afca46e080d..c457641c49b6 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -50,6 +50,13 @@ struct rdt_resource rdt_resources_all[] = {
 		.cache_level	= 3
 	},
 	{
+		.name		= "L2",
+		.domains	= domain_init(RDT_RESOURCE_L2),
+		.msr_base	= IA32_L2_CBM_BASE,
+		.min_cbm_bits	= 1,
+		.cache_level	= 2
+	},
+	{
 		/* NULL terminated */
 	}
 };
@@ -122,6 +129,22 @@ static inline bool get_rdt_resources(struct cpuinfo_x86 *c)
 		ret = true;
 	}

+	if (cpu_has(c, X86_FEATURE_CAT_L2)) {
+		union cpuid_0x10_1_eax eax;
+		union cpuid_0x10_1_edx edx;
+		u32 ebx, ecx;
+
+		/* CPUID 0x10.2 fields are same format at 0x10.1 */
+		r = &rdt_resources_all[RDT_RESOURCE_L2];
+		cpuid_count(0x00000010, 2, &eax.full, &ebx, &ecx, &edx.full);
+		r->max_closid = edx.split.cos_max + 1;
+		r->num_closid = r->max_closid;
+		r->cbm_len = eax.split.cbm_len + 1;
+		r->max_cbm = BIT_MASK(eax.split.cbm_len + 1) - 1;
+		r->enabled = true;
+
+		ret = true;
+	}
 	return ret;
 }
--
2.5.0
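For anyone who wants to see what the new enumeration path reads on their
own hardware: the second hunk above queries CPUID leaf 0x10, sub-leaf 2.
Here is a stand-alone user-space sketch of the same query. It is not
part of the patch; the bit positions follow the eax.split/edx.split
unions used above and the usual "value minus one" encoding, so treat
them as an assumption rather than a reference.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid_max(0, NULL) < 0x10) {
		puts("CPUID leaf 0x10 not available");
		return 1;
	}

	/* Sub-leaf 0, EBX bit 2 advertises L2 cache allocation. */
	__cpuid_count(0x10, 0, eax, ebx, ecx, edx);
	if (!(ebx & (1 << 2))) {
		puts("No L2 cache allocation on this CPU");
		return 1;
	}

	/* Sub-leaf 2: the layout the patch assumes for L2. */
	__cpuid_count(0x10, 2, eax, ebx, ecx, edx);
	printf("L2 CBM length: %u bits\n", (eax & 0x1f) + 1);
	printf("L2 CLOSIDs   : %u\n", (edx & 0xffff) + 1);
	return 0;
}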