Subject: Re: [PATCH v2 1/6] ACPI/PPTT: Add Processor Properties Topology Table parsing
From: Xiongfeng Wang
To: Jeremy Linton <jeremy.linton@arm.com>
Date: Wed, 20 Sep 2017 15:15:53 +0800
Message-ID: <4c9cfa18-507b-01c4-7464-ff2b149fbb68@huawei.com>
In-Reply-To: <20170919184751.25110-2-jeremy.linton@arm.com>
References: <20170919184751.25110-1-jeremy.linton@arm.com>
 <20170919184751.25110-2-jeremy.linton@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Jeremy,

On 2017/9/20 2:47, Jeremy Linton wrote:
> ACPI 6.2 adds a new table, which describes how processing units
> are related to each other in a tree-like fashion. Caches are
> also sprinkled throughout the tree and describe the properties
> of the caches in relation to other caches and processing units.
>
> Add the code to parse the cache hierarchy and report the total
> number of levels of cache for a given core using
> acpi_find_last_cache_level(), as well as fill out the individual
> core's cache information with cache_setup_acpi() once the
> cpu_cacheinfo structure has been populated by the arch specific
> code.
>
> Further, report peers in the topology using setup_acpi_cpu_topology()
> to report a unique ID for each processing unit at a given level
> in the tree. These unique IDs can then be used to match related
> processing units which exist as threads, COD (clusters
> on die), within a given package, etc.
>
> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
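
The interface split looks sensible to me. To check my understanding of
how an arch is meant to consume it, here is a minimal sketch (my own,
not from this series; acpi_disabled is the usual global, the rest
follows the names above):

        /*
         * Hypothetical arch-side helper: use the PPTT cache depth when
         * ACPI is in use, and treat 0 as "no PPTT information, fall
         * back to probing the architected registers".
         */
        static int arch_last_cache_level(unsigned int cpu)
        {
                int levels = 0;

                if (!acpi_disabled)
                        levels = acpi_find_last_cache_level(cpu);

                return levels;
        }

Is that the intended calling convention?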
> ---
>  drivers/acpi/pptt.c | 458 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 458 insertions(+)
>  create mode 100644 drivers/acpi/pptt.c
>
> diff --git a/drivers/acpi/pptt.c b/drivers/acpi/pptt.c
> new file mode 100644
> index 000000000000..f7694fa1e0bd
> --- /dev/null
> +++ b/drivers/acpi/pptt.c
> @@ -0,0 +1,458 @@
> +/*
> + * Copyright (C) 2017, ARM
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * This file implements parsing of the Processor Properties Topology Table
> + * (PPTT), which is optionally used to describe the processor and cache
> + * topology. Due to the relative pointers used throughout the table, this
> + * doesn't leverage the existing subtable parsing in the kernel.
> + */
> +#define pr_fmt(fmt) "ACPI PPTT: " fmt
> +
> +#include <linux/acpi.h>
> +#include <linux/cacheinfo.h>
> +#include <acpi/processor.h>
> +
> +/*
> + * Given the PPTT table, find and verify that the subtable entry
> + * is located within the table.
> + */
> +static struct acpi_subtable_header *fetch_pptt_subtable(
> +        struct acpi_table_header *table_hdr, u32 pptt_ref)
> +{
> +        struct acpi_subtable_header *entry;
> +
> +        /* there isn't a subtable at reference 0 */
> +        if (!pptt_ref)
> +                return NULL;
> +
> +        if (pptt_ref + sizeof(struct acpi_subtable_header) > table_hdr->length)
> +                return NULL;
> +
> +        entry = (struct acpi_subtable_header *)((u8 *)table_hdr + pptt_ref);
> +
> +        if (pptt_ref + entry->length > table_hdr->length)
> +                return NULL;
> +
> +        return entry;
> +}
> +
> +static struct acpi_pptt_processor *fetch_pptt_node(
> +        struct acpi_table_header *table_hdr, u32 pptt_ref)
> +{
> +        return (struct acpi_pptt_processor *)fetch_pptt_subtable(table_hdr, pptt_ref);
> +}
> +
> +static struct acpi_pptt_cache *fetch_pptt_cache(
> +        struct acpi_table_header *table_hdr, u32 pptt_ref)
> +{
> +        return (struct acpi_pptt_cache *)fetch_pptt_subtable(table_hdr, pptt_ref);
> +}
> +
> +static struct acpi_subtable_header *acpi_get_pptt_resource(
> +        struct acpi_table_header *table_hdr,
> +        struct acpi_pptt_processor *node, int resource)
> +{
> +        u32 ref;
> +
> +        if (resource >= node->number_of_priv_resources)
> +                return NULL;
> +
> +        ref = *(u32 *)((u8 *)node + sizeof(struct acpi_pptt_processor) +
> +                       sizeof(u32) * resource);
> +
> +        return fetch_pptt_subtable(table_hdr, ref);
> +}
> +
> +/*
> + * Given a PPTT resource, verify that it is a cache node, then walk
> + * down each level of caches, counting how many levels are found
> + * as well as checking the cache type (icache, dcache, unified). If a
> + * level & type match, then we set found, and continue the search.
> + * Once the entire cache branch has been walked, return its max
> + * depth.
> + */
> +static int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr,
> +                                int local_level,
> +                                struct acpi_subtable_header *res,
> +                                struct acpi_pptt_cache **found,
> +                                int level, int type)
> +{
> +        struct acpi_pptt_cache *cache;
> +
> +        if (res->type != ACPI_PPTT_TYPE_CACHE)
> +                return 0;
> +
> +        cache = (struct acpi_pptt_cache *)res;
> +        while (cache) {
> +                local_level++;
> +
> +                if ((local_level == level) &&
> +                    (cache->flags & ACPI_PPTT_CACHE_TYPE_VALID) &&
> +                    ((cache->attributes & ACPI_PPTT_MASK_CACHE_TYPE) == type)) {
> +                        if (*found != NULL)
> +                                pr_err("Found duplicate cache level/type, unable to determine uniqueness\n");
> +
> +                        pr_debug("Found cache @ level %d\n", level);
> +                        *found = cache;
> +                        /*
> +                         * continue looking at this node's resource list
> +                         * to verify that we don't find a duplicate
> +                         * cache node.
> +                         */
> +                }
> +                cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache);
> +        }
> +        return local_level;
> +}
> +
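To make sure I read the reference handling right: every link in the
table is a byte offset from the start of the PPTT, and a processor
node's private resources are a trailing array of such offsets. In
other words (illustration only, this struct does not exist in the
patch):

        struct pptt_processor_with_refs {
                struct acpi_pptt_processor node;
                /*
                 * node.number_of_priv_resources offsets from the table
                 * start, each bounds-checked by fetch_pptt_subtable()
                 */
                u32 priv_resources[];
        };

so acpi_get_pptt_resource() is effectively reading
priv_resources[resource].
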
> +/*
> + * Given a CPU node, look for cache levels that exist at this level, and then
> + * for each cache node, count how many levels exist below (logically above) it.
> + * If a level and type are specified, and we find that level/type, abort
> + * processing and return the acpi_pptt_cache structure.
> + */
> +static struct acpi_pptt_cache *acpi_find_cache_level(
> +        struct acpi_table_header *table_hdr,
> +        struct acpi_pptt_processor *cpu_node,
> +        int *starting_level, int level, int type)
> +{
> +        struct acpi_subtable_header *res;
> +        int number_of_levels = *starting_level;
> +        int resource = 0;
> +        struct acpi_pptt_cache *ret = NULL;
> +        int local_level;
> +
> +        /* walk down from processor node */
> +        while ((res = acpi_get_pptt_resource(table_hdr, cpu_node, resource))) {
> +                resource++;
> +
> +                local_level = acpi_pptt_walk_cache(table_hdr, *starting_level,
> +                                                   res, &ret, level, type);
> +                /*
> +                 * We are looking for the max depth. Since it's possible for a
> +                 * given node to have resources with differing depths, verify
> +                 * that the depth we have found is the largest.
> +                 */
> +                if (number_of_levels < local_level)
> +                        number_of_levels = local_level;
> +        }
> +        if (number_of_levels > *starting_level)
> +                *starting_level = number_of_levels;
> +
> +        return ret;
> +}
> +
> +/*
> + * Given a processor node containing a processing unit, walk into it and count
> + * how many levels exist solely for it, and then walk up each level until we
> + * hit the root node (ignore the package level because it may be possible to
> + * have caches that exist across packages). Count the number of cache levels
> + * that exist at each level on the way up.
> + */
> +static int acpi_process_node(struct acpi_table_header *table_hdr,
> +                             struct acpi_pptt_processor *cpu_node)
> +{
> +        int total_levels = 0;
> +
> +        do {
> +                acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0);
> +                cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
> +        } while (cpu_node);
> +
> +        return total_levels;
> +}
> +
> +/*
> + * Find the subtable entry describing the provided processor.
> + */
> +static struct acpi_pptt_processor *acpi_find_processor_node(
> +        struct acpi_table_header *table_hdr,
> +        u32 acpi_cpu_id)
> +{
> +        struct acpi_subtable_header *entry;
> +        unsigned long table_end;
> +        struct acpi_pptt_processor *cpu_node;
> +
> +        table_end = (unsigned long)table_hdr + table_hdr->length;
> +        entry = (struct acpi_subtable_header *)((u8 *)table_hdr +
> +                                                sizeof(struct acpi_table_pptt));
> +
> +        /* find the processor structure associated with this cpuid */
> +        while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) < table_end) {
> +                cpu_node = (struct acpi_pptt_processor *)entry;
> +
> +                if ((entry->type == ACPI_PPTT_TYPE_PROCESSOR) &&
> +                    (cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID)) {
> +                        pr_debug("checking phy_cpu_id %d against acpi id %d\n",
> +                                 acpi_cpu_id, cpu_node->acpi_processor_id);
> +                        if (acpi_cpu_id == cpu_node->acpi_processor_id) {
> +                                /* found the correct entry */
> +                                pr_debug("match found!\n");
> +                                return (struct acpi_pptt_processor *)entry;
> +                        }
> +                }
> +
> +                if (entry->length == 0) {
> +                        pr_err("Invalid zero length subtable\n");
> +                        break;
> +                }
> +                entry = (struct acpi_subtable_header *)
> +                        ((u8 *)entry + entry->length);
> +        }
> +
> +        return NULL;
> +}
> +
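I walked a small made-up topology through acpi_process_node() to check
the level counting; with a core whose private L1 chains to an L2,
under a cluster that owns an L3:

        core node:    L1 -> next_level_of_cache -> L2    => depth 2
        cluster node: L3, no next level                  => depth 3
        package node: no cache resources                 => unchanged

acpi_process_node() returns 3, which matches what I would expect.
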
> +/*
> + * Given an acpi_pptt_processor node, walk up until we identify the
> + * package that the node is associated with, or we run out of levels
> + * to request.
> + */
> +static struct acpi_pptt_processor *acpi_find_processor_package_id(
> +        struct acpi_table_header *table_hdr,
> +        struct acpi_pptt_processor *cpu,
> +        int level)
> +{
> +        struct acpi_pptt_processor *prev_node;
> +
> +        while (cpu && level && !(cpu->flags & ACPI_PPTT_PHYSICAL_PACKAGE)) {
> +                pr_debug("level %d\n", level);
> +                prev_node = fetch_pptt_node(table_hdr, cpu->parent);
> +                if (prev_node == NULL)
> +                        break;
> +                cpu = prev_node;
> +                level--;
> +        }
> +        return cpu;
> +}
> +
> +static int acpi_parse_pptt(struct acpi_table_header *table_hdr, u32 acpi_cpu_id)
> +{
> +        int number_of_levels = 0;
> +        struct acpi_pptt_processor *cpu;
> +
> +        cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id);
> +        if (cpu)
> +                number_of_levels = acpi_process_node(table_hdr, cpu);
> +
> +        return number_of_levels;
> +}
> +
> +#define ACPI_6_2_CACHE_TYPE_DATA              (0x0)
> +#define ACPI_6_2_CACHE_TYPE_INSTR             (1<<2)
> +#define ACPI_6_2_CACHE_TYPE_UNIFIED           (1<<3)
> +#define ACPI_6_2_CACHE_POLICY_WB              (0x0)
> +#define ACPI_6_2_CACHE_POLICY_WT              (1<<4)
> +#define ACPI_6_2_CACHE_READ_ALLOCATE          (0x0)
> +#define ACPI_6_2_CACHE_WRITE_ALLOCATE         (0x01)
> +#define ACPI_6_2_CACHE_RW_ALLOCATE            (0x02)
> +
> +static u8 acpi_cache_type(enum cache_type type)
> +{
> +        switch (type) {
> +        case CACHE_TYPE_DATA:
> +                pr_debug("Looking for data cache\n");
> +                return ACPI_6_2_CACHE_TYPE_DATA;
> +        case CACHE_TYPE_INST:
> +                pr_debug("Looking for instruction cache\n");
> +                return ACPI_6_2_CACHE_TYPE_INSTR;
> +        default:
> +                pr_debug("Unknown cache type, assume unified\n");
> +        case CACHE_TYPE_UNIFIED:
> +                pr_debug("Looking for unified cache\n");
> +                return ACPI_6_2_CACHE_TYPE_UNIFIED;
> +        }
> +}
> +
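For other reviewers, my decoding of the attributes byte that the
ACPI_6_2_* values above encode (worth cross-checking against the
Cache Type Structure in the 6.2 spec):

        bits [1:0]  allocation type  (0 = read, 1 = write, 2 = read/write)
        bits [3:2]  cache type       (0 = data, 1 = instruction, 2 or 3 = unified)
        bit  [4]    write policy     (0 = write-back, 1 = write-through)

which lines up with the ACPI_PPTT_MASK_* masks used in this file.
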
> +/* find the ACPI node describing the cache type/level for the given CPU */
> +static struct acpi_pptt_cache *acpi_find_cache_node(
> +        struct acpi_table_header *table_hdr, u32 acpi_cpu_id,
> +        enum cache_type type, unsigned int level,
> +        struct acpi_pptt_processor **node)
> +{
> +        int total_levels = 0;
> +        struct acpi_pptt_cache *found = NULL;
> +        struct acpi_pptt_processor *cpu_node;
> +        u8 acpi_type = acpi_cache_type(type);
> +
> +        pr_debug("Looking for CPU %d's level %d cache type %d\n",
> +                 acpi_cpu_id, level, acpi_type);
> +
> +        cpu_node = acpi_find_processor_node(table_hdr, acpi_cpu_id);
> +        if (!cpu_node)
> +                return NULL;
> +
> +        do {
> +                found = acpi_find_cache_level(table_hdr, cpu_node,
> +                                              &total_levels, level, acpi_type);
> +                *node = cpu_node;
> +                cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
> +        } while ((cpu_node) && (!found));
> +
> +        return found;
> +}
> +
> +int acpi_find_last_cache_level(unsigned int cpu)
> +{
> +        u32 acpi_cpu_id;
> +        struct acpi_table_header *table;
> +        int number_of_levels = 0;
> +        acpi_status status;
> +
> +        pr_debug("Cache Setup find last level cpu=%d\n", cpu);
> +
> +        acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
> +        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> +        if (ACPI_FAILURE(status)) {
> +                pr_err_once("No PPTT table found, cache topology may be inaccurate\n");
> +        } else {
> +                number_of_levels = acpi_parse_pptt(table, acpi_cpu_id);
> +                acpi_put_table(table);
> +        }
> +        pr_debug("Cache Setup find last level level=%d\n", number_of_levels);
> +
> +        return number_of_levels;
> +}
> +
> +/*
> + * The ACPI spec implies that the fields in the cache structures are used to
> + * extend and correct the information probed from the hardware. In the case
> + * of arm64, the CCSIDR probing has been removed because it might be
> + * incorrect.
> + */
> +static void update_cache_properties(struct cacheinfo *this_leaf,
> +                                    struct acpi_pptt_cache *found_cache,
> +                                    struct acpi_pptt_processor *cpu_node)
> +{
> +        if (found_cache->flags & ACPI_PPTT_SIZE_PROPERTY_VALID)
> +                this_leaf->size = found_cache->size;
> +        if (found_cache->flags & ACPI_PPTT_LINE_SIZE_VALID)
> +                this_leaf->coherency_line_size = found_cache->line_size;
> +        if (found_cache->flags & ACPI_PPTT_NUMBER_OF_SETS_VALID)
> +                this_leaf->number_of_sets = found_cache->number_of_sets;
> +        if (found_cache->flags & ACPI_PPTT_ASSOCIATIVITY_VALID)
> +                this_leaf->ways_of_associativity = found_cache->associativity;
> +        if (found_cache->flags & ACPI_PPTT_WRITE_POLICY_VALID)
> +                switch (found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY) {
> +                case ACPI_6_2_CACHE_POLICY_WT:
> +                        this_leaf->attributes = CACHE_WRITE_THROUGH;
> +                        break;
> +                case ACPI_6_2_CACHE_POLICY_WB:
> +                        this_leaf->attributes = CACHE_WRITE_BACK;
> +                        break;
> +                default:
> +                        pr_err("Unknown ACPI cache policy %d\n",
> +                               found_cache->attributes & ACPI_PPTT_MASK_WRITE_POLICY);
> +                }
> +        if (found_cache->flags & ACPI_PPTT_ALLOCATION_TYPE_VALID)
> +                switch (found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE) {
> +                case ACPI_6_2_CACHE_READ_ALLOCATE:
> +                        this_leaf->attributes |= CACHE_READ_ALLOCATE;
> +                        break;
> +                case ACPI_6_2_CACHE_WRITE_ALLOCATE:
> +                        this_leaf->attributes |= CACHE_WRITE_ALLOCATE;
> +                        break;
> +                case ACPI_6_2_CACHE_RW_ALLOCATE:
> +                        this_leaf->attributes |=
> +                                CACHE_READ_ALLOCATE|CACHE_WRITE_ALLOCATE;
> +                        break;
> +                default:
> +                        pr_err("Unknown ACPI cache allocation policy %d\n",
> +                               found_cache->attributes & ACPI_PPTT_MASK_ALLOCATION_TYPE);
> +                }
> +}
> +
> +static void cache_setup_acpi_cpu(struct acpi_table_header *table,
> +                                 unsigned int cpu)
> +{
> +        struct acpi_pptt_cache *found_cache;
> +        struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
> +        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
> +        struct cacheinfo *this_leaf;
> +        unsigned int index = 0;
> +        struct acpi_pptt_processor *cpu_node = NULL;
> +
> +        while (index < get_cpu_cacheinfo(cpu)->num_leaves) {
> +                this_leaf = this_cpu_ci->info_list + index;
> +                found_cache = acpi_find_cache_node(table, acpi_cpu_id,
> +                                                   this_leaf->type,
> +                                                   this_leaf->level,
> +                                                   &cpu_node);
> +                pr_debug("found = %p %p\n", found_cache, cpu_node);
> +                if (found_cache)
> +                        update_cache_properties(this_leaf,
> +                                                found_cache,
> +                                                cpu_node);
> +
> +                index++;
> +        }
> +}
> +
> +static int topology_setup_acpi_cpu(struct acpi_table_header *table,
> +                                   unsigned int cpu, int level)
> +{
> +        struct acpi_pptt_processor *cpu_node;
> +        u32 acpi_cpu_id = acpi_cpu_get_madt_gicc(cpu)->uid;
> +
> +        cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
> +        if (cpu_node) {
> +                cpu_node = acpi_find_processor_package_id(table, cpu_node, level);
> +                return (int)((u8 *)cpu_node - (u8 *)table);
> +        }
> +        pr_err_once("PPTT table found, but unable to locate core for %d\n",
> +                    cpu);
> +        return -ENOENT;

Can we return -1 when the PPTT doesn't exist, so that we can still get
topology info from the MPIDR? 'store_cpu_topology()' determines
whether the cpu topology has been populated by checking whether
cluster_id is -1; if cluster_id is not -1, it won't read topology info
from the MPIDR. Or maybe we could change 'store_cpu_topology()' as
well, so that it reads topology info from the MPIDR whenever
cluster_id is less than zero (sketch below).

> +}
> +
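If we went the second way, the change I have in mind would be
something like this (untested sketch against the current
arch/arm64/kernel/topology.c):

        void store_cpu_topology(unsigned int cpuid)
        {
                struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
                u64 mpidr;

                /* was: if (cpuid_topo->cluster_id != -1) */
                if (cpuid_topo->cluster_id >= 0)
                        goto topology_populated;

                /*
                 * ... existing MPIDR_EL1 parsing and the
                 * topology_populated label stay unchanged ...
                 */
        }

so any negative cluster_id (including an -ENOENT left behind by the
ACPI path) still falls back to the MPIDR.
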
> +/*
> + * Simply assign an ACPI cache entry to each known CPU cache entry;
> + * determining which entries are shared is done later.
> + */
> +int cache_setup_acpi(unsigned int cpu)
> +{
> +        struct acpi_table_header *table;
> +        acpi_status status;
> +
> +        pr_debug("Cache Setup ACPI cpu %d\n", cpu);
> +
> +        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> +        if (ACPI_FAILURE(status)) {
> +                pr_err_once("No PPTT table found, cache topology may be inaccurate\n");
> +                return -ENOENT;
> +        }
> +
> +        cache_setup_acpi_cpu(table, cpu);
> +        acpi_put_table(table);
> +
> +        return status;
> +}
> +
> +/*
> + * Determine a topology unique ID for each thread/core/cluster/socket/etc.
> + * This ID can then be used to group peers.
> + */
> +int setup_acpi_cpu_topology(unsigned int cpu, int level)
> +{
> +        struct acpi_table_header *table;
> +        acpi_status status;
> +        int retval;

Can we add a static int array to record the IDs already assigned at
each level? That way the IDs can start from zero and be consecutive
(rough sketch at the end of this mail).

> +
> +        status = acpi_get_table(ACPI_SIG_PPTT, 0, &table);
> +        if (ACPI_FAILURE(status)) {
> +                pr_err_once("No PPTT table found, cpu topology may be inaccurate\n");
> +                return -ENOENT;
> +        }
> +        retval = topology_setup_acpi_cpu(table, cpu, level);
> +        pr_debug("Topology Setup ACPI cpu %d, level %d ret = %d\n",
> +                 cpu, level, retval);
> +        acpi_put_table(table);
> +
> +        return retval;
> +}
>
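
Something along these lines is what I meant (sketch only; the names
are invented and it assumes the single-threaded early boot path):

        #define MAX_TOPO_LEVELS 4       /* assumption for the sketch */

        static int seen_offset[MAX_TOPO_LEVELS][NR_CPUS];
        static int nr_seen[MAX_TOPO_LEVELS];

        /* map a PPTT node offset to a small, consecutive per-level ID */
        static int acpi_topology_remap_id(int level, int offset)
        {
                int i;

                if (level >= MAX_TOPO_LEVELS)
                        return -EINVAL;

                for (i = 0; i < nr_seen[level]; i++)
                        if (seen_offset[level][i] == offset)
                                return i;

                seen_offset[level][nr_seen[level]] = offset;
                return nr_seen[level]++;
        }

topology_setup_acpi_cpu() could then return
acpi_topology_remap_id(level, (u8 *)cpu_node - (u8 *)table) instead of
the raw table offset.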