Date: Thu, 1 Mar 2018 15:52:16 +0000
From: Morten Rasmussen
To: Jeremy Linton
Cc: linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    sudeep.holla@arm.com, lorenzo.pieralisi@arm.com, hanjun.guo@linaro.org,
    rjw@rjwysocki.net, will.deacon@arm.com, catalin.marinas@arm.com,
    gregkh@linuxfoundation.org, mark.rutland@arm.com,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
    wangxiongfeng2@huawei.com, vkilari@codeaurora.org, ahs3@redhat.com,
    dietmar.eggemann@arm.com, palmer@sifive.com, lenb@kernel.org,
    john.garry@huawei.com, austinwc@codeaurora.org, tnowicki@caviumnetworks.com
Subject: Re: [PATCH v7 13/13] arm64: topology: divorce MC scheduling domain from core_siblings
Message-ID: <20180301155216.GI4589@e105550-lin.cambridge.arm.com>
References: <20180228220619.6992-1-jeremy.linton@arm.com>
 <20180228220619.6992-14-jeremy.linton@arm.com>
In-Reply-To: <20180228220619.6992-14-jeremy.linton@arm.com>

Hi Jeremy,

On Wed, Feb 28, 2018 at 04:06:19PM -0600, Jeremy Linton wrote:
> Now that we have an accurate view of the physical topology
> we need to represent it correctly to the scheduler. In the
> case of NUMA in socket, we need to assure that the sched domain
> we build for the MC layer isn't larger than the DIE above it.

MC shouldn't be larger than any of the NUMA domains either.

> To do this correctly, we should really base that on the cache
> topology immediately below the NUMA node (for NUMA in socket)
> or below the physical package for normal NUMA configurations.

That means we wouldn't support multi-die NUMA nodes?

> This patch creates a set of early cache_siblings masks, then
> when the scheduler requests the coregroup mask we pick the
> smaller of the physical package siblings, or the numa siblings
> and locate the largest cache which is an entire subset of
> those siblings. If we are unable to find a proper subset of
> cores then we retain the original behavior and return the
> core_sibling list.

IIUC, for numa-in-package it is a strict requirement that there is a
cache that spans the entire NUMA node? For example, having a NUMA node
consisting of two clusters with per-cluster caches only wouldn't be
supported?

>
> Signed-off-by: Jeremy Linton
> ---
>  arch/arm64/include/asm/topology.h |  5 +++
>  arch/arm64/kernel/topology.c      | 64 +++++++++++++++++++++++++++++++++++++++
>  2 files changed, 69 insertions(+)
>
> diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
> index 6b10459e6905..08db3e4e44e1 100644
> --- a/arch/arm64/include/asm/topology.h
> +++ b/arch/arm64/include/asm/topology.h
> @@ -4,12 +4,17 @@
>
>  #include
>
> +#define MAX_CACHE_CHECKS 4
> +
>  struct cpu_topology {
>          int thread_id;
>          int core_id;
>          int package_id;
> +        int cache_id[MAX_CACHE_CHECKS];
>          cpumask_t thread_sibling;
>          cpumask_t core_sibling;
> +        cpumask_t cache_siblings[MAX_CACHE_CHECKS];
> +        int cache_level;
>  };
>
>  extern struct cpu_topology cpu_topology[NR_CPUS];
> diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
> index bd1aae438a31..1809dc9d347c 100644
> --- a/arch/arm64/kernel/topology.c
> +++ b/arch/arm64/kernel/topology.c
> @@ -212,8 +212,42 @@ static int __init parse_dt_topology(void)
>  struct cpu_topology cpu_topology[NR_CPUS];
>  EXPORT_SYMBOL_GPL(cpu_topology);
>
> +static void find_llc_topology_for_cpu(int cpu)

Isn't this more about finding core/node siblings?

Or is it a requirement that the last-level cache spans exactly one NUMA
node? For example, a package-level cache isn't allowed for
numa-in-package?
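To make the scenario I have in mind concrete, here is an untested,
standalone userspace sketch (made-up masks, not the patch's code) that
mimics the subset checks for a hypothetical single package of eight
CPUs split into two NUMA nodes, each with its own L2, plus a
package-wide L3 as the last-level cache:

/*
 * Untested userspace illustration only; masks are plain bitmasks and
 * the topology is hypothetical: one package (CPUs 0-7), two NUMA nodes
 * (0-3 and 4-7), per-node L2s, package-wide L3 as the LLC.
 */
#include <stdio.h>

/* true if every bit set in a is also set in b (cpumask_subset analogue) */
static int subset(unsigned long a, unsigned long b)
{
        return (a & ~b) == 0;
}

int main(void)
{
        unsigned long core_sibling = 0xff;      /* whole package       */
        unsigned long node_mask    = 0x0f;      /* NUMA node of CPU0   */
        unsigned long l2_siblings  = 0x0f;      /* per-node L2         */
        unsigned long l3_siblings  = 0xff;      /* package-wide L3/LLC */

        /* numa-in-package: the node is a subset of the package, so the
         * node mask is what we end up searching within */
        if (!subset(node_mask, core_sibling))
                node_mask = core_sibling;

        printf("L2 is subset of node: %d\n", subset(l2_siblings, node_mask));
        printf("L3 is subset of node: %d\n", subset(l3_siblings, node_mask));
        return 0;
}

If I read the patch right, the package-wide L3 fails the subset test
and is never picked, so the per-node L2 ends up defining the MC level,
which is why I'm asking whether a package-level LLC is simply not meant
to be supported for numa-in-package.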
> +{
> +        /* first determine if we are a NUMA in package */
> +        const cpumask_t *node_mask = cpumask_of_node(cpu_to_node(cpu));
> +        int indx;
> +
> +        if (!cpumask_subset(node_mask, &cpu_topology[cpu].core_sibling)) {
> +                /* not numa in package, lets use the package siblings */
> +                node_mask = &cpu_topology[cpu].core_sibling;
> +        }
> +
> +        /*
> +         * node_mask should represent the smallest package/numa grouping
> +         * lets search for the largest cache smaller than the node_mask.
> +         */
> +        for (indx = 0; indx < MAX_CACHE_CHECKS; indx++) {
> +                cpumask_t *cache_sibs = &cpu_topology[cpu].cache_siblings[indx];
> +
> +                if (cpu_topology[cpu].cache_id[indx] < 0)
> +                        continue;
> +
> +                if (cpumask_subset(cache_sibs, node_mask))
> +                        cpu_topology[cpu].cache_level = indx;

I don't think this guarantees that the cache level we found matches the
NUMA node exactly. Taking the two-cluster NUMA node example from above,
we would set cache_level to point at the per-cluster cache, as it is a
subset of the NUMA node, but it would only span half of the node. Or am
I missing something?

> +        }
> +}
> +
>  const struct cpumask *cpu_coregroup_mask(int cpu)
>  {
> +        int *llc = &cpu_topology[cpu].cache_level;
> +
> +        if (*llc == -1)
> +                find_llc_topology_for_cpu(cpu);
> +
> +        if (*llc != -1)
> +                return &cpu_topology[cpu].cache_siblings[*llc];
> +
>          return &cpu_topology[cpu].core_sibling;
>  }
>
> @@ -221,6 +255,7 @@ static void update_siblings_masks(unsigned int cpuid)
>  {
>          struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
>          int cpu;
> +        int idx;
>
>          /* update core and thread sibling masks */
>          for_each_possible_cpu(cpu) {
> @@ -229,6 +264,16 @@ static void update_siblings_masks(unsigned int cpuid)
>                  if (cpuid_topo->package_id != cpu_topo->package_id)
>                          continue;
>
> +                for (idx = 0; idx < MAX_CACHE_CHECKS; idx++) {
> +                        cpumask_t *lsib;
> +                        int cput_id = cpuid_topo->cache_id[idx];
> +
> +                        if (cput_id == cpu_topo->cache_id[idx]) {
> +                                lsib = &cpuid_topo->cache_siblings[idx];
> +                                cpumask_set_cpu(cpu, lsib);
> +                        }

Shouldn't the cache_id validity be checked here? I don't think it
breaks anything though.

Overall, I think this is more or less in line with the MC domain
shrinking I just mentioned in the v6 discussion. It is mostly the
corner cases and assumptions about the system topology I'm not sure
about.

Morten