Date: Mon, 19 Oct 2020 14:41:57 +0100
From: Jonathan Cameron
To: Morten Rasmussen
CC: Peter Zijlstra, Len Brown, Greg Kroah-Hartman, Sudeep Holla,
 Will Deacon, Brice Goglin, Jerome Glisse
Subject: Re: [RFC PATCH] topology: Represent clusters of CPUs within a die.
Message-ID: <20201019134157.00001c97@Huawei.com>
In-Reply-To: <20201019131052.GC8004@e123083-lin>
References: <20201016152702.1513592-1-Jonathan.Cameron@huawei.com>
 <20201019103522.GK2628@hirez.programming.kicks-ass.net>
 <20201019123226.00006705@Huawei.com>
 <20201019131052.GC8004@e123083-lin>
Organization: Huawei Technologies Research and Development (UK) Ltd.

On Mon, 19 Oct 2020 15:10:52 +0200
Morten Rasmussen wrote:

> Hi Jonathan,
>
> On Mon, Oct 19, 2020 at 01:32:26PM +0100, Jonathan Cameron wrote:
> > On Mon, 19 Oct 2020 12:35:22 +0200
> > Peter Zijlstra wrote:
> >
> > > On Fri, Oct 16, 2020 at 11:27:02PM +0800, Jonathan Cameron wrote:
> > > > Both ACPI and DT provide the ability to describe additional layers of
> > > > topology between that of individual cores and higher level constructs
> > > > such as the level at which the last level cache is shared.
> > > > In ACPI this can be represented in PPTT as a Processor Hierarchy
> > > > Node Structure [1] that is the parent of the CPU cores and in turn
> > > > has a parent Processor Hierarchy Node Structure representing
> > > > a higher level of topology.
> > > >
> > > > For example Kunpeng 920 has clusters of 4 CPUs. These do not share
> > > > any cache resources, but the interconnect topology is such that
> > > > the cost to transfer ownership of a cacheline between CPUs within
> > > > a cluster is lower than between CPUs in different clusters on the same
> > > > die. Hence, it can make sense to deliberately schedule threads
> > > > sharing data to a single cluster.
> > > >
> > > > This patch simply exposes this information to userspace libraries
> > > > like hwloc by providing cluster_cpus and related sysfs attributes.
> > > > PoC of HWLOC support at [2].
> > > >
> > > > Note this patch only handles the ACPI case.
> > > >
> > > > Special consideration is needed for SMT processors, where it is
> > > > necessary to move 2 levels up the hierarchy from the leaf nodes
> > > > (thus skipping the processor core level).
> > Hi Peter,
> >
> > > I'm confused by all of this. The core level is exactly what you seem to
> > > want.
> >
> > It's the level above the core, whether in a multi-threaded core
> > or a single-threaded core. This may correspond to the level
> > at which caches are shared (typically L3). Cores are already well
> > represented via thread_siblings and similar. Extra confusion is that
> > the current core_siblings (deprecated) sysfs interface actually reflects
> > the package level and ignores anything in between core and
> > package (such as die on x86).
> >
> > So in a typical system with a hierarchical interconnect you would have
> >
> > thread
> > core
> > cluster (possibly multiple layers as mentioned in Brice's reply)
> > die
> > package
> >
> > Unfortunately, as pointed out in other branches of this thread, there is
> > no consistent generic name. I'm open to suggestions!
>
> IIUC, you are actually proposing another "die" level? I'm not sure if we
> can actually come up with a generic name since interconnects are highly
> implementation dependent.

Brice mentioned hwloc is using 'group'. That seems generic enough perhaps.

> How is your memory distributed? Do you already have NUMA nodes? If you
> want to keep tasks together, it might make sense to define the clusters
> (in your case) as NUMA nodes.

We already have all of the standard levels. We need at least one more.
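For concreteness, here is a minimal sketch (not from the patch itself) of
how a consumer such as hwloc might pick the new level up from sysfs. It
assumes a cluster_cpus_list attribute sitting alongside the existing
*_list files under /sys/devices/system/cpu/cpuN/topology/:

/*
 * Sketch only: dump the sibling lists for CPU 0 at each topology level.
 * cluster_cpus_list is the attribute assumed to be added by this patch;
 * the other names already exist today.
 */
#include <stdio.h>
#include <string.h>

static void show(int cpu, const char *attr)
{
	char path[128], buf[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, attr);
	f = fopen(path, "r");
	if (!f)
		return;		/* level not described on this system */
	if (fgets(buf, sizeof(buf), f)) {
		buf[strcspn(buf, "\n")] = '\0';
		printf("cpu%d %-20s %s\n", cpu, attr, buf);
	}
	fclose(f);
}

int main(void)
{
	show(0, "thread_siblings_list");	/* SMT threads within the core */
	show(0, "cluster_cpus_list");		/* proposed: CPUs in the same cluster */
	show(0, "die_cpus_list");		/* CPUs in the same die */
	show(0, "package_cpus_list");		/* CPUs in the same package */
	return 0;
}

If the attribute is absent the open simply fails, so software just falls
back to whichever levels are actually described.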
On a near future platform we'll have the full set (kunpeng920 is single
threaded).

So on kunpeng 920 we have

cores
(clusters)
die / llc shared at this level
package (multiple NUMA nodes in each package)
System, multiple packages.

>
> > Both ACPI PPTT and DT provide generic structures to represent layers of
> > topology. They don't name them as such, but in ACPI there are flags to
> > indicate package, core, thread.
>
> I think that is because those are the only ones that are fairly generic
> :-) They are also the only ones the scheduler cares about (plus NUMA).

Agreed, I'm not proposing we add these to the kernel scheduler (at least
for now). Another layer just means another layer of complexity.

> >
> > For example, in zen2 this would correspond to a 'core complex' consisting
> > of 4 CPU cores (each one 2 threads) sharing some local L3 cache.
> > https://en.wikichip.org/wiki/amd/microarchitectures/zen_2
> > In zen3 it looks like this level will be the same as that for the die.
> >
> > Given they used the name in knights landing (and as is pointed out in
> > another branch of this thread, it's the CPUID description) I think Intel
> > calls these 'tiles' (anyone confirm that?)
> >
> > A similar concept exists for some ARM processors.
> > https://en.wikichip.org/wiki/hisilicon/microarchitectures/taishan_v110
> > CCLs in the diagram on that page.
> >
> > Centriq 2400 had 2 core 'duplexes' which shared l2.
> > https://www.anandtech.com/show/11737/analyzing-falkors-microarchitecture-a-deep-dive-into-qualcomms-centriq-2400-for-windows-server-and-linux/3
> >
> > From the info released at hotchips, it looks like the thunderx3 deploys
> > a similar ring interconnect with groups of cores, each with 4 threads.
> > Not sure what they plan to call them yet though, or whether they will
> > choose to represent that layer of the topology in their firmware tables.
> >
> > Arm's CMN600 interconnect also supports such 'clusters', though I have no
> > idea if anyone has used it in this form yet. In that case, they are called
> > "processor compute clusters".
> > https://developer.arm.com/documentation/100180/0103/
> >
> > Xuantie-910 is cluster based as well (shares l2).
> >
> > So in many cases the cluster level corresponds to something we already have
> > visibility of due to cache sharing etc, but that isn't true in kunpeng 920.
>
> The problem I see is that the benefit of keeping tasks together due to
> the interconnect layout might vary significantly between systems. So if
> we introduce a new cpumask for cluster it has to represent roughly
> the same system properties, otherwise generic software consuming this
> information could be tricked.

Agreed. Any software would currently have to do its own benchmarking
to work out how to use the presented information. It would imply that
you 'want to look at this group of CPUs' rather than providing any hard
rules. The same is true of die, which we already have. What that means
will vary enormously between different designs in a fashion that may or
may not be independent of NUMA topology.

Note, there are people who do extensive benchmarking of NUMA topology
because the information provided is either inaccurate / missing, or not
of sufficient detail to do their scheduling. It's not a big load to do
that sort of thing on startup of software on an HPC system.

> If there is a provable benefit of having interconnect grouping
> information, I think it would be better represented by a distance matrix
> like we have for NUMA.
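(For reference, the matrix we have for NUMA today is already exported per
node via /sys/devices/system/node/nodeN/distance; a minimal sketch of
dumping it, assuming node numbering is contiguous:)

/*
 * Sketch only: print each node's row of the NUMA distance matrix.
 * Stops at the first missing node, so it assumes contiguous numbering.
 */
#include <stdio.h>

int main(void)
{
	char path[96], line[512];
	FILE *f;
	int node;

	for (node = 0; ; node++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fgets(line, sizeof(line), f))
			printf("node%d: %s", node, line);	/* row already ends in '\n' */
		fclose(f);
	}
	return 0;
}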
There have been some discussions in various forums about how to describe
the complexity of interconnects well enough to actually be useful. Those
have mostly floundered on the immense complexity of designing such a
description in a fashion any normal software would actually use. +cc
Jerome, who raised some of this in the kernel a while back.

Adding this cluster / group layer seems moderately safe, as it just says
'something here you should consider', rather than making particular
statements on expected performance etc.

So I fully agree it would be good to have that info, if we can figure out
how to do it! However, that is never going to be a short-term thing.

Thanks,

Jonathan

>
> Morten