Date: Thu, 19 May 2022 17:32:49 +0100
From: Ionela Voinescu
To: Sudeep Holla
Cc: Atish Patra, linux-kernel@vger.kernel.org, Atish Patra, Vincent Guittot,
    Morten Rasmussen, Dietmar Eggemann, Qing Wang,
    linux-arm-kernel@lists.infradead.org, linux-riscv@lists.infradead.org,
    Rob Herring
Subject: Re: [PATCH v2 0/8] arch_topology: Updates to add socket support
    and fix cluster ids
In-Reply-To: <20220518093325.2070336-1-sudeep.holla@arm.com>

Hi Sudeep,

On Wednesday 18 May 2022 at 10:33:17 (+0100), Sudeep Holla wrote:
> Hi All,
> 
> This series intends to fix some discrepancies we have in the CPU topology
> parsing from the device tree /cpu-map node, where the current behaviour
> also diverges from that of an ACPI enabled platform. The expectation is
> that both DT and ACPI enabled systems must present a consistent view of
> the CPU topology.
> 
> Currently we assign the generated cluster count as the physical package
> identifier for each CPU, which is wrong. The device tree binding for CPU
> topology supports sockets, which can be used to infer the socket or
> physical package identifier for a given CPU. Also, we don't check whether
> all the cores/threads belong to the same cluster before updating their
> sibling masks, which is fine as we don't set the cluster id yet.
> 
> These changes also assign the cluster identifier as parsed from the
> device tree cluster nodes within /cpu-map, without support for nesting
> of clusters. Finally, the series also adds support for socket nodes in
> /cpu-map. With this, parsing of the exact same information from ACPI
> PPTT and the /cpu-map DT node aligns well.
> 
> The only exception is that the last level cache id information can be
> inferred from the same ACPI PPTT, while we need to parse the CPU cache
> nodes in the device tree.
> 
> P.S: I have not cc-ed Greg and Rafael so that all the users of
> arch_topology agree with the changes first before we include them.
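
For reference, the /cpu-map node describes the topology as nested
socket/cluster/core (and optionally thread) nodes. The fragment below is
only an illustrative sketch, not something taken from this series: a
single made-up socket with two clusters of two cores each, where the
&cpuN labels are assumed to refer to cpu@N nodes defined elsewhere under
/cpus.

	cpus {
		/* cpu@0 .. cpu@3 nodes carrying the cpu0..cpu3 labels omitted */

		cpu-map {
			socket0 {
				cluster0 {
					core0 {
						cpu = <&cpu0>;
					};
					core1 {
						cpu = <&cpu1>;
					};
				};
				cluster1 {
					core0 {
						cpu = <&cpu2>;
					};
					core1 {
						cpu = <&cpu3>;
					};
				};
			};
		};
	};

As I read the cover letter, with patches 1/8 and 4/8 it is the socket0
level above, rather than the generated cluster count, that is meant to
provide the physical package identifier.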
> 
> v1[1]->v2:
> 	- Updated the ID validity check to include all non-negative values
> 	- Added support to get the device node for the CPU's last level cache
> 	- Added support to build llc_sibling on DT platforms
> 
> [1] https://lore.kernel.org/lkml/20220513095559.1034633-1-sudeep.holla@arm.com
> 
> Sudeep Holla (8):
>   arch_topology: Don't set cluster identifier as physical package identifier
>   arch_topology: Set thread sibling cpumask only within the cluster
>   arch_topology: Set cluster identifier in each core/thread from /cpu-map
>   arch_topology: Add support for parsing sockets in /cpu-map
>   arch_topology: Check for non-negative value rather than -1 for IDs validity
>   arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found
>   of: base: add support to get the device node for the CPU's last level cache
>   arch_topology: Add support to build llc_sibling on DT platforms
> 

Just a recommendation for patch-set structure: it would be best to use
the following sequence, so that the scheduler topology and behaviour
stay the same when the set is only partially applied (I'm currently
testing this on Juno, but it should be the case for other platforms as
well):

2/8 arch_topology: Set thread sibling cpumask only within the cluster
5/8 arch_topology: Check for non-negative value rather than -1 for IDs validity
6/8 arch_topology: Avoid parsing through all the CPUs once a outlier CPU is found

--> These are only preparation/cleanup patches and don't affect current
    functionality.

7/8 of: base: add support to get the device node for the CPU's last level cache
8/8 arch_topology: Add support to build llc_sibling on DT platforms

--> These will populate llc siblings, but that list will be equal to
    core siblings (based on package_id), so nothing changes in the
    scheduler topology (see the illustrative cache-node sketch at the
    end of this mail). Even if CONFIG_SCHED_CLUSTER=y, we still have
    cluster_id=-1, so nothing will change in that case either, for the
    patches so far.

1/8 arch_topology: Don't set cluster identifier as physical package identifier

--> 1/8 is the troublemaker if it's the first patch, as it will result
    in all CPUs being in core_siblings, so the topology will be
    flattened to just an MC level for a typical b.L system like Juno.
    But if you add it after all of the above patches, the llc_siblings
    will contribute to creating the same MC and DIE levels we expect.

3/8 arch_topology: Set cluster identifier in each core/thread from /cpu-map
4/8 arch_topology: Add support for parsing sockets in /cpu-map

--> Here 3/8 will start creating complications when clusters are
    described in DT and CONFIG_SCHED_CLUSTER=y, but I'll detail this in
    a reply to that patch. For CONFIG_SCHED_CLUSTER=n the topology and
    scheduler behaviour should be the same as before this set.

Hope it helps,
Ionela.

> 
>  drivers/base/arch_topology.c  | 75 +++++++++++++++++++++++++++--------
>  drivers/of/base.c             | 33 +++++++++++----
>  include/linux/arch_topology.h |  1 +
>  include/linux/of.h            |  1 +
>  4 files changed, 85 insertions(+), 25 deletions(-)
> 
> -- 
> 2.36.1
> 
> 
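
P.S. Since 7/8 and 8/8 rely on cache information from DT, here is a
purely illustrative sketch (labels, compatible string and cache levels
are made up, not taken from this series or from the Juno dts) of how CPU
nodes can point at a shared last level cache through next-level-cache
links; CPUs that resolve to the same last level cache node would then
end up in each other's llc_sibling mask:

	/* under the /cpus node */
	cpu0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		next-level-cache = <&l2_0>;
	};

	l2_0: l2-cache0 {
		compatible = "cache";
		cache-level = <2>;
		cache-unified;
		next-level-cache = <&l3>;
	};

	l3: l3-cache {
		compatible = "cache";
		cache-level = <3>;
		cache-unified;
	};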