Date: Fri, 19 Mar 2021 11:01:55 +0100
From: Greg KH
To: Jonathan Cameron
Cc: "Song Bao Hua (Barry Song)", "tim.c.chen@linux.intel.com",
    "catalin.marinas@arm.com", "will@kernel.org", "rjw@rjwysocki.net",
    "vincent.guittot@linaro.org", "bp@alien8.de", "tglx@linutronix.de",
    "mingo@redhat.com", "lenb@kernel.org", "peterz@infradead.org",
    "dietmar.eggemann@arm.com", "rostedt@goodmis.org", "bsegall@google.com",
    "mgorman@suse.de", "msys.mizuma@gmail.com", "valentin.schneider@arm.com",
    "juri.lelli@redhat.com", "mark.rutland@arm.com", "sudeep.holla@arm.com",
    "aubrey.li@linux.intel.com", "linux-arm-kernel@lists.infradead.org",
    "linux-kernel@vger.kernel.org", "linux-acpi@vger.kernel.org",
    "x86@kernel.org", "xuwei (O)", "Zengtao (B)", "guodong.xu@linaro.org",
    yangyicong, "Liguozhu (Kenneth)", "linuxarm@openeuler.org", "hpa@zytor.com"
Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within a die
Message-ID:
References: <20210319041618.14316-1-song.bao.hua@hisilicon.com>
 <20210319041618.14316-2-song.bao.hua@hisilicon.com>
 <20210319093616.00001879@Huawei.com>
In-Reply-To: <20210319093616.00001879@Huawei.com>

On Fri, Mar 19, 2021 at 09:36:16AM +0000, Jonathan Cameron wrote:
> On Fri, 19 Mar 2021 06:57:08 +0000
> "Song Bao Hua (Barry Song)" wrote:
>
> > > -----Original Message-----
> > > From: Greg KH [mailto:gregkh@linuxfoundation.org]
> > > Sent: Friday, March 19, 2021 7:35 PM
> > > To: Song Bao Hua (Barry Song)
> > > Cc: tim.c.chen@linux.intel.com; catalin.marinas@arm.com; will@kernel.org;
> > > rjw@rjwysocki.net; vincent.guittot@linaro.org; bp@alien8.de;
> > > tglx@linutronix.de; mingo@redhat.com; lenb@kernel.org; peterz@infradead.org;
> > > dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> > > mgorman@suse.de; msys.mizuma@gmail.com; valentin.schneider@arm.com; Jonathan
> > > Cameron; juri.lelli@redhat.com;
> > > mark.rutland@arm.com; sudeep.holla@arm.com;
> > > aubrey.li@linux.intel.com;
> > > linux-arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org;
> > > linux-acpi@vger.kernel.org; x86@kernel.org; xuwei (O);
> > > Zengtao (B); guodong.xu@linaro.org; yangyicong;
> > > Liguozhu (Kenneth); linuxarm@openeuler.org; hpa@zytor.com
> > > Subject: Re: [RFC PATCH v5 1/4] topology: Represent clusters of CPUs within
> > > a die
> > >
> > > On Fri, Mar 19, 2021 at 05:16:15PM +1300, Barry Song wrote:
> > > > diff --git a/Documentation/admin-guide/cputopology.rst
> > > b/Documentation/admin-guide/cputopology.rst
> > > > index b90dafc..f9d3745 100644
> > > > --- a/Documentation/admin-guide/cputopology.rst
> > > > +++ b/Documentation/admin-guide/cputopology.rst
> > > > @@ -24,6 +24,12 @@ core_id:
> > > >    identifier (rather than the kernel's). The actual value is
> > > >    architecture and platform dependent.
> > > >
> > > > +cluster_id:
> > > > +
> > > > +  the Cluster ID of cpuX. Typically it is the hardware platform's
> > > > +  identifier (rather than the kernel's). The actual value is
> > > > +  architecture and platform dependent.
> > > > +
> > > >  book_id:
> > > >
> > > >    the book ID of cpuX. Typically it is the hardware platform's
> > > > @@ -56,6 +62,14 @@ package_cpus_list:
> > > >    human-readable list of CPUs sharing the same physical_package_id.
> > > >    (deprecated name: "core_siblings_list")
> > > >
> > > > +cluster_cpus:
> > > > +
> > > > +  internal kernel map of CPUs within the same cluster.
> > > > +
> > > > +cluster_cpus_list:
> > > > +
> > > > +  human-readable list of CPUs within the same cluster.
> > > > +
> > > >  die_cpus:
> > > >
> > > >    internal kernel map of CPUs within the same die.
> > >
> > > Why are these sysfs files in this file, and not in a Documentation/ABI/
> > > file which can be correctly parsed and shown to userspace?
> >
> > Well, those ABIs have been there for quite a long time. It is like:
> >
> > [root@ceph1 topology]# ls
> > core_id  core_siblings  core_siblings_list  physical_package_id  thread_siblings  thread_siblings_list
> > [root@ceph1 topology]# pwd
> > /sys/devices/system/cpu/cpu100/topology
> > [root@ceph1 topology]# cat core_siblings_list
> > 64-127
> > [root@ceph1 topology]#
> > >
> > > Any chance you can fix that up here as well?
> >
> > Yes. We will send a separate patch to address this, which won't
> > be in this patchset. This patchset will be based on that one.
> >
> > >
> > > Also note that "list" is not something that goes in sysfs; sysfs is "one
> > > value per file", and a list is not "one value". How do you prevent
> > > overflowing the buffer of the sysfs file if you have a "list"?
> > >
> > At a glance, the list is using "-" rather than a real list:
> > [root@ceph1 topology]# cat core_siblings_list
> > 64-127
> >
> > Anyway, I will take a look at whether it has any chance to overflow.
> It could in theory be alternate CPUs as a comma-separated list.
> So it would get interesting around 500-1000 cpus (guessing).
>
> Hopefully no one has that crazy a cpu numbering scheme, but it's possible
> (note that cluster is fine for this, but I guess it might eventually
> happen for the core-siblings list (cpus within a package)).
>
> Shouldn't crash or anything like that, but it might terminate early.

We already have a broken sysfs API for listing LED numbers that has had
to be worked around in the past; please do not create a new one with
that same problem. We should learn from them :)

thanks,

greg k-h
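
For reference, the existing topology *_list files in drivers/base/topology.c
are emitted through cpumap_print_to_pagebuf(), which writes into the single
PAGE_SIZE sysfs buffer, so the worst case described above is truncated
("terminates early") rather than overflowed. A minimal sketch of how a
cluster_cpus_list attribute could be wired up follows; the helper name
topology_cluster_cpumask() is an assumption for illustration, not taken from
the patch itself:

#include <linux/cpumask.h>
#include <linux/device.h>

static ssize_t cluster_cpus_list_show(struct device *dev,
				      struct device_attribute *attr,
				      char *buf)
{
	/*
	 * dev->id is the CPU number for CPU devices.  Passing "true"
	 * selects the human-readable list form ("64-127" or "0,2,4,...");
	 * the output is clamped to the one-page sysfs buffer, so a
	 * pathological comma-separated list is truncated, not overflowed.
	 */
	return cpumap_print_to_pagebuf(true, buf,
				       topology_cluster_cpumask(dev->id));
}
static DEVICE_ATTR_RO(cluster_cpus_list);

The Documentation/ABI entry Greg asks for would look roughly like the stanza
below (illustrative only; the Date and the description wording here are
placeholders, not the actual submission):

What:		/sys/devices/system/cpu/cpuX/topology/cluster_id
		/sys/devices/system/cpu/cpuX/topology/cluster_cpus
		/sys/devices/system/cpu/cpuX/topology/cluster_cpus_list
Date:		March 2021
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Cluster topology attributes of cpuX, mirroring the
		descriptions in Documentation/admin-guide/cputopology.rst.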