From: "Huang, Ying"
To: Aneesh Kumar K V
Cc: Wei Xu, Johannes Weiner, Linux MM, Andrew Morton, Yang Shi, Davidlohr Bueso, Tim C Chen, Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams, jvgediya.oss@gmail.com, Bharata B Rao, Greg Thelen, Greg Kroah-Hartman, "Rafael J. Wysocki"
Subject: Re: [PATCH v3 updated] mm/demotion: Expose memory tier details via sysfs
Date: Mon, 05 Sep 2022 09:52:43 +0800
Message-ID: <87sfl6y4d0.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <5aaf395d-514a-2717-58c6-3845b97692bd@linux.ibm.com> (Aneesh Kumar K. V.'s message of "Fri, 2 Sep 2022 15:14:14 +0530")
References: <20220830081736.119281-1-aneesh.kumar@linux.ibm.com> <87tu5rzigc.fsf@yhuang6-desk2.ccr.corp.intel.com> <87pmgezkhp.fsf@yhuang6-desk2.ccr.corp.intel.com> <87fshaz63h.fsf@yhuang6-desk2.ccr.corp.intel.com> <698120ce-d4df-3d13-dea9-a8f5c298783c@linux.ibm.com> <87bkryz4nh.fsf@yhuang6-desk2.ccr.corp.intel.com> <2b4ddc45-74ae-27df-d973-6724f61f4e18@linux.ibm.com> <877d2mz3c1.fsf@yhuang6-desk2.ccr.corp.intel.com> <45488760-02b5-115b-c16d-5219303f2f33@linux.ibm.com> <871qsuyzr2.fsf@yhuang6-desk2.ccr.corp.intel.com> <672e528d-40b7-fc12-9b0c-1591d586c079@linux.ibm.com> <87wnamxi30.fsf@yhuang6-desk2.ccr.corp.intel.com> <5aaf395d-514a-2717-58c6-3845b97692bd@linux.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Aneesh Kumar K V writes:

> On 9/2/22 2:34 PM, Huang, Ying wrote:
>> Aneesh Kumar K V writes:
>>
>>> On 9/2/22 1:27 PM, Huang, Ying wrote:
>>>> Wei Xu writes:
>>>>
>>>>> On Thu, Sep 1, 2022 at 11:44 PM Aneesh Kumar K V wrote:
>>>>>>
>>>>>> On 9/2/22 12:10 PM, Huang, Ying wrote:
>>>>>>> Aneesh Kumar K V writes:
>>>>>>>
>>>>>>>> On 9/2/22 11:42 AM, Huang, Ying wrote:
>>>>>>>>> Aneesh Kumar K V writes:
>>>>>>>>>
>>>>>>>>>> On 9/2/22 11:10 AM, Huang, Ying wrote:
>>>>>>>>>>> Aneesh Kumar K V writes:
>>>>>>>>>>>
>>>>>>>>>>>> On 9/2/22 10:39 AM, Wei Xu wrote:
>>>>>>>>>>>>> On Thu, Sep 1, 2022 at 5:33 PM Huang, Ying wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Aneesh Kumar K V writes:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 9/1/22 12:31 PM, Huang, Ying wrote:
>>>>>>>>>>>>>>>> "Aneesh Kumar K.V" writes:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This patch adds /sys/devices/virtual/memory_tiering/ where all memory tier
>>>>>>>>>>>>>>>>> related details can be found. All allocated memory tiers will be listed
>>>>>>>>>>>>>>>>> there as /sys/devices/virtual/memory_tiering/memory_tierN/
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The nodes which are part of a specific memory tier can be listed via
>>>>>>>>>>>>>>>>> /sys/devices/virtual/memory_tiering/memory_tierN/nodes
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I think "memory_tier" is a better subsystem/bus name than
>>>>>>>>>>>>>>>> "memory_tiering", because we have a set of memory_tierN devices inside.
>>>>>>>>>>>>>>>> "memory_tier" sounds more natural. I know this is subjective, just my
>>>>>>>>>>>>>>>> preference.
>>>>>>>>>>>>
>>>>>>>>>>>> I missed replying to this earlier. I will keep "memory_tiering" as the
>>>>>>>>>>>> subsystem name in v4, because we would want it to be a subsystem where
>>>>>>>>>>>> all memory tiering related details can be found, including memory type
>>>>>>>>>>>> in the future. This is as per the discussion at
>>>>>>>>>>>>
>>>>>>>>>>>> https://lore.kernel.org/linux-mm/CAAPL-u9TKbHGztAF=r-io3gkX7gorUunS2UfstudCWuihrA=0g@mail.gmail.com
>>>>>>>>>>>
>>>>>>>>>>> I don't think that it's a good idea to mix 2 types of devices in one
>>>>>>>>>>> subsystem (bus). If my understanding is correct, that breaks the
>>>>>>>>>>> driver core convention.
>>>>>>>>>>
>>>>>>>>>> All these are virtual devices. I am not sure I follow what you mean by
>>>>>>>>>> 2 types of devices. memory_tiering is a subsystem that represents all
>>>>>>>>>> the details w.r.t. memory tiering. It shows details of memory tiers
>>>>>>>>>> and can possibly contain details of different memory types.
>>>>>>>>>
>>>>>>>>> IMHO, memory_tier and memory_type are 2 kinds of devices. They have
>>>>>>>>> almost totally different attributes (sysfs files). So, we should
>>>>>>>>> create 2 buses for them, each with its own attribute group. "virtual"
>>>>>>>>> itself isn't a subsystem.
>>>>>>>>
>>>>>>>> Considering that both sets of details are related to memory tiering,
>>>>>>>> wouldn't it be much simpler to consolidate them within the same
>>>>>>>> subdirectory? I am still not clear why you are suggesting they need to
>>>>>>>> be in different sysfs hierarchies. It doesn't break any driver core
>>>>>>>> convention, as you mentioned earlier.
>>>>>>>>
>>>>>>>> /sys/devices/virtual/memory_tiering/memory_tierN
>>>>>>>> /sys/devices/virtual/memory_tiering/memory_typeN
>>>>>>>
>>>>>>> I think we should add
>>>>>>>
>>>>>>> /sys/devices/virtual/memory_tier/memory_tierN
>>>>>>> /sys/devices/virtual/memory_type/memory_typeN
>>>>>>
>>>>>> I am trying to find out whether there is a technical reason to do that.
>>>>>>
>>>>>>> I don't think this is complex. Devices of the same bus/subsystem
>>>>>>> should have mostly the same attributes. This is my understanding of
>>>>>>> the driver core convention.
>>>>>>
>>>>>> I was not looking at this from a code complexity point of view. Instead
>>>>>> of having multiple directories with details w.r.t. memory tiering, I
>>>>>> was looking at consolidating the details within the directory
>>>>>> /sys/devices/virtual/memory_tiering (similar to how all virtual devices
>>>>>> are consolidated within /sys/devices/virtual/).
>>>>>>
>>>>>> -aneesh
>>>>>
>>>>> Here is an example of /sys/bus/nd/devices (I know it is not under
>>>>> /sys/devices/virtual, but it can still serve as a reference):
>>>>>
>>>>> ls -1 /sys/bus/nd/devices
>>>>>
>>>>> namespace2.0
>>>>> namespace3.0
>>>>> ndbus0
>>>>> nmem0
>>>>> nmem1
>>>>> region0
>>>>> region1
>>>>> region2
>>>>> region3
>>>>>
>>>>> So I think it is not unreasonable if we want to group memory tiering
>>>>> related interfaces within a single top directory.
>>>>
>>>> Thanks for pointing this out. My original understanding of the driver
>>>> core wasn't correct.
>>>>
>>>> But I still think it's better to separate memory_tier and memory_type
>>>> instead of mixing them. Per my understanding, memory_type shows
>>>> information (abstract distance, latency, bandwidth, etc.) about memory
>>>> types (and nodes); it can be useful even without memory tiers. That is,
>>>> memory types describe the physical characteristics, while memory tiers
>>>> reflect the policy.
>>>
>>> The latency and bandwidth details are already exposed via
>>>
>>> /sys/devices/system/node/nodeY/access0/initiators/
>>>
>>> Documentation/admin-guide/mm/numaperf.rst
>>>
>>> That is the interface that libraries like libmemkind will look at for
>>> finding details w.r.t. latency/bandwidth.
>>
>> Yes. But with only that, it's still inconvenient to find out which nodes
>> belong to the same memory type (have the same performance, the same
>> topology, are managed by the same driver, etc.). So memory types can
>> still provide useful information even without memory tiering.
>
> I am not sure I quite follow what to conclude from your reply. I used the
> subsystem name "memory_tiering" so that all memory tiering related
> information can be consolidated there. I guess you agreed to the above
> part, that we can consolidate things like that.

I just personally prefer to separate the memory_tier and memory_type sysfs
directories, because memory_type describes the physical memory types and
their performance, while memory_tier is more about the policy used to group
memory_types.

> We might end up adding memory_type there if we allow changing the
> "abstract distance" of a memory type from userspace later. Otherwise, I
> don't see a reason for memory type to be exposed. But then we don't have
> to decide on this now.

As above, because I think memory_type can provide value even outside of
memory_tier, I personally prefer to add the memory_type sysfs interface
anyway.

Best Regards,
Huang, Ying
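[Editor's illustration, not part of the thread: the `nodes` file proposed in the patch would, by the usual sysfs convention for node masks, contain a kernel-style node list such as `0,2-3`. The sketch below shows how userspace might read it; the sysfs path comes from the patch description, while the parsing helper and `tier_nodes` function are hypothetical names, not an existing API.]

```python
def parse_nodelist(s: str) -> list[int]:
    """Parse a kernel-style node list (e.g. "0,2-3") into [0, 2, 3]."""
    nodes: list[int] = []
    s = s.strip()
    if not s:
        return nodes
    for part in s.split(","):
        if "-" in part:
            # A range like "2-3" expands to every node in the interval.
            lo, hi = part.split("-")
            nodes.extend(range(int(lo), int(hi) + 1))
        else:
            nodes.append(int(part))
    return nodes


def tier_nodes(tier: int) -> list[int]:
    # Path as proposed in the patch under discussion; assumes the file
    # exists on a kernel with the memory tiering sysfs interface.
    path = f"/sys/devices/virtual/memory_tiering/memory_tier{tier}/nodes"
    with open(path) as f:
        return parse_nodelist(f.read())
```

For example, a `nodes` file containing `0,2-3` would yield `[0, 2, 3]`.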