From: "Huang, Ying"
To: Aneesh Kumar K V
Cc: Wei Xu, Johannes Weiner, Linux MM, Andrew Morton, Yang Shi,
 Davidlohr Bueso, Tim C Chen, Michal Hocko, Linux Kernel Mailing List,
 Hesham Almatary, Dave Hansen, Jonathan Cameron, Alistair Popple,
 Dan Williams, jvgediya.oss@gmail.com, Bharata B Rao, Greg Thelen,
 Greg Kroah-Hartman, "Rafael J. Wysocki"
Wysocki" Subject: Re: [PATCH v3 updated] mm/demotion: Expose memory tier details via sysfs References: <20220830081736.119281-1-aneesh.kumar@linux.ibm.com> <87pmgezkhp.fsf@yhuang6-desk2.ccr.corp.intel.com> <87fshaz63h.fsf@yhuang6-desk2.ccr.corp.intel.com> <698120ce-d4df-3d13-dea9-a8f5c298783c@linux.ibm.com> <87bkryz4nh.fsf@yhuang6-desk2.ccr.corp.intel.com> <2b4ddc45-74ae-27df-d973-6724f61f4e18@linux.ibm.com> <877d2mz3c1.fsf@yhuang6-desk2.ccr.corp.intel.com> <45488760-02b5-115b-c16d-5219303f2f33@linux.ibm.com> <871qsuyzr2.fsf@yhuang6-desk2.ccr.corp.intel.com> <672e528d-40b7-fc12-9b0c-1591d586c079@linux.ibm.com> <87wnamxi30.fsf@yhuang6-desk2.ccr.corp.intel.com> <5aaf395d-514a-2717-58c6-3845b97692bd@linux.ibm.com> <87sfl6y4d0.fsf@yhuang6-desk2.ccr.corp.intel.com> <87ilm2xv26.fsf@yhuang6-desk2.ccr.corp.intel.com> <8589e329-d06d-3be2-55f8-76d4539ea80f@linux.ibm.com> Date: Mon, 05 Sep 2022 13:53:53 +0800 In-Reply-To: <8589e329-d06d-3be2-55f8-76d4539ea80f@linux.ibm.com> (Aneesh Kumar K. V.'s message of "Mon, 5 Sep 2022 10:57:50 +0530") Message-ID: <87a67ext72.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Aneesh Kumar K V writes: > On 9/5/22 10:43 AM, Huang, Ying wrote: >> Aneesh Kumar K V writes: >> >>> On 9/5/22 7:22 AM, Huang, Ying wrote: >>>> Aneesh Kumar K V writes: >>>> >>>>> On 9/2/22 2:34 PM, Huang, Ying wrote: >>>>>> Aneesh Kumar K V writes: >>>>>> >>>>>>> On 9/2/22 1:27 PM, Huang, Ying wrote: >>>>>>>> Wei Xu writes: >>>>>>>> >>>>>>>>> On Thu, Sep 1, 2022 at 11:44 PM Aneesh Kumar K V >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> On 9/2/22 12:10 PM, Huang, Ying wrote: >>>>>>>>>>> Aneesh Kumar K V writes: >>>>>>>>>>> >>>>>>>>>>>> On 9/2/22 11:42 AM, Huang, Ying wrote: >>>>>>>>>>>>> Aneesh Kumar K V writes: >>>>>>>>>>>>> >>>>>>>>>>>>>> On 9/2/22 11:10 AM, Huang, Ying wrote: >>>>>>>>>>>>>>> Aneesh Kumar K V writes: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On 9/2/22 10:39 AM, Wei Xu wrote: >>>>>>>>>>>>>>>>> On Thu, Sep 1, 2022 at 5:33 PM Huang, Ying wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Aneesh Kumar K V writes: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On 9/1/22 12:31 PM, Huang, Ying wrote: >>>>>>>>>>>>>>>>>>>> "Aneesh Kumar K.V" writes: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> This patch adds /sys/devices/virtual/memory_tiering/ where all memory tier >>>>>>>>>>>>>>>>>>>>> related details can be found. All allocated memory tiers will be listed >>>>>>>>>>>>>>>>>>>>> there as /sys/devices/virtual/memory_tiering/memory_tierN/ >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> The nodes which are part of a specific memory tier can be listed via >>>>>>>>>>>>>>>>>>>>> /sys/devices/virtual/memory_tiering/memory_tierN/nodes >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> I think "memory_tier" is a better subsystem/bus name than >>>>>>>>>>>>>>>>>>>> memory_tiering. Because we have a set of memory_tierN devices inside. >>>>>>>>>>>>>>>>>>>> "memory_tier" sounds more natural. I know this is subjective, just my >>>>>>>>>>>>>>>>>>>> preference. 
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I missed replying to this earlier. I will keep "memory_tiering" as the subsystem name in v4,
>>>>>>>>>>>>>>>> because we would want it to be a subsystem where all memory tiering related details can be found,
>>>>>>>>>>>>>>>> including memory type in the future. This is as per the discussion at
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> https://lore.kernel.org/linux-mm/CAAPL-u9TKbHGztAF=r-io3gkX7gorUunS2UfstudCWuihrA=0g@mail.gmail.com
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I don't think that it's a good idea to mix 2 types of devices in one
>>>>>>>>>>>>>>> subsystem (bus). If my understanding is correct, that breaks the
>>>>>>>>>>>>>>> driver core convention.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> All these are virtual devices. I am not sure I follow what you mean by 2 types of devices.
>>>>>>>>>>>>>> memory_tiering is a subsystem that represents all the details w.r.t. memory tiering. It shows
>>>>>>>>>>>>>> details of memory tiers and can possibly contain details of different memory types.
>>>>>>>>>>>>>
>>>>>>>>>>>>> IMHO, memory_tier and memory_type are 2 kinds of devices. They have
>>>>>>>>>>>>> almost totally different attributes (sysfs files). So, we should create
>>>>>>>>>>>>> 2 buses for them, each with its own attribute group. "virtual" itself
>>>>>>>>>>>>> isn't a subsystem.
>>>>>>>>>>>>
>>>>>>>>>>>> Considering that both sets of details are related to memory tiering, wouldn't it be much simpler to consolidate
>>>>>>>>>>>> them within the same subdirectory? I am still not clear why you are suggesting they need to be in different
>>>>>>>>>>>> sysfs hierarchies. It doesn't break any driver core convention, as you mentioned earlier.
>>>>>>>>>>>>
>>>>>>>>>>>> /sys/devices/virtual/memory_tiering/memory_tierN
>>>>>>>>>>>> /sys/devices/virtual/memory_tiering/memory_typeN
>>>>>>>>>>>
>>>>>>>>>>> I think we should add
>>>>>>>>>>>
>>>>>>>>>>> /sys/devices/virtual/memory_tier/memory_tierN
>>>>>>>>>>> /sys/devices/virtual/memory_type/memory_typeN
>>>>>>>>>>>
>>>>>>>>>> I am trying to find out if there is a technical reason for doing so.
>>>>>>>>>>
>>>>>>>>>>> I don't think this is complex. Devices of the same bus/subsystem should
>>>>>>>>>>> have mostly the same attributes. This is my understanding of the driver
>>>>>>>>>>> core convention.
>>>>>>>>>>>
>>>>>>>>>> I was not looking at this from a code complexity point of view. Instead of having multiple directories
>>>>>>>>>> with details w.r.t. memory tiering, I was looking at consolidating the details
>>>>>>>>>> within the directory /sys/devices/virtual/memory_tiering (similar to how all virtual devices
>>>>>>>>>> are consolidated within /sys/devices/virtual/).
>>>>>>>>>>
>>>>>>>>>> -aneesh
>>>>>>>>>
>>>>>>>>> Here is an example of /sys/bus/nd/devices (I know it is not under
>>>>>>>>> /sys/devices/virtual, but it can still serve as a reference):
>>>>>>>>>
>>>>>>>>> ls -1 /sys/bus/nd/devices
>>>>>>>>>
>>>>>>>>> namespace2.0
>>>>>>>>> namespace3.0
>>>>>>>>> ndbus0
>>>>>>>>> nmem0
>>>>>>>>> nmem1
>>>>>>>>> region0
>>>>>>>>> region1
>>>>>>>>> region2
>>>>>>>>> region3
>>>>>>>>>
>>>>>>>>> So I think it is not unreasonable if we want to group memory tiering
>>>>>>>>> related interfaces within a single top directory.
>>>>>>>>
>>>>>>>> Thanks for pointing this out. My original understanding of the driver core
>>>>>>>> wasn't correct.
>>>>>>>>
>>>>>>>> But I still think it's better to separate instead of mixing memory_tier
>>>>>>>> and memory_type.
>>>>>>>> Per my understanding, memory_type shows information
>>>>>>>> (abstract distance, latency, bandwidth, etc.) about memory types (and
>>>>>>>> nodes); it can be useful even without memory tiers. That is, memory
>>>>>>>> types describe the physical characteristics, while memory tiers reflect
>>>>>>>> the policy.
>>>>>>>>
>>>>>>> The latency and bandwidth details are already exposed via
>>>>>>>
>>>>>>> /sys/devices/system/node/nodeY/access0/initiators/
>>>>>>>
>>>>>>> Documentation/admin-guide/mm/numaperf.rst
>>>>>>>
>>>>>>> That is the interface that libraries like libmemkind will look at for finding
>>>>>>> details w.r.t. latency/bandwidth.
>>>>>>
>>>>>> Yes. But with only that, it's still inconvenient to find out which nodes
>>>>>> belong to the same memory type (have the same performance, the same topology,
>>>>>> are managed by the same driver, etc.). So memory types can still provide useful
>>>>>> information even without memory tiering.
>>>>>>
>>>>> I am not sure I quite follow what to conclude from your reply. I used the subsystem name
>>>>> "memory_tiering" so that all memory tiering related information can be consolidated there.
>>>>> I guess you agreed with the above part, that we can consolidate things like that.
>>>>
>>>> I just personally prefer to separate the memory_tier and memory_type sysfs
>>>> directories, because memory_type describes the physical memory types and their
>>>> performance, while memory_tier is more about the policy used to group
>>>> memory_types.
>>>>
>>> IMHO we can decide on that based on why we end up adding memory_type details to sysfs. If that
>>> is only for memory tier modification from userspace, we can look at adding it in the memory tiering
>>> sysfs hierarchy.
>>>
>>> Also, since we have precedent for consolidating things within a sysfs hierarchy, as explained in
>>> previous emails, I think we should keep "memory_tiering" as the sysfs subsystem name. I hope we
>>> can get agreement on that for now?
>>
>> I prefer to separate memory_tier and memory_type, so the subsystem name
>> should be "memory_tier". You prefer to consolidate memory_tier and
>> memory_type, so the subsystem name should be "memory_tiering".
>>
>> The main reason behind my idea is that memory_type isn't tied to
>> memory tiering directly. It describes a hardware property. Even if
>> we don't use memory tiering, we can still use it to classify the
>> memory devices in the system.
>>
>> Why do you want to consolidate them? To remove one directory from
>> sysfs?
>>
> So that it is more intuitive for users to go to the memory_tiering sysfs hierarchy
> to change the memory tier levels. As I mentioned earlier, the reason for consolidating things
> is to accommodate the possibility of supporting changing the abstract distance of a memory type,
> so that we can change the memory tier assignment of that specific
> memory type.

If we put memory_tier and memory_type into 2 directories, will it really be much harder to change the abstract distance of a memory_type?

> I don't see any other reason we would want to expose memory type to
> userspace as of now.

Just like we expose the device tree to user space via sysfs, memory types describe some hardware property directly. Users need this hardware information to manage their system.

Best Regards,
Huang, Ying

>> I want to get opinions from other people too.
>>
>> Best Regards,
>> Huang, Ying
>
> -aneesh
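
[As a concrete illustration of the userspace side of the interface being debated: a hypothetical plain-C sketch that walks the proposed /sys/devices/virtual/memory_tiering/memory_tierN/nodes layout and prints each tier's node list. The glob pattern and error handling are assumptions for illustration, not part of the patch; this is roughly what a library like libmemkind could do to discover tiers.]

	/* Hypothetical sketch: enumerate memory tiers via the proposed sysfs
	 * layout and print which NUMA nodes each tier contains.
	 */
	#include <glob.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		glob_t g;
		char buf[256];

		/* Assumed layout from the patch under discussion */
		if (glob("/sys/devices/virtual/memory_tiering/memory_tier*/nodes",
			 0, NULL, &g) != 0) {
			fprintf(stderr, "no memory tiers found\n");
			return 1;
		}

		for (size_t i = 0; i < g.gl_pathc; i++) {
			FILE *f = fopen(g.gl_pathv[i], "r");

			if (!f)
				continue;
			if (fgets(buf, sizeof(buf), f)) {
				buf[strcspn(buf, "\n")] = '\0';
				/* e.g. ".../memory_tier0/nodes: 0,2-3" */
				printf("%s: %s\n", g.gl_pathv[i], buf);
			}
			fclose(f);
		}
		globfree(&g);
		return 0;
	}
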