From: Wei Xu
Date: Mon, 9 May 2022 21:32:10 -0700
Subject: Re: RFC: Memory Tiering Kernel Interfaces
To: Alistair Popple
Cc: Yang Shi, Andrew Morton, Dave Hansen, Huang Ying, Dan Williams,
    Linux MM, Greg Thelen, "Aneesh Kumar K.V", Jagdish Gediya,
    Linux Kernel Mailing List, Davidlohr Bueso, Michal Hocko,
    Baolin Wang, Brice Goglin, Feng Tang, Jonathan Cameron, Tim Chen
In-Reply-To: <87tua3h5r1.fsf@nvdebian.thelocal>
References: <87tua3h5r1.fsf@nvdebian.thelocal>
X-Mailing-List: linux-kernel@vger.kernel.org
On Thu, May 5, 2022 at 5:19 PM Alistair Popple wrote:
>
> Wei Xu writes:
>
> [...]
>
> >> >
> >> >
> >> > Tiering Hierarchy Initialization
> >> > `=============================='
> >> >
> >> > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> >> >
> >> > A device driver can remove its memory nodes from the top tier, e.g.
> >> > a dax driver can remove PMEM nodes from the top tier.
> >>
> >> With the topology built by firmware we should not need this.
>
> I agree that in an ideal world the hierarchy should be built by
> firmware based on something like the HMAT. But I also think being able
> to override this will be useful in getting there. Therefore a way of
> overriding the generated hierarchy would be good, either via sysfs or
> a kernel boot parameter, if we don't want to commit to a particular
> user interface now.
>
> However, I'm less sure letting device drivers override this is a good
> idea. How, for example, would a GPU driver make sure its node is in
> the top tier? By moving every node that the driver does not know about
> out of N_TOPTIER_MEMORY? That could get messy if, say, there were two
> drivers both of which wanted their node to be in the top tier.

The suggestion is to allow a device driver to opt its memory devices
out of the top tier, not the other way around. I agree that the kernel
should still be responsible for the final node-tier assignment by
taking into account all factors: the firmware tables, device driver
requests, and user overrides (kernel argument or sysfs).

> > I agree. But before we have such firmware, the kernel needs to do
> > its best to initialize memory tiers.
> >
> > Given that we know PMEM is slower than DRAM, but a dax device might
> > not be PMEM, a better place to set the tier for PMEM nodes can be
> > the ACPI code, e.g. acpi_numa_memory_affinity_init(), where we can
> > examine the ACPI_SRAT_MEM_NON_VOLATILE bit.
> >
> >> >
> >> > The kernel builds the memory tiering hierarchy and per-node demotion
> >> > order tier-by-tier starting from N_TOPTIER_MEMORY. For a node N, the
> >> > best distance nodes in the next lower tier are assigned to
> >> > node_demotion[N].preferred and all the nodes in the next lower tier
> >> > are assigned to node_demotion[N].allowed.
> >>
> >> I'm not sure whether it should be allowed to demote to multiple lower
> >> tiers. But it is totally fine to *NOT* allow it at the moment. Once we
> >> figure out a good way to define demotion targets, it could be extended
> >> to support this easily.
> >
> > You mean to only support MAX_TIERS=2 for now. I am fine with that.
> > There can be systems with 3 tiers, e.g. GPU -> DRAM -> PMEM, but it
> > is not clear yet whether we want to enable transparent memory tiering
> > across all 3 tiers on such systems.
>
> At some point I think we will need to deal with 3 tiers, but I'd be OK
> with limiting it to 2 for now if it makes things simpler.
>
> - Alistair
>
> >> >
> >> > node_demotion[N].preferred can be empty if no preferred demotion node
> >> > is available for node N.
> >> >
> >> > If userspace overrides the tiers via the memory_tiers sysfs
> >> > interface, the kernel then only rebuilds the per-node demotion order
> >> > accordingly.
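(To make the preferred/allowed split in the quoted text concrete, here
is a minimal sketch of what such a per-node demotion table could look
like. Only the preferred/allowed split comes from the proposal; the
type name and layout below are hypothetical, not from an actual patch.)

    /*
     * Hypothetical sketch of the proposed per-node demotion table.
     * Only the preferred/allowed split is from the RFC text above;
     * the type name and field layout are illustrative.
     */
    #include <linux/nodemask.h>

    struct demotion_target {
            nodemask_t preferred;  /* best-distance nodes in the next lower tier */
            nodemask_t allowed;    /* all nodes in the next lower tier */
    };

    static struct demotion_target node_demotion[MAX_NUMNODES];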
> >> >
> >> > Memory tiering hierarchy is rebuilt upon hot-add or hot-remove of a
> >> > memory node, but is NOT rebuilt upon hot-add or hot-remove of a CPU
> >> > node.
> >> >
> >> >
> >> > Memory Allocation for Demotion
> >> > `============================'
> >> >
> >> > When allocating a new demotion target page, both a preferred node
> >> > and the allowed nodemask are provided to the allocation function.
> >> > The default kernel allocation fallback order is used to allocate
> >> > the page from the specified node and nodemask.
> >> >
> >> > The mempolicy of the cpuset, vma, and owner task of the source page
> >> > can be set to refine the demotion nodemask, e.g. to prevent
> >> > demotion or to select a particular allowed node as the demotion
> >> > target.
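(As a concrete illustration of the allocation step just described: a
sketch only, with a hypothetical helper name. __alloc_pages() is the
existing allocator entry point that accepts both a preferred node and
a nodemask, which is what makes this scheme natural to implement.)

    #include <linux/gfp.h>

    /*
     * Sketch: allocate a demotion target page.  Try the preferred
     * node by itself first; if that fails, retry with the allowed
     * nodemask so the default fallback order picks any allowed node.
     */
    static struct page *alloc_demotion_page(int preferred_nid,
                                            nodemask_t *allowed)
    {
            gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_NOWARN;
            struct page *page;

            /* Only the preferred demotion node on the first attempt. */
            page = __alloc_pages(gfp | __GFP_THISNODE, 0, preferred_nid, NULL);
            if (page)
                    return page;

            /* Fall back to any node in the allowed demotion nodemask. */
            return __alloc_pages(gfp, 0, preferred_nid, allowed);
    }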
> >> >
> >> >
> >> > Examples
> >> > `======'
> >> >
> >> > * Example 1:
> >> >   Node 0 & 1 are DRAM nodes, node 2 & 3 are PMEM nodes.
> >> >
> >> >   Node 0 has node 2 as the preferred demotion target and can also
> >> >   fall back to node 3 for demotion.
> >> >
> >> >   Node 1 has node 3 as the preferred demotion target and can also
> >> >   fall back to node 2 for demotion.
> >> >
> >> >   Set mempolicy to prevent cross-socket demotion and memory access,
> >> >   e.g. cpuset.mems=0,2
> >> >
> >> > node distances:
> >> > node   0    1    2    3
> >> >    0  10   20   30   40
> >> >    1  20   10   40   30
> >> >    2  30   40   10   40
> >> >    3  40   30   40   10
> >> >
> >> > /sys/devices/system/node/memory_tiers
> >> > 0-1
> >> > 2-3
> >> >
> >> > N_TOPTIER_MEMORY: 0-1
> >> >
> >> > node_demotion[]:
> >> >   0: [2], [2-3]
> >> >   1: [3], [2-3]
> >> >   2: [],  []
> >> >   3: [],  []
> >> >
> >> > * Example 2:
> >> >   Node 0 & 1 are DRAM nodes.
> >> >   Node 2 is a PMEM node and closer to node 0.
> >> >
> >> >   Node 0 has node 2 as the preferred and only demotion target.
> >> >
> >> >   Node 1 has no preferred demotion target, but can still demote
> >> >   to node 2.
> >> >
> >> >   Set mempolicy to prevent cross-socket demotion and memory access,
> >> >   e.g. cpuset.mems=0,2
> >> >
> >> > node distances:
> >> > node   0    1    2
> >> >    0  10   20   30
> >> >    1  20   10   40
> >> >    2  30   40   10
> >> >
> >> > /sys/devices/system/node/memory_tiers
> >> > 0-1
> >> > 2
> >> >
> >> > N_TOPTIER_MEMORY: 0-1
> >> >
> >> > node_demotion[]:
> >> >   0: [2], [2]
> >> >   1: [],  [2]
> >> >   2: [],  []
> >> >
> >> > * Example 3:
> >> >   Node 0 & 1 are DRAM nodes.
> >> >   Node 2 is a PMEM node and has the same distance to node 0 & 1.
> >> >
> >> >   Node 0 has node 2 as the preferred and only demotion target.
> >> >
> >> >   Node 1 has node 2 as the preferred and only demotion target.
> >> >
> >> > node distances:
> >> > node   0    1    2
> >> >    0  10   20   30
> >> >    1  20   10   30
> >> >    2  30   30   10
> >> >
> >> > /sys/devices/system/node/memory_tiers
> >> > 0-1
> >> > 2
> >> >
> >> > N_TOPTIER_MEMORY: 0-1
> >> >
> >> > node_demotion[]:
> >> >   0: [2], [2]
> >> >   1: [2], [2]
> >> >   2: [],  []
> >> >
> >> > * Example 4:
> >> >   Node 0 & 1 are DRAM nodes, node 2 is a memory-only DRAM node.
> >> >
> >> >   All nodes are top-tier.
> >> >
> >> > node distances:
> >> > node   0    1    2
> >> >    0  10   20   30
> >> >    1  20   10   30
> >> >    2  30   30   10
> >> >
> >> > /sys/devices/system/node/memory_tiers
> >> > 0-2
> >> >
> >> > N_TOPTIER_MEMORY: 0-2
> >> >
> >> > node_demotion[]:
> >> >   0: [], []
> >> >   1: [], []
> >> >   2: [], []
> >> >
> >> > * Example 5:
> >> >   Node 0 is a DRAM node with CPU.
> >> >   Node 1 is an HBM node.
> >> >   Node 2 is a PMEM node.
> >> >
> >> >   With a userspace override, node 1 is the top tier and has node 0
> >> >   as the preferred and only demotion target.
> >> >
> >> >   Node 0 is in the second tier, tier 1, and has node 2 as the
> >> >   preferred and only demotion target.
> >> >
> >> >   Node 2 is in the lowest tier, tier 2, and has no demotion targets.
> >> >
> >> > node distances:
> >> > node   0    1    2
> >> >    0  10   21   30
> >> >    1  21   10   40
> >> >    2  30   40   10
> >> >
> >> > /sys/devices/system/node/memory_tiers (userspace override)
> >> > 1
> >> > 0
> >> > 2
> >> >
> >> > N_TOPTIER_MEMORY: 1
> >> >
> >> > node_demotion[]:
> >> >   0: [2], [2]
> >> >   1: [0], [0]
> >> >   2: [],  []
> >> >
> >> > -- Wei
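(As a sanity check on the tables above, here is a small self-contained
userspace program, purely illustrative and not kernel code, that
recomputes the node_demotion[] entries for Example 1 from its distance
matrix and two-tier split: preferred is the best-distance node in the
next lower tier, allowed is every node in that tier.)

    #include <stdio.h>

    #define NR_NODES 4

    /* Distance matrix from Example 1. */
    static const int dist[NR_NODES][NR_NODES] = {
            { 10, 20, 30, 40 },
            { 20, 10, 40, 30 },
            { 30, 40, 10, 40 },
            { 40, 30, 40, 10 },
    };

    /* tier[n]: 0 = top tier (DRAM nodes 0-1), 1 = lower tier (PMEM nodes 2-3). */
    static const int tier[NR_NODES] = { 0, 0, 1, 1 };

    int main(void)
    {
            for (int n = 0; n < NR_NODES; n++) {
                    int best = -1;

                    /* preferred = best-distance node in the next lower tier */
                    for (int t = 0; t < NR_NODES; t++) {
                            if (tier[t] != tier[n] + 1)
                                    continue;
                            if (best < 0 || dist[n][t] < dist[n][best])
                                    best = t;
                    }

                    if (best >= 0)
                            printf("%d: [%d], [", n, best);
                    else
                            printf("%d: [], [", n);

                    /* allowed = all nodes in the next lower tier */
                    for (int t = 0; t < NR_NODES; t++)
                            if (tier[t] == tier[n] + 1)
                                    printf(" %d", t);
                    printf(" ]\n");
            }
            return 0;
    }

This prints "0: [2], [ 2 3 ]" and "1: [3], [ 2 3 ]" with empty entries
for nodes 2 and 3, matching the node_demotion[] table in Example 1.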