MIME-Version: 1.0
References: <1553316275-21985-1-git-send-email-yang.shi@linux.alibaba.com>
 <1553316275-21985-2-git-send-email-yang.shi@linux.alibaba.com>
 <688dffbc-2adc-005d-223e-fe488be8c5fc@linux.alibaba.com>
In-Reply-To: <688dffbc-2adc-005d-223e-fe488be8c5fc@linux.alibaba.com>
From: Dan Williams
Date: Mon, 25 Mar 2019 16:18:04 -0700
Subject: Re: [PATCH 01/10] mm: control memory placement by nodemask for two tier main memory
To: Yang Shi
Cc: Michal Hocko, Mel Gorman, Rik van Riel, Johannes Weiner, Andrew Morton,
 Dave Hansen, Keith Busch, Fengguang Wu, "Du, Fan", "Huang, Ying",
 Linux MM, Linux Kernel Mailing List, Vishal L Verma
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 25, 2019 at 12:28 PM Yang Shi wrote:
>
> On 3/23/19 10:21 AM, Dan Williams wrote:
> > On Fri, Mar 22, 2019 at 9:45 PM Yang Shi wrote:
> >> When running applications on a machine with NVDIMM as a NUMA node, the
> >> memory allocation may end up on the NVDIMM node. This may result in silent
> >> performance degradation and regression due to the difference in hardware
> >> properties.
> >>
> >> DRAM-first should be obeyed to prevent surprising regressions. Any
> >> non-DRAM nodes should be excluded from default allocation.
> >> Use nodemask
> >> to control the memory placement. Introduce def_alloc_nodemask, which has
> >> only DRAM nodes set. Any non-DRAM allocation should be specified by
> >> NUMA policy explicitly.
> >>
> >> In the future we may be able to extract the memory characteristics from
> >> HMAT or another source to build up the default allocation nodemask.
> >> However, just distinguish DRAM and PMEM (non-DRAM) nodes by the SRAT flag
> >> for the time being.
> >>
> >> Signed-off-by: Yang Shi
> >> ---
> >>  arch/x86/mm/numa.c     |  1 +
> >>  drivers/acpi/numa.c    |  8 ++++++++
> >>  include/linux/mmzone.h |  3 +++
> >>  mm/page_alloc.c        | 18 ++++++++++++++++--
> >>  4 files changed, 28 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> >> index dfb6c4d..d9e0ca4 100644
> >> --- a/arch/x86/mm/numa.c
> >> +++ b/arch/x86/mm/numa.c
> >> @@ -626,6 +626,7 @@ static int __init numa_init(int (*init_func)(void))
> >>         nodes_clear(numa_nodes_parsed);
> >>         nodes_clear(node_possible_map);
> >>         nodes_clear(node_online_map);
> >> +       nodes_clear(def_alloc_nodemask);
> >>         memset(&numa_meminfo, 0, sizeof(numa_meminfo));
> >>         WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory,
> >>                                   MAX_NUMNODES));
> >> diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
> >> index 867f6e3..79dfedf 100644
> >> --- a/drivers/acpi/numa.c
> >> +++ b/drivers/acpi/numa.c
> >> @@ -296,6 +296,14 @@ void __init acpi_numa_slit_init(struct acpi_table_slit *slit)
> >>                 goto out_err_bad_srat;
> >>         }
> >>
> >> +       /*
> >> +        * Non-volatile memory is excluded from the zonelist by default.
> >> +        * Only regular DRAM nodes are set in the default allocation node
> >> +        * mask.
> >> +        */
> >> +       if (!(ma->flags & ACPI_SRAT_MEM_NON_VOLATILE))
> >> +               node_set(node, def_alloc_nodemask);
> > Hmm, no, I don't think we should do this. Especially considering that
> > current-generation NVDIMMs are energy-backed DRAM, there is no
> > performance difference that should be assumed from the non-volatile
> > flag.
>
> Actually, here I would like to initialize a node mask for default
> allocation. Memory allocation should not end up on any nodes excluded by
> this node mask unless they are specified by mempolicy.
>
> We may have a few different ways or criteria to initialize the node
> mask; for example, we can read from HMAT (when HMAT is ready in the
> future), and we definitely could have non-DRAM nodes set if they have no
> performance difference (I suppose you mean NVDIMM-F or HBM).
>
> As long as there are different tiers of main memory, distinguished by
> performance, IMHO there should be a defined default allocation node
> mask to control the memory placement, no matter where we get the
> information.

I understand the intent, but I don't think the kernel should have such
a hardline policy by default. However, it would be a worthwhile
mechanism and policy to consider for the dax-hotplug userspace
tooling. I.e. arrange for a given device-dax instance to be onlined,
but set the policy to require explicit opt-in by numa binding for it
to be an allocation / migration option. I added Vishal to the cc, who
is looking into such policy tooling.

> But, for now, we haven't had such information ready for such use yet, so
> the SRAT flag might be a choice.
>
> >
> > Why isn't default SLIT distance sufficient for ensuring a DRAM-first
> > default policy?
>
> "DRAM-first" may sound ambiguous; actually I mean "DRAM only by
> default". SLIT can just tell us which node is local and which is
> remote, but it can't tell us the performance difference.

I think it's a useful semantic, but let's leave the selection of that
policy to an explicit userspace decision.