Date: Fri, 28 Dec 2018 10:31:58 +0800
From: Fengguang Wu
To: Christopher Lameter
Cc: Andrew Morton, Linux Memory Management List, Fan Du, kvm@vger.kernel.org,
    LKML, Yao Yuan, Peng Dong, Huang Ying, Liu Jingqi, Dong Eddie,
    Dave Hansen, Zhang Yi, Dan Williams
Subject: Re: [RFC][PATCH v2 08/21] mm: introduce and export pgdat peer_node
Message-ID: <20181228023158.v3zvbp3k7coodctv@wfg-t540p.sh.intel.com>
In-Reply-To: <01000167f14761d6-b1564081-0d5f-4752-86be-2e99c8375866-000000@email.amazonses.com>

On Thu, Dec 27, 2018 at 08:07:26PM +0000, Christopher Lameter wrote:
>On Wed, 26 Dec 2018, Fengguang Wu wrote:
>
>> Each CPU socket can have 1 DRAM and 1 PMEM node; we call them "peer nodes".
>> Migration between DRAM and PMEM will by default happen between peer nodes.
>
>Which one does numa_node_id() point to? I guess that is the DRAM node and

Yes. On our test machine, the PMEM nodes show up as memory-only nodes, so
numa_node_id() points to the DRAM node.

Here is the numactl --hardware output on a 2-socket test machine:

available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77
node 0 size: 257712 MB
node 0 free: 178251 MB
node 1 cpus: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103
node 1 size: 258038 MB
node 1 free: 174796 MB
node 2 cpus:
node 2 size: 503999 MB
node 2 free: 438349 MB
node 3 cpus:
node 3 size: 503999 MB
node 3 free: 438349 MB
node distances:
node   0   1   2   3
  0:  10  21  20  20
  1:  21  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10

>then we fall back to the PMEM node?

Falling back is possible, but it is not in the scope of this patchset. We
modified the fallback zonelists in patch 10 to simplify PMEM usage: with
that patch, page allocations on DRAM nodes won't fall back to PMEM nodes.
Instead, PMEM nodes will mainly be used via explicit numactl placement and
as a migration target. When there is memory pressure on a DRAM node, its
LRU cold pages will be demote-migrated by patch 20 to the peer PMEM node
on the same socket.

Thanks,
Fengguang
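
For illustration, here is a minimal sketch in kernel C of the peer-node
association discussed above. The "peer_node" field name comes from the
patch subject; the helper peer_node_of() and its shape are assumptions for
illustration only, not code quoted from the series.

/*
 * Hypothetical sketch, not the actual patch: each DRAM node's pgdat is
 * assumed to record the node id of the PMEM node on the same socket, so
 * demotion can find its target with a single lookup.
 */
#include <linux/mmzone.h>	/* pg_data_t, NODE_DATA() */

/* "peer_node" is the field named in the subject; this helper is assumed. */
static inline int peer_node_of(int nid)
{
	return NODE_DATA(nid)->peer_node;
}

On the 2-socket machine shown above, demotion under memory pressure on DRAM
node 0 would then target peer_node_of(0), i.e. whichever of the memory-only
PMEM nodes (2 or 3) sits on the same socket.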