Date: Wed, 27 Mar 2019 10:01:00 +0100
From: Michal Hocko
To: Yang Shi
Cc: mgorman@techsingularity.net, riel@surriel.com, hannes@cmpxchg.org,
 akpm@linux-foundation.org, dave.hansen@intel.com, keith.busch@intel.com,
 dan.j.williams@intel.com, fengguang.wu@intel.com, fan.du@intel.com,
 ying.huang@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/10] Another Approach to Use PMEM as NUMA Node
Message-ID: <20190327090100.GD11927@dhcp22.suse.cz>
References: <1553316275-21985-1-git-send-email-yang.shi@linux.alibaba.com>
 <20190326135837.GP28406@dhcp22.suse.cz>
 <43a1a59d-dc4a-6159-2c78-e1faeb6e0e46@linux.alibaba.com>
 <20190326183731.GV28406@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
On Tue 26-03-19 19:58:56, Yang Shi wrote:
>
> On 3/26/19 11:37 AM, Michal Hocko wrote:
> > On Tue 26-03-19 11:33:17, Yang Shi wrote:
> > >
> > > On 3/26/19 6:58 AM, Michal Hocko wrote:
> > > > On Sat 23-03-19 12:44:25, Yang Shi wrote:
> > > > > With Dave Hansen's patches merged into Linus's tree
> > > > >
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c221c0b0308fd01d9fb33a16f64d2fd95f8830a4
> > > > >
> > > > > PMEM can be hot-plugged as a NUMA node now. But how to use PMEM as a
> > > > > NUMA node effectively and efficiently is still an open question.
> > > > >
> > > > > A couple of proposals have been posted on the mailing list [1] [2].
> > > > >
> > > > > This patchset tries a different approach from proposal [1] to use
> > > > > PMEM as NUMA nodes.
> > > > >
> > > > > The approach is designed to follow these principles:
> > > > >
> > > > > 1. Use PMEM as a normal NUMA node: no special gfp flag, zone,
> > > > > zonelist, etc.
> > > > >
> > > > > 2. DRAM first/by default. No surprises for existing applications and
> > > > > default runs. PMEM will not be allocated unless its node is specified
> > > > > explicitly by NUMA policy. Some applications may not be very
> > > > > sensitive to memory latency, so they could be placed on PMEM nodes
> > > > > and have hot pages promoted to DRAM gradually.
> > > > Why are you pushing yourself into the corner right at the beginning? If
> > > > PMEM is exported as a regular NUMA node, then the only difference
> > > > should be performance characteristics (modulo durability, which
> > > > shouldn't play any role in this particular case, right?). Applications
> > > > which are already sensitive to memory access should use proper binding
> > > > already. Some NUMA topologies might have quite large interconnect
> > > > penalties already. So this doesn't sound like an argument to me, TBH.
> > > The major rationale behind this is that we assume most applications
> > > should be sensitive to memory access, particularly for meeting the SLA.
> > > The applications running on the machine may be opaque to us; they may
> > > be sensitive or non-sensitive. But assuming they are sensitive to
> > > memory access is safer from an SLA point of view. Then the "cold"
> > > pages could be demoted to PMEM nodes by the kernel's memory reclaim or
> > > other tools without impairing the SLA.
> > >
> > > If the applications are not sensitive to memory access, they could be
> > > bound to PMEM, or allowed to use PMEM explicitly (with allocation on
> > > DRAM preferred); then the "hot" pages could be promoted to DRAM.
> > Again, how is this different from NUMA in general?
>
> It is still NUMA; users can still see all the NUMA nodes.

No, the Linux NUMA implementation makes all NUMA nodes available by
default and provides an API to opt in for finer tuning. What you are
suggesting goes against that semantic, and I am asking why. How is a PMEM
NUMA node any different, in principle, from any other distant node?
-- 
Michal Hocko
SUSE Labs