From: Yang Shi <yang.shi@linux.alibaba.com>
Subject: Re: [v2 RFC PATCH 0/9] Another Approach to Use PMEM as NUMA Node
To: Michal Hocko
Cc: Keith Busch, Dave Hansen, mgorman@techsingularity.net, riel@surriel.com,
    hannes@cmpxchg.org, akpm@linux-foundation.org, dan.j.williams@intel.com,
    fengguang.wu@intel.com, fan.du@intel.com, ying.huang@intel.com,
    ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Thu, 18 Apr 2019 09:24:35 -0700

On 4/17/19 10:51 AM, Michal Hocko wrote:
> On Wed 17-04-19 10:26:05, Yang Shi wrote:
>> On 4/17/19 9:39 AM, Michal Hocko wrote:
>>> On Wed 17-04-19 09:37:39, Keith Busch wrote:
>>>> On Wed, Apr 17, 2019 at 05:39:23PM +0200, Michal Hocko wrote:
>>>>> On Wed 17-04-19 09:23:46, Keith Busch wrote:
>>>>>> On Wed, Apr 17, 2019 at 11:23:18AM +0200, Michal Hocko wrote:
>>>>>>> On Tue 16-04-19 14:22:33, Dave Hansen wrote:
>>>>>>>> Keith Busch had a set of patches to let you specify the demotion
>>>>>>>> order via sysfs for fun. The rules we came up with were:
>>>>>>> I am not a fan of any sysfs "fun"
>>>>>> I'm not hung up on the user facing interface, but there should be
>>>>>> some way a user decides if a memory node is or is not a migrate
>>>>>> target, right?
>>>>> Why? Or to put it differently, why do we have to start with a user
>>>>> interface at this stage when we actually barely have any real use
>>>>> cases out there?
>>>> The use case is an alternative to swap, right? The user has to decide
>>>> which storage is the swap target, so this operates in the same spirit.
>>> I do not follow. If you use rebalancing you can still deplete the
>>> memory and end up in swap storage. If you want to reclaim/swap rather
>>> than rebalance, then you do not enable rebalancing (by node_reclaim or
>>> a similar mechanism).
>> I'm a little bit confused. Do you mean we should *not* do reclaim/swap
>> in rebalancing mode? If rebalancing is on, node_reclaim just moves
>> pages around nodes, and kswapd or direct reclaim takes care of swap?
> Yes, that was the idea I wanted to get through. Sorry if that was not
> really clear.
>
>> If so, node reclaim on a PMEM node may rebalance pages to a DRAM node.
>> Should this be allowed?
> Why shouldn't it? If there are other vacant nodes to absorb that memory,
> then why not use them?
>
>> I think both Keith and I intended to treat PMEM as a tier in the
>> reclaim hierarchy. Reclaim should push inactive pages down to PMEM,
>> then to swap, so PMEM is a kind of "terminal" node. That is why he
>> introduced the sysfs-defined target node and I introduced N_CPU_MEM.
> I understand that. And I am trying to figure out whether we really have
> to treat PMEM specially here. Why is it any better than generic NUMA
> rebalancing code that could be used for many other use cases which are
> not PMEM specific? If you present PMEM as regular memory, then also use
> it as normal memory.

This also makes some sense. We just look at PMEM from a different point of
view. In this patchset, taking the performance disparity into account may
outweigh treating PMEM as normal memory.

A perhaps ridiculous idea: could we have two modes, one for "rebalancing"
and the other for "demotion"? A rough sketch of what I mean follows below.
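To make the two-mode idea more concrete, here is a small userspace C model
(not kernel code; the mode knob, struct and function names are all
hypothetical, just for illustration) of how per-node reclaim could either
rebalance cold pages to any vacant node, or demote them toward a terminal
CPU-less (PMEM) node before falling back to swap:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-node reclaim modes, mirroring the discussion above. */
enum reclaim_mode {
        MODE_REBALANCE, /* migrate cold pages to any vacant node */
        MODE_DEMOTE,    /* push cold pages down the tier; PMEM is terminal */
};

struct node {
        int id;
        bool has_cpu;           /* false would mean a PMEM-only node,
                                 * i.e. what N_CPU_MEM is meant to capture */
        long free_pages;
        enum reclaim_mode mode;
};

/* Pick where a cold page from @src should go; -1 means "reclaim/swap". */
static int pick_target(const struct node *src, const struct node *nodes,
                       int nr)
{
        for (int i = 0; i < nr; i++) {
                const struct node *n = &nodes[i];

                if (n->id == src->id || n->free_pages == 0)
                        continue;
                if (src->mode == MODE_REBALANCE)
                        return n->id;   /* any vacant node will do */
                if (src->mode == MODE_DEMOTE && src->has_cpu && !n->has_cpu)
                        return n->id;   /* DRAM -> PMEM only, never back up */
        }
        return -1;                      /* no target: fall back to swap */
}

int main(void)
{
        struct node nodes[] = {
                { .id = 0, .has_cpu = true,  .free_pages = 0,    .mode = MODE_DEMOTE },
                { .id = 1, .has_cpu = true,  .free_pages = 1024, .mode = MODE_DEMOTE },
                { .id = 2, .has_cpu = false, .free_pages = 4096, .mode = MODE_DEMOTE },
        };

        /* Node 0 is full: demote mode skips DRAM node 1, picks PMEM node 2. */
        printf("demote target: %d\n", pick_target(&nodes[0], nodes, 3));

        nodes[0].mode = MODE_REBALANCE;
        /* Same pressure in rebalance mode spills to the first vacant node. */
        printf("rebalance target: %d\n", pick_target(&nodes[0], nodes, 3));
        return 0;
}

In this model node 2 plays the PMEM role (memory but no CPU), so in demote
mode DRAM pressure only ever moves down the tier and the PMEM node itself
has nowhere to go but swap, while rebalance mode treats every vacant node
the same, which I think captures the difference between the two views
discussed above.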