Date: Sat, 23 Feb 2019 21:42:26 +0800
From: Fengguang Wu
To: "Aneesh Kumar K.V"
Cc: Michal Hocko, lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org, LKML, linux-nvme@lists.infradead.org
Subject: Re: [LSF/MM ATTEND] memory reclaim with NUMA rebalancing
Message-ID: <20190223134226.spesmpw6qnnfyvrr@wfg-t540p.sh.intel.com>
References: <20190130174847.GD18811@dhcp22.suse.cz> <87h8dpnwxg.fsf@linux.ibm.com> <20190223132748.awedzeybi6bjz3c5@wfg-t540p.sh.intel.com>
In-Reply-To: <20190223132748.awedzeybi6bjz3c5@wfg-t540p.sh.intel.com>

On Sat, Feb 23, 2019 at 09:27:48PM +0800, Fengguang Wu wrote:
>On Thu, Jan 31, 2019 at 12:19:47PM +0530, Aneesh Kumar K.V wrote:
>>Michal Hocko writes:
>>
>>> Hi,
>>> I would like to propose the following topic for the MM track. Different
>>> groups of people would like to use NVDIMMs as a low-cost & slower memory
>>> which is presented to the system as a NUMA node. We do have a NUMA API,
>>> but it doesn't really fit the "balance the memory between nodes" needs.
>>> People would like to have hot pages in regular RAM while cold pages
>>> might be on lower-speed NUMA nodes. We do have NUMA balancing for the
>>> promotion path, but there is nothing for the other direction. Can we
>>> start considering memory reclaim to move pages to more distant and idle
>>> NUMA nodes rather than reclaim them? There are certainly details that
>>> will get quite complicated, but I guess it is time to start discussing
>>> this at least.
>>
>>I would be interested in this topic too. I would like to understand
>
>So would I. I'd be glad to take part in the discussions if I can attend
>the slot.
>
>>the API and how it can help exploit the different types of devices we
>>have on OpenCAPI.
>>
>>IMHO there are a few proposals related to this which we could discuss
>>together:
>>
>>1. The HMAT series, which wants to expose these devices as NUMA nodes.
>>2. The patch series from Dave Hansen, which just uses PMEM as a NUMA node.
>>3. The patch series from Fengguang Wu, which prevents default
>>allocation from these NUMA nodes by excluding them from the zonelist.
>>4. The patch series from Jerome Glisse, which doesn't expose these as
>>NUMA nodes.
>>
>>IMHO (3) is suggesting that we really don't want them as NUMA nodes. But
>>since NUMA is the only interface we currently have to present them as
>>memory and to control allocation and migration, we are forcing
>>ourselves into NUMA nodes and then excluding them from default
>>allocation.
>
>Regarding (3), we actually made a default policy choice of
>"separating fallback zonelists for PMEM/DRAM nodes" for the
>typical use scenarios.
>
>In the long term, it's better not to build such an assumption into the
>kernel. There may well be workloads that are cost sensitive rather than
>performance sensitive. Suppose people buy a machine with tiny DRAM
>and large PMEM. In that case the suitable policy may be to
>
>1) prefer (but not bind) slab etc. kernel pages in DRAM
>2) allocate LRU etc. pages from either the DRAM or the PMEM node

In that case the point would be to *not* separate the fallback
zonelists for DRAM and PMEM.

>In summary, the kernel may offer the flexibility of different policies
>for different users. PMEM has different characteristics compared to
>DRAM, so users may or may not want it treated differently from DRAM
>through policies.
>
>Thanks,
>Fengguang