Date: Sat, 23 Feb 2019 21:27:48 +0800
From: Fengguang Wu
To: "Aneesh Kumar K.V"
Cc: Michal Hocko, lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
        LKML, linux-nvme@lists.infradead.org
Subject: Re: [LSF/MM ATTEND] memory reclaim with NUMA rebalancing
Message-ID: <20190223132748.awedzeybi6bjz3c5@wfg-t540p.sh.intel.com>
In-Reply-To: <87h8dpnwxg.fsf@linux.ibm.com>
References: <20190130174847.GD18811@dhcp22.suse.cz>
 <87h8dpnwxg.fsf@linux.ibm.com>

On Thu, Jan 31, 2019 at 12:19:47PM +0530, Aneesh Kumar K.V wrote:
>Michal Hocko writes:
>
>> Hi,
>> I would like to propose the following topic for the MM track. Different
>> groups of people would like to use NVDIMMs as low-cost, slower memory
>> which is presented to the system as a NUMA node. We do have a NUMA API,
>> but it doesn't really fit the "balance the memory between nodes" need.
>> People would like to have hot pages in regular RAM while cold pages
>> might be on lower-speed NUMA nodes. We do have NUMA balancing for the
>> promotion path, but there is nothing for the other direction. Can we
>> start considering memory reclaim to move pages to more distant and idle
>> NUMA nodes rather than reclaim them? There are certainly details that
>> will get quite complicated, but I guess it is time to at least start
>> discussing this.
>
>I would be interested in this topic too. I would like to understand

So would I. I'd be glad to take part in the discussions if I can
attend the slot.

>the API and how it can help exploit the different types of devices we
>have on OpenCAPI.
>
>IMHO there are a few proposals related to this which we could discuss
>together:
>
>1. The HMAT series, which wants to expose these devices as NUMA nodes.
>2. The patch series from Dave Hansen, which just uses PMEM as a NUMA node.
>3. The patch series from Fengguang Wu, which prevents default
>allocation from these NUMA nodes by excluding them from the zonelists.
>4. The patch series from Jerome Glisse, which doesn't expose these as
>NUMA nodes.
>
>IMHO (3) suggests that we really don't want them as NUMA nodes. But
>since NUMA is the only interface we currently have to present them as
>memory and to control allocation and migration, we are forcing
>ourselves into NUMA nodes and then excluding them from default
>allocation.

Regarding (3), we actually made a default policy choice of "separating
fallback zonelists for PMEM/DRAM nodes" for the typical use scenarios.

In the long term, it's better not to build such an assumption into the
kernel. There may well be workloads that are cost sensitive rather
than performance sensitive. Suppose people buy a machine with tiny
DRAM and large PMEM; in that case the suitable policy may be to

1) prefer (but not bind) slab etc. kernel pages in DRAM
2) allocate LRU etc. pages from either the DRAM or the PMEM node

(A rough userspace sketch of the "prefer but not bind" idea is
appended after my signature.)

In summary, the kernel may offer the flexibility of different
policies for use by different users. PMEM has different
characteristics compared to DRAM; it may or may not be treated
differently from DRAM, depending on policy.

Thanks,
Fengguang
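
P.S. To make the "prefer (but not bind)" idea above concrete, here is
a minimal userspace sketch using mbind(2) with MPOL_PREFERRED. The
kernel-internal slab/LRU placement discussed above cannot be driven
from userspace like this, so this only illustrates the policy
semantics; the node numbers (0 for DRAM, 1 for PMEM) and the region
size are assumptions for illustration. Build with -lnuma.

/* prefer-dram.c: prefer a DRAM node, silently fall back elsewhere */
#include <numa.h>
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64UL << 20;		/* a 64MB anonymous region */
	unsigned long nodemask = 1UL << 0;	/* prefer node 0 (DRAM) */
	char *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * MPOL_PREFERRED: allocate from the given node while it has
	 * free memory, then fall back to other nodes (e.g. a PMEM
	 * node) -- unlike MPOL_BIND, which would reclaim or fail
	 * rather than fall back.
	 */
	if (mbind(buf, len, MPOL_PREFERRED, &nodemask,
		  sizeof(nodemask) * 8, 0) < 0) {
		perror("mbind");
		return 1;
	}

	memset(buf, 1, len);	/* fault the pages in under the policy */
	munmap(buf, len);
	return 0;
}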
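
The same preference can be requested for a whole process with
numactl, assuming the PMEM range is exposed as a memory-only
(CPU-less) NUMA node -- node numbers again being illustrative:

  numactl --hardware            # list nodes; PMEM shows up as a CPU-less node
  numactl --preferred=0 ./app   # prefer DRAM node 0, fall back when it fills up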